
Quality of Service Analysis for IPTV Provisioning
T. Janevski* and Z. Vanevski**
* University “Kiril i Metodij”, Faculty of Electrical Engineering and Information Technologies, Skopje, Macedonia
** Makedonski telekom, IP Networks Department, Skopje, Macedonia
tonij@feit.ukim.edu.mk; zoran.vanevski@telekom.mk

Abstract - IPTV is one of the killer Internet applications today. Due to the real-time nature of this service, Quality of Service is essential for its provisioning. In this paper we have performed QoS analysis for IPTV traffic using measurements from a real pre-operational commercial IPTV network. We have performed several experiments regarding the IPTV traffic, background best-effort traffic, the video buffer at the receiver, and the scheduling algorithm towards the user access links.

I. INTRODUCTION

Today IP is the common networking technology for all telecommunication services, including IPTV as one of the killer services, due to the transition from analogue to digital television. The most important challenges for IPTV providers are customer satisfaction and quality of experience [1]. A prerequisite for activating an IPTV service is implementing end-to-end QoS in the IP network. Currently, IPTV providers usually use DSL technology as the access network, which has bandwidth limitations. Their customers usually use a triple play service over DSL [2], including Internet connectivity, Voice over IP (VoIP) and IPTV. Since the capacity of the DSL link is limited, Internet traffic directly influences the IPTV traffic. Here, we perform analysis of IPTV traffic measurements obtained from a live pre-operational network regarding the characteristics of IPTV traffic as well as its quality parameters, objective and subjective. For measuring IPTV quality the Media Delivery Index [3-4] has been proposed, which is explained further in the paper. Also, an understanding of Quality of Experience (QoE) is given in [5]. Results on analysis of IPTV transmission using multicast and unicast techniques, as well as standardization efforts on certain QoE parameters for IPTV, are given in [6-19].

This paper is organized as follows. The next Section discusses the Quality of Experience. Challenges for transport of IPTV are outlined in Section 3. Section 4 covers QoS mechanisms. The network setup and measured IPTV traffic are given in Section 5. In Section 6 the results from the analysis of IPTV traffic measurements are shown. Finally, Section 7 concludes the paper.

II. QOE (QUALITY OF EXPERIENCE)

A. QoE (Quality of Experience)
Quality of Experience (QoE) is defined by ITU [1] as a common merit for the quality of a given service to the end user. QoE consists of subjective quality as experienced by the end user, known as the Mean Opinion Score (MOS), and Quality of Service (QoS), defined by ITU (ITU-T E.800) as the overall objective effect of the network on the performance of a given service (in this particular case, on IPTV). The components of QoE are shown in Fig. 1.

Figure 1. QoE components

Generally, there is a correlation between the subjective and objective merits. Regarding Internet services, the QoS parameters usually are packet losses, packet delay and jitter, as well as throughput or link utilization. IPTV belongs to the family of real-time Internet services, and hence we will refer to the QoS via such parameters.

B. IPTV measurements with MDI (Media Delivery Index)
The Media Delivery Index (MDI) is a set of measurements used for monitoring and troubleshooting networks carrying any IPTV traffic [3]. The video component of the triple play offering presents unique demands on the network because of its high bandwidth requirements and low tolerance to jitter and packet loss. The MDI measurement gives an indication of expected video quality, i.e. QoE, based on network-level measurements. It is independent of the video encoding scheme and examines the video transport itself. MDI is a set of two parameters. One is the DF (Delay Factor), which is an indicator of the size of the needed buffer, or of the interarrival time of IP packets. The other merit is media loss (MPEG packet losses in the IPTV case), called MLR (Media Loss Rate), which refers to the number of lost packets of a given transport flow in a given time period (usually, one second).

The Media Delivery Index for IPTV networks thus predicts expected video quality based on the IP network layer and is independent of the encoder type. MDI, in fact, is a combination of the media Delay Factor (DF) and the Media Loss Rate (MLR), which counts the number of lost MPEG packets in one second. DF refers to the time for which the IPTV flow is buffered on the receiving side at nominal bit rate when there are no packet losses. MDI is usually given in a table with two columns DF:MLR, or as a graph “window” in which the y-axis is used for DF and the x-axis is used for MLR.

MDI is defined by IETF in RFC 4445 [3]. It captures the influence of network jitter on video streams. However, although MDI indirectly provides merits which influence the QoS of the video, it is not itself a QoS merit for video traffic. MDI-DF as a merit can be used to determine the network nodes and links which have congestion, anytime and in any part of the network. Such results are very useful for network providers, because they can easily determine whether their buffer settings on different devices can provide the required MPEG TS bit rate.

Good MDI results do not mean that the quality of the picture is good, because they do not depend on the quality of the video signal. MDI values are a realistic expression of the problems of transmitting a video signal over any type of network.

ETSI technical report TR 101 290 [2] defines the group of standards and recommendations for digital video systems and their minimal recommended values. Therefore, before the commercial start of an IPTV network it is necessary to determine the bottlenecks and other possible problems in the network. Errors in the video signal at the receiving end can be detected using MOS techniques as well, but such methods are subjective and they do not locate the problem in the network.

The DF component of the MDI is a time value which indicates how many milliseconds the packets should be buffered to avoid jitter. It is calculated in the following way: after the arrival of a packet, calculate the difference between received and sent bytes. This is referred to as the MDI virtual buffer [4]:

Δ = |received_bytes − sent_bytes|    (1)

Then, in a given time interval, calculate the difference between the minimal and maximal values of the MDI buffer and divide by the bit rate:

DF = (max(Δ) − min(Δ)) / bitrate    (2)

As an example, the bit rate for IPTV over ADSL access links is usually 2.55 Mbps MPEG per video flow. Let us assume that in a time interval of one second the maximal volume of data in the virtual buffer is 2.555 Mbits and the minimal volume is 2.540 Mbits. Then, the delay factor (DF) can be calculated as follows:

DF = (2.555 Mb − 2.540 Mb) / (2.55 Mb/s) = 15 kb / (2.55 Mb/s) ≈ 6 ms    (3)

Hence, to avoid packet losses in the above example, the receiving buffer should be 15 kbits, which will introduce 6 ms delay.

Usually the merit MDI MLR is expressed in media packets per second. QoE standards for IPTV are still in the preparation phase, but a current recommendation considered by IPTV providers is WT-126 from the DSL Forum [5], which declares that maximal losses should be 5 packets in 30 minutes of a standard definition TV (SDTV) video flow, while for high definition TV (HDTV) it is 5 packets in 4 hours. Mathematically speaking, this means that the MLR value is 0.019 (here we are considering the worst case scenario when there are 7 MPEG packets in every IP packet).

C. MOS (Mean Opinion Score)
The quality of a transmitted video signal is subjective, because clarity and clearness are perceived differently by each TV viewer. Different encoding schemes for IPTV video also result in different quality experience. Common objective classes which are used to mark the quality of the transmitted video are referred to as the MOS (Mean Opinion Score). Using the MOS factor, the viewers grade the video quality from 1 (the worst quality) to 5 (the best quality), as shown in Table I. The MOS value is determined from the average grades of a large number of viewers. Also, MOS values are correlated to QoS parameters as well (refer to Table I).

TABLE I. MOS VALUES CORRELATED TO QOS PARAMETERS

MPEG packet loss | Packet loss [%] | Quality description                    | MOS
0-3              | < 20            | The best                               | 5
4-13             | 20-100          | Very good (periodical freezing)        | 4
14-23            | 100-160         | Good (loss of video frames)            | 3
24-33            | 160-230         | Bad (not clear picture)                | 2
34-43            | > 230           | Worst (frozen picture or black screen) | 1

III. CHALLENGES FOR TRANSPORT OF REAL-TIME TRAFFIC

Elastic contents and traffic are not sensitive to delay or limited packet loss; examples are www, file transfer, electronic mail etc., i.e. so-called non-real-time traffic. On the other side, real-time traffic is delay sensitive and less tolerant to packet losses. IPTV traffic is real-time traffic, where the IGMP control messages are even more sensitive to packet delays. The challenge in IP networks is to provide simultaneous transmission of the elastic and IPTV traffic without any significant packet delays or losses.

Common for all traffic types is the buffer memory where packets are scheduled prior to transmission over outgoing links. The packets of a video stream, on their way from the encoder to the end user, travel via many heterogeneous network nodes and devices, and each one of them has its own network buffers, application buffers or server buffers. Some of them have resource management utilities to provide lower waiting time in the buffer memory. Hence, the first recommendation is to minimize the number of network nodes which are in the path of the video flow from the source to the destination. Most critical are the buffers in the backbone network and the access network, because they are used for heterogeneous traffic. In this part of the paper we will focus on the mutual dependence of elastic Internet traffic on one side and video traffic on the other. It is well known that TCP traffic accounts for most of the Internet traffic today, due to www in the first place. However, TCP uses larger buffers due to the requirement of no errors at the application layer for this kind of traffic, which results in retransmissions of all lost or damaged TCP segments.
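As an aside, the Delay Factor arithmetic of equations (1)-(3) in Section II.B can be sketched in a few lines of code. This is our own illustrative sketch; the function and variable names are not taken from RFC 4445 or any monitoring tool.

```python
# Illustrative sketch of the MDI Delay Factor (DF) computation:
# DF is the spread of the MDI virtual buffer over a measurement
# interval, divided by the nominal stream bit rate, in milliseconds.

def delay_factor_ms(virtual_buffer_bits, stream_rate_bps):
    delta = max(virtual_buffer_bits) - min(virtual_buffer_bits)
    return 1000.0 * delta / stream_rate_bps

# Worked example from Section II.B: the virtual buffer of a 2.55 Mbps
# flow oscillates between 2.540 and 2.555 Mbit within one second.
samples = [2_540_000, 2_547_500, 2_555_000]
print(round(delay_factor_ms(samples, 2_550_000), 1))  # prints 5.9, i.e. the ~6 ms of (3)
```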
Contrary to this, video streaming traffic uses UDP, because there is no real sense in retransmitting lost data of a real-time stream: it is useless after the given moment of presentation of the content to the end user. Hence, all streaming traffic is based on UDP, which causes less delay and requires smaller buffers. Such different buffer requirements of TCP and UDP traffic are addressed either by using smaller FIFO buffers, which give better utilization of the link capacities, or by using dedicated buffers of different sizes (dependent upon the traffic type) for each traffic class (in the latter case it is supposed that traffic is classified into a number of traffic classes). However, the capacity of each link is limited, and therefore each link will always add packet delay and packet loss to the delay budget and loss budget, respectively. Due to this discussion, we have made measurements of the video traffic using different types of background Internet traffic in different scenarios, and then performed analysis of packet delay and packet loss.

The challenge remains in the field of resource reservations and granularity of flows. Integrated Services (IntServ) use dynamic resource reservations per flow, thus guaranteeing the Quality of Service (QoS) per flow. In such a case the application must use end-to-end signaling to reserve the resources (similar to No.7 signaling in PSTN) before the start of data transmission. The signaling for IntServ is done with RSVP (Resource Reservation Protocol). However, today IPTV providers usually use DiffServ (Differentiated Services), which has less refined QoS support, per traffic class, and not per flow as is the case with IntServ. When using DiffServ, each packet is marked with a Type of Service (or DiffServ Code Point) associated with a given traffic class from a limited number of traffic classes in the network. All packets belonging to the same traffic class in a DiffServ domain are served with the same “priority” in network nodes, which use FIFO scheduling within packets belonging to the given traffic class.

A. Jitter
Jitter is the variation in end-to-end packet delay with respect to the average delay. Packets arriving at a destination at a constant rate exhibit zero jitter; packets with an irregular arrival rate exhibit non-zero jitter. After traversing the network separating the data source and the destination, and being queued, routed and switched by various network elements, packets are likely to arrive at the destination with some rate variation over time. In any event, if the instantaneous data arrival rate does not match the rate at which the destination is consuming data, the packets must be buffered upon arrival. A jitter buffer should be implemented, whose main function is to buffer IPTV packets until all packets are transferred. However, this type of buffer increases the end-to-end transport latency; hence it is not recommended to use values larger than 500 ms.

B. Packet Loss
Packet losses occur when packets do not reach the destination, due to any cause on their path. In IP networks today, all packets which carry video information are treated as data traffic, i.e. these packets will be dropped in the case of congestion in network nodes just as any other (non-real-time) data packet (e.g. www, e-mail etc.). But non-real-time traffic has no strict end-to-end delay requirements such as real-time video traffic has. A dropped video packet cannot be retransmitted by the source, because it is useless, as discussed before.

Loss of MPEG packets which belong to an “I” or “P” frame will decrease the video quality more than loss from a “B” frame. A video IP packet contains a maximum of 7 MPEG packets and its loss can affect any frame type, but the loss of an “I” frame will affect the whole GOP (Group of Pictures). Also, this type of loss will last as long as the GOP, i.e. usually 0.5 - 1 second.

DSL access technology with interleaving has uncorrected burst loss events of typically 8 ms and 16 ms. The “ripple effect” is the result of rounding to an integer number of lost/corrupted IP packets. A typical DSL burst loss has a duration of 8 ms. Here we consider an MPEG-4 transport stream at a bit rate of 3 Mbps:

Total MPEG packets/s = (3 Mbps / 8 bits) × (1 / 188 B) = 1994.7 MPEG packets/s

Total IP packets/s = 1994.7 / 7 = 285 IP packets/s

Using the above results, a loss of 8 ms corresponds to:

IP packet loss = 285 IP packets/s × 0.008 s = 2.28 IP packets

An IP packet is lost if any part of it is lost, so 2.28 is rounded to the next integer, 3 IP packets; since burst losses are not necessarily aligned to IP packet boundaries, this is further rounded to 4 IP packets. In the “worst case” scenario, if all IP packets carry the maximum of 7 MPEG packets, 28 MPEG packets will be lost.

Figure 2. Unsatisfied quality example, a) and b) (captured with Visual MPEG Analyzer)

IPTV providers should take into consideration that the main role in customer QoE is played by the following encoding parameters:

- Length of GOP (Group of Pictures): the advantage of using a longer GOP for IPTV is a lower bit rate of the video stream, but from the customer perspective this will cause longer bad-quality video scenes.

- Video frame frequency: typical video frame rates are 30-60 fps. If an IPTV provider uses lower frame rates, it will affect the quality of dynamical scenes, as shown in Fig. 2.

- GOP structure: if a channel is video encoded with a “BBBP” GOP structure, then there is a greater possibility that lost packets belong to the “B” frame type. This type of GOP structure is recommended to all future IPTV providers.
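The burst-loss arithmetic of Section III.B (a 3 Mbps stream, 188-byte MPEG TS packets, 7 TS packets per IP packet, an 8 ms DSL burst) can be sketched as follows. The function name and structure are ours, for illustration only.

```python
import math

MPEG_TS_BYTES = 188   # size of one MPEG transport-stream packet
TS_PER_IP = 7         # TS packets carried per IP packet (worst case)

def lost_ip_packets(stream_bps, burst_seconds, aligned=False):
    ts_per_sec = stream_bps / 8 / MPEG_TS_BYTES   # ~1994.7 at 3 Mbps
    ip_per_sec = ts_per_sec / TS_PER_IP           # ~285
    fractional = ip_per_sec * burst_seconds       # 2.28 for an 8 ms burst
    lost = math.ceil(fractional)                  # a partly hit packet is fully lost -> 3
    # a burst rarely starts on an IP packet boundary, so one more packet is hit
    return lost if aligned else lost + 1

lost = lost_ip_packets(3_000_000, 0.008)
print(lost, lost * TS_PER_IP)  # prints 4 28
```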
IV. QOS MECHANISMS

Implementing QoS in any part of the IP network will increase the QoE for all IPTV customers. There are several QoS mechanisms which are recommended for implementation on every segment of an IPTV network. Each mechanism uses unique parameters, standards and technology. We recommend modular and cyclic QoS management on the network infrastructure, presented in Fig. 3. This type of QoS management consists of four types of QoS mechanisms:

- Bandwidth Allocation
- Bandwidth Management
- Traffic Prioritization
- Traffic Provisioning

Network management is the main approach for all four mechanisms. IPTV providers, during implementation of QoS management, shall follow the hierarchy given above (starting from Bandwidth Allocation). If a provider implements Traffic Provisioning, and QoS is still poor in its network, then the provider should start again with Bandwidth Allocation, and so on.

Figure 3. Modular and cyclic QoS management on network infrastructure

A. Traffic Prioritization
Traffic prioritization is useful for IPTV when IPTV traffic is mixed with non-real-time background traffic and at the same time the overall link capacity is higher than the peak bit rate of the IPTV stream (e.g. a 3 Mbps peak bit rate IPTV stream over an 8 Mbps ADSL access link).

If we use classification on the Type of Service field [6], which contains 8 bits (given in Fig. 4), then one should use for traffic classification the three p-bits (i.e. precedence bits) of the ToS field in the IP packet header carrying IPTV content in the packet payload.

Figure 4. ToS field in IP header

Priority queuing and routing are dedicated to packet scheduling of different traffic classes with different priorities (e.g., to serve first all VoIP packets, then all IPTV packets, and then to continue with all other IP packets in the buffers). However, caution is needed because in such a case there is a possibility for higher priority classes to monopolize the bandwidth. Therefore, classification can be considered mainly on the user access links, not in the backbone network.

B. Traffic Provisioning
Traffic provisioning provides QoS by using specified bit rates which are guaranteed to the end subscribers. In the case of IPTV provisioning over ADSL it is convenient to limit the bandwidth between the DSLAM and the STB (Set-Top Box) for elastic Internet traffic and for IPTV traffic. In particular, the bandwidth that is not dedicated to IPTV traffic should be redirected to a separate buffer, as shown in Fig. 5. Then we have a separate buffer for IPTV and a separate buffer for the other traffic types.

Figure 5. IPTV traffic provisioning in separate buffer

C. Buffering
Regarding the buffering and scheduling of packets and their practical realization with off-the-shelf products, there are several possibilities [7]:

- First-in, first-out (FIFO) queuing
- Priority queuing (PQ)
- Guaranteed throughput
- WFQ (Weighted Fair Queuing), which can be:
  - Flow based
  - Class based

The simplest mechanism is FIFO. It is the default scheduling mechanism if nothing else is specified. We have already discussed traffic prioritization at the beginning of this section; it is a potential solution, but bandwidth monopolization by higher priority classes is a problem.

Guaranteed throughput means that a constant bit rate is allocated to a given IPTV flow and cannot be used by other traffic classes. However, such a scheme provides low utilization of network links and does not allow statistical multiplexing, which is a common concept for heterogeneous Internet traffic.
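Classification on the three precedence bits of the ToS field, as described in Section IV.A, amounts to reading the top three bits of the ToS byte. A minimal sketch, with our own function name:

```python
def ip_precedence(tos_byte):
    """Top three bits of the ToS byte are the p-bits (values 0-7; 6 and 7 reserved)."""
    return (tos_byte >> 5) & 0b111

# e.g. a ToS byte of 0b101_00000 carries precedence 5,
# while 0x00 is ordinary best-effort traffic (precedence 0)
print(ip_precedence(0b10100000), ip_precedence(0x00))  # prints 5 0
```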

D. WFQ (Weighted Fair Queuing)
The value of the WFQ algorithm comes from the usage of the three precedence bits of the ToS field to achieve better service, by classifying IP packets and queuing and scheduling them according to the precedence bits (p-bits). These p-bits can have values from 0 to 7 (6 and 7 are reserved).

WFQ is efficient because it can utilize the whole available bandwidth, starting from higher priority traffic flows and going to the lower priority ones. In practice WFQ works with two mechanisms - IP precedence and the Resource Reservation Protocol (RSVP), which provide QoS and guaranteed service, respectively.

WFQ allocates a weight coefficient to each flow, which determines the scheduling of each buffered packet. According to this scheme, smaller values provide better service. For example, traffic with IP Precedence value 5 receives a smaller weight than traffic with IP Precedence value 3, which then results in priority of one type of packets over the other. The weight coefficient, i.e. the “weight”, is a number calculated from the value set in the IP precedence field of the IP packet headers. These values are used by the WFQ algorithm to determine when a given packet should be sent.

The capacity distribution calculation for WFQ goes in the following manner: for instance, if we have one flow for each precedence level on a given network interface (that is, 8 flows, each with a different precedence mark), then each flow will be allocated (precedence + 1) parts of the link bandwidth, out of 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36 parts in total. The flows will accordingly get 8/36, 7/36, 6/36, 5/36 of the link capacity, and so on.

V. IPTV TRAFFIC

IPTV providers provide mainly two types of traffic: 1) multicast for delivering video channels and 2) unicast dedicated to channel changing, video on demand and other applications. Here, we capture traffic traces from a live network. For example, in Fig. 6 we illustrate a subscriber performing a channel change (stream switch) in a typical GOP-based IPTV system. In this example the subscriber is synchronized to channel 1. At a particular time, the subscriber issues a switch command to channel 2, which triggers an IGMP leave for multicast stream group 1 and a join to multicast stream group 2. Then, the subscriber starts to receive multicast stream 2.

Figure 6. Channel changing and IGMP delay

The network setup which is used for the IPTV measurements is shown in Fig. 7. It is used for both unicast and multicast delivery of IPTV streams from the IPTV stream generator to the Set-Top Box (i.e. the end user).

Figure 7. Network setup for IPTV measurements

Instant Channel Change (ICC) uses a buffering technique on channel changing servers. This method creates multiple unicast streams that are sent to the customer along with the broadcast multicast, and the content gets buffered for the amount of time that is the anticipated multicast establishment time. So when the user requests a channel swap, the receiver immediately switches to the buffered content while it proceeds with the new multicast request. Channel changing servers maintain sliding bursts of live TV service streams for some period of time. The exact time depends on the bit rate of the stream, the structure of the key “I” frames in the GOP (Group of Pictures), and the delay characteristics of the stream. Overhead is the difference (in percentage) between the multicast stream and the unicast burst during the channel change.

An IPTV provider should consider tuning of the overhead burst parameter as a very crucial function. Overhead has a direct influence on two points in the network: the limited bandwidth at the customer premises and the utilization of backbone links. The first point concerns the worst case scenario, when the customer has two STBs which change channels at the same time, so the overhead directly depends on the maximum bandwidth at the customer device. The second point is that the unicast traffic generated during channel changing must be transported through the IP backbone links. The capacity of a Digital Subscriber Line (DSL) channel is limited. Engineering a network to support channel change as described above requires several Mbps of reserved bandwidth. Such a configuration will either reduce the DSL serving area, reduce the number of video streams that can be delivered, and/or compromise other services during channel changing periods.

If an IPTV provider is using DSL technology, then it should set a very low overhead; however, the distributed unicast traffic is then very high, so it will load the backbone links. IPTV providers should make calculations and measurements on the access network, and the first input for the calculation of the overhead should be the DSL bandwidth. A reasonable value for the overhead is 20%, with an average bit rate of 3.2 Mbps and a burst time of around 10 seconds for a standard definition TV stream, as we can see from the captured IPTV traffic shown in Fig. 8.

Dimensioning of the bandwidth demand is based on the measurements given in Table II, for different percentages of ICC overhead. We have measured a multicast stream with an average bit rate of 2.37 Mbps and a maximum peak of 2.72 Mbps, where we can conclude that the overhead is the percentage of the peak-average bit rate difference for the IPTV stream. The multicast stream consisted of 85% H.264 video stream, 8% MPEG-1 audio stream and 4% teletext stream, as shown in Fig. 9.
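One reading of the overhead definition above (the percentage of the peak-average bit rate difference) can be sketched as follows; the function name and the interpretation as a percentage relative to the average rate are ours.

```python
def icc_overhead_percent(peak_mbps, average_mbps):
    """Overhead as the peak-to-average bit rate difference, in percent of the average."""
    return 100.0 * (peak_mbps - average_mbps) / average_mbps

# Measured multicast stream from Section V: 2.37 Mbps average, 2.72 Mbps peak.
print(round(icc_overhead_percent(2.72, 2.37), 1))  # prints 14.8
```

Under this reading, the measured stream needs roughly 15% overhead, which is consistent with the 20% value the paper recommends as a reasonable, conservative setting.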
Ch_overhead_30% 500000 MC_traffic
4,00 unicast
Ch_overhead_20%
450000
AGR_traffic
3,50 Ch_overhead_10%
400000

Traffic Intensity (Bytes/100ms)


3,00 350000
bitrate [Mbps]

2,50 300000

250000
2,00
200000
1,50
150000

1,00 100000

0,50 50000

0
0,00
0 25000 50000 75000 100000
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Time (ms)
time [sec]
Figure 10. Measured IPTV traffic traces
Figure 8. Measured unicast bursts for different value of overhead
1,0

MC_Traffic
TABLE II. ICC_UNIC
0,8
ICC BURSTS BITRATE AND TIME AGR_traffic

Ch. Overhead 30% 20% 10% 0,6

Bit rate [Mbps] 3,46 3,19 2,92

Correlation coeficient
0,4

Unicast Bytes[MB] 3,84 4,78 5,11


0,2
Time [s] 10 12 14
0,0
100,00%
1 101 201 301 401 501 601 701 801 901 1001

90,00%
-0,2

80,00%
-0,4
70,00% lag k
H.264 Video
60,00% Audio Figure 11. Autocorrelation Function for Multicast, Unicast and
50,00%
TeleText Aggregated IPTV traffic
ECM
40,00% PAT
PMT 2,50E+06
30,00%
1500
20,00% 1000
500
2,00E+06
10,00%

0,00%
M

Bit rate [bps]


T
eo

xt

T
d io

PM

1,50E+06
EC

PA
Te
id

Au
V

le
Te
64
.2
H

Figure 9. Histogram of IPTV captured traffic per program id 1,00E+06

We made traces scanning live IPTV traffic on edge 5,00E+05


router network interface towards clients’ side.
Measurements and analyses are made per traffic type,
0,00E+00
multicast from all popular channels and unicast from all 1 12,5 25 37,5 50 62,5 75 87,5 100

instant popular channels changing as well as the Link utilization [%]

aggregate traffic (segments of IPTV traces are shown in Figure 12. Bit rate of the IPTV stream for different link utilization
Fig.10). Autocorrelation functions of mentioned IPTV
traffic types are shown on Fig. 11. The autocorrelation of
In Fig. 13 we show packet loss ratio dependence upon
unicast and aggregate traffic decays hyperbolically rather link utilization. As one can expect packet loss increases as
than exponentially fast. This shows that IPTV traffic is link utilization increases, which is usually the case in
self-similar, i.e. bursty. Therefore higher link utilization packet networks with bursty traffic such as IPTV.
leads to lower bit rate of the IPTV stream (Fig. 12). Significant packet losses start after reaching link
utilization of 55-60%, which leads to packet losses of
VI. ANALYSES OF IPTV TRAFFIC MEASUREMENTS IPTV traffic. However, this means that also there are
After we have defined all QoS network parameters as packet losses in background TCP based traffic, which
well perform IPTV traffic measurements, we perform causes congestion avoidance mechanism in TCP or slow
analyses of the measurements data. We present four start (depending upon TCP version and number of lost
measurement scenarios with aim to analyze the influence segments within a congestion window of the TCP), which
of the data traffic on the IPTV traffic. The link capacity is provides back-off of the TCP streams leaving more room
limited and therefore the goal is to determine the influence for UDP-based traffic such as IPTV is our analyses.
of the background Internet traffic on the IPTV traffic Therefore, there is lowering the loss ratio after reaching
when there is no traffic classification. first peak at 65% link utilization, which (the lower value
of packet loss ratio) occurs near 85% link utilization. Of 14%

course, if link utilization continues to higher values (over


12%
85%) the packet loss ratio continues to increase
exponentially. 10%

IP Packet loss [%]


Further, we have made experimental measurements 8%
using different sizes of IP packets and different flow data
rates. Using WAN killer packet generator we have 6%

generated data packets with sizes of 50, 200, 1000 and 4%


1500 bytes. Then, we have made 9 measurements with
precisely defined throughput going from 0 up to 8 Mbps 2%

with step of 1 Mbps. The measurement results for MDI


0%
are given in Table III. The results show that when link 1% 12,5% 25% 37,5% 50% 62,5% 75% 87,5% 100%

utilization is low then packet size does not influence MDI. Link utilization [%]
However, for higher link utilization values, smaller packet Figure 13. Packet loss ratio of IPTV flow versus link utilization
sizes cause higher MDI while larger packets lead to
smaller MDI. The results show the MDI dependence upon 300
MDI DF 1500
1000
the background packets sizes and the influence on IPTV 500
250
traffic regarding the link utilization. One may conclude
that larger background packet which are multiplexed on 200
the same link with a given IPTV flow have smaller
influence on the IPTV traffic compared to smaller

DF
150
background IP packets. For large IP packets of 1500 bytes
the influence on IPTV traffic was insignificant. On the 100
other side, background IP packets with sizes of 50 bytes
have significant impact on the MPEG packets (IPTV 50

traffic) even at 12% link utilization, something that can be


noticed from Fig.14 and Fig. 15. 0
1 12,5 25 37,5 50 62,5 75 87,5 100
If we analyze such behavior from buffer point of view, Link utilization [%]

what are the reasons for these results, we may conclude Figure 14. MDI DF values for different link utilizations and different
that the reason is in that the number of transmitted IP sizes of background IP packets
packets, with WFQ applied on buffer’s side, is higher for
smaller than for larger packets. Larger packets have larger 1600
MDI DF
serving time thus producing higher values for packet delay
1400
and jitter. In all this measurements IPTV traffic uses 1500
1000
packet size of 1400 bytes. Larger packet sizes of IPTV 1200 500
50
traffic increases the probability of congestion of IPTV 1000
traffic when it is mixed with background traffic.
DF

800
Also, the parameters which influence the quality of
IPTV flows are worsening with link utilization increase, 600

where at 75% link utilization we have obtained very “bad” 400


MDI DF values.
200
Using the obtained results for packet sizes of 50 and 1500 bytes, one can easily notice the difference. That is, if there are users in the network who generate packets with sizes of 50 bytes, then the performance and quality of the IPTV traffic will be significantly degraded. On the other side, packets with sizes of 500 or 1500 bytes (usually originating from WWW or FTP traffic) give the opposite results. However, in practice there is no non-real-time traffic with packets as small as 50 bytes; such traffic is usually an anomaly produced by Denial of Service (DoS) attacks, something that should be expected in IPTV networks as well. These conclusions can be drawn from Fig. 16.

Figure 15. MDI DF values for different link utilizations and different sizes of IP packets (DoS)

If we use the MDI DF results obtained from the user access link, we may calculate the maximum retransmission time which causes IPTV quality degradation. The retransmission occurs during the unicast burst. From the figures this value is 150 ms at 75% link utilization. In this way we may calculate the real maximum value for the video buffer, i.e.:

Time for exhausting the buffer = (maximal retransmission time * 100) / (percentage of unicast burst)

However, with the aim to provide a stable video buffer value for SD flows, IPTV providers usually set the video buffers in STBs to 1000 ms.

The results in Table IV show significant degradation of IPTV quality at 75% link utilization, and even worse degradation at higher link utilizations. The same behavior can be seen in Fig. 16. Again, the exception is the case with very small IP packets (here with a packet size of 50 bytes), which does not influence the losses of MPEG video packets.

Regarding the MOS values given in Table I, after reaching a link utilization of 75% the MOS value decreases from 4 to 2 when there is no QoS mechanism used on the link. The solution to achieve higher MOS values is the usage of the WFQ mechanism, which resulted in MOS values in the range 4-5 at a link utilization of 87.5%.
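The buffer-exhaustion relation above can be checked numerically. In this sketch the 150 ms maximal retransmission time is the measured value at 75% link utilization; the 15% unicast-burst share is an assumed illustrative figure, not a value stated in the paper.

```python
# Buffer-exhaustion relation from the text:
#   time for exhausting the buffer = (max retransmission time * 100) / burst %
# The 15% unicast-burst share below is an assumed illustrative value.

def buffer_exhaustion_time_ms(max_retransmission_ms, unicast_burst_percent):
    return max_retransmission_ms * 100.0 / unicast_burst_percent

print(buffer_exhaustion_time_ms(150, 15))  # 1000.0
```

Under this assumed 15% burst share, the result coincides with the 1000 ms video buffer that IPTV providers typically configure in STBs for SD flows.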
VII. CONCLUSION

In this paper we have performed analyses of the QoS parameters for IPTV traffic. As a merit we have used the Media Delivery Index (MDI), which is a standardized and unified merit for QoS in IPTV networks. Also, we have performed analyses regarding the scheduling mechanisms for IPTV traffic.

The results showed that we may use IPTV with satisfactory quality even at higher loads, up to 85% link utilization. Packet losses are even lower at 85% link utilization than at 65% utilization.

Also, the packet sizes of the background traffic influence the IPTV quality. Smaller IP packets (for background elastic traffic) cause higher degradation of the IPTV traffic, and vice versa. Hence, IPTV traffic can be efficiently multiplexed with non-real-time traffic such as WWW, email, etc., using different scheduling schemes. However, DoS attacks with small-size IP packets can cause significant degradation of the IPTV stream.

Efficient separation of IPTV traffic from other traffic types on the same link can be achieved with WFQ by using the precedence bits in the Type of Service field of the IP headers. However, with WFQ the user will experience noticeable quality degradation after reaching a link utilization of 75%. The results showed that bad MDI DF values can be compensated by using larger buffers for video packets.

Regarding the losses of video packets and the delay parameters, IPTV packets should be classified, and for that purpose the class marking of IPTV packets should be done closer to the stream source (i.e., the IPTV platform) in order to achieve certain end-to-end QoS.

Figure 16. MPEG packet losses - MLR values for different link utilizations and different packet sizes

TABLE III. MDI DF PARAMETERS OF IPTV FLOW USING DIFFERENT PACKET SIZES AND DIFFERENT LINK UTILIZATION

Size [B] \ Utilization [%]      1        25        50        75       100
1500                        73.69     58.59     83.09     96.26    286.05
1000                        76.81     98.03     68.87    132.52    290.63
500                         65.76     80.68     76.58    150.11    261.23
50                          72.06    884.03    1430.8   1455.54    1469.4

TABLE IV. MDI MLR MEASUREMENTS: INFLUENCE OF DIFFERENT TYPES OF DATA TRAFFIC ON IPTV TRAFFIC AT DIFFERENT PERCENTAGES OF LINK UTILIZATION

Size [B] \ Utilization [%]      1        25        50        75       100
1500                            0         0         0     41.50    197.20
1000                            0         0         0    148.40    328.60
500                             0         0         0    203.36    483.60
50                           4.00     64.56     37.33    123.00       155

REFERENCES

[1] T. Rahrer (Nortel), R. Fiandra (FastWeb), S. Wright (BellSouth), "TR-126: Triple-play Services Quality of Experience (QoE) Requirements and Mechanisms for Architecture & Transport", DSL Forum, February 21, 2006.
[2] "Migration to Ethernet Based DSL Aggregation", TR-101, Architecture and Transport Working Group, DSL Forum, May 2004.
[3] J. Welch and J. Clark, "A Proposed Media Delivery Index (MDI)", IETF RFC 4445, IneoQuest Technologies / Cisco Systems, April 2006.
[4] "IPTV QoE: Understanding and Interpreting MDI Values", Agilent Technologies, Inc., 2006.
[5] T. Rahrer (Nortel), R. Fiandra (FastWeb), S. Wright (BellSouth), "Triple-play Services Quality of Experience (QoE) Requirements and Mechanisms", WT-126, DSL Forum, 2006.
[6] P. Almquist, "Type of Service in the Internet Protocol Suite", IETF RFC 1349, July 1992.
[7] "QoS Solutions for PPPoE and DSL Environments", Document ID: 23706, Cisco Systems, Aug. 2005.
[8] ITU-T Focus Group on IPTV, "IPTV Focus Group Proceedings - Architecture and Requirements", ITU-T Handbook, 2008.
[9] ITU-T Focus Group on IPTV - SG2, "Operational Aspects of Service Provision, Networks and Performance", ITU-T Handbook, 2008.
[10] B. Williamson, "Developing IP Multicast Networks", Cisco Press, 2004.
[11] M. Cha, W. A. Chaovalitwongse, Z. Ge, J. Yates, and S. Moon, "Path Protection Routing with SRLG Constraints to Support IPTV in WDM Mesh Networks", in Proc. IEEE Global Internet Symposium, Barcelona, Spain, April 2006.
[12] M. Cha, G. Choudhury, J. Yates, A. Shaikh, and S. Moon, "Case Study: Resilient Backbone Network Design for IPTV Services", in Proc. International Workshop on IPTV Services over World Wide Web, May 2006.
[13] P. Osterberg, "Fair Treatment of Multicast Sessions and Their Receivers - Incentives for More Efficient Bandwidth Utilization", Doctoral Thesis, Department of Information Technology and Media, Mid Sweden University, Sundsvall, Sweden, 2007.
[14] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC Video Coding Standard", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 7, pp. 560-576, July 2003.
[15] G. Van der Auwera, P. T. David, and M. Reisslein, "Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension", IEEE Transactions on Broadcasting, Vol. 54, Issue 3, Part 2, pp. 698-718, Sept. 2008.
[16] F. Wan, "Traffic Modeling and Performance Analysis for IPTV Systems", PhD Thesis, Dept. of Electrical and Computer Engineering, University of Victoria, British Columbia, Canada, Aug. 2008.
[17] A. Lie, "Enhancing Rate Adaptive IP Streaming Media Performance with the Use of Active Queue Management", Doctoral Thesis, Trondheim, April 2008.
[18] D. E. Smith, "IPTV Bandwidth Demand: Multicast and Channel Surfing", in Proc. IEEE INFOCOM, Anchorage, Alaska, USA, May 2007.
[19] M. Cha, P. Rodriguez, S. Moon, and J. Crowcroft, "On Next-Generation Telco-Managed P2P TV Architectures", in Proc. International Workshop on Peer-to-Peer Systems (IPTPS), February 2008.