Abstract
Quality of Service (QoS) requirements are very important to multimedia applications. Ensuring that these requirements are met is key to many of today's applications, and creating new technologies to meet stricter requirements will help enable new devices in the future. This is a study of the QoS requirements for multimedia applications.
Introduction
When the internet was designed, it was intended to be used for the transfer of text or other simple data types, where the level of service did not matter. The only thing that mattered was reliability. Concepts such as delay, jitter and packet loss percentages did not affect the service; the only thing that mattered was that the service existed. When the internet started being used for applications such as internet telephony, streaming live video and even remote surgery, things like jitter and packet loss began to matter. Multimedia, unlike text, needs service guarantees or the services become useless. Trying to carry on a conversation in which words arrive out of order would be quite frustrating. Thus, the introduction of multimedia into the internet led to the concept of quality of service.
[Table: voice codecs (G.711, GIPS Enhanced G.711, G.723.1, G.728, G.729) with miscellaneous notes such as voice activity detection and optimization for low delay; the numeric columns were lost in extraction.]
Once the voice is encoded with a particular codec, it is transmitted over the internet using the Internet Protocol (IP).
1. Voice
With the growth of the internet and easier access to high speed internet connections, more and more people are turning to computer networks to handle their long distance voice communication instead of the traditional telephone system. Using the internet to replace standard telephone lines has many advantages, one of the biggest being that using the internet for voice
From Figures 1 and 2, one can see that the delay introduced by a codec alone can approach almost 120 ms. This number does not include the other delays introduced by the network, such as propagation delay, queuing delay and transmission delay. Based on ITU recommendation G.114, the delay in a telephone call should be less than 100-150 ms. The reasoning behind this is psychological: if the delay is much more than this, the caller will be dissatisfied with the service. Even though a delay of 100-150 ms is acceptable, most QoS requirements for VoIP ask for 50-80 ms of delay or less.
1.3 Solutions
Since delay must be minimized to ensure satisfactory telephone service, some techniques must be employed to reduce it. The first major way to speed up voice communication is to compress the audio signal: if the sheer size of the data being transported is reduced, it will arrive at the destination quicker. Some notable low bit rate compression algorithms are ITU G.723.1 and G.729A.
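The delay budget behind these numbers can be sketched with a few lines of code. The frame and look-ahead durations below are the commonly cited values for these codecs, while the frames-per-packet and network-delay figures are illustrative assumptions, not measurements from this study:

```python
# Sketch of a one-way VoIP delay budget. Frame/look-ahead values are the
# commonly cited codec parameters; frames_per_packet and network_ms are
# assumed, illustrative numbers.
CODECS = {
    # name: (frame_ms, lookahead_ms)
    "G.711":   (10.0, 0.0),   # sample-based; commonly packetized in 10 ms chunks
    "G.723.1": (30.0, 7.5),
    "G.729A":  (10.0, 5.0),
}

def one_way_delay_ms(codec, frames_per_packet=2, network_ms=40.0):
    """Rough mouth-to-ear delay: buffer enough frames to fill one packet,
    add the codec's look-ahead, then add an assumed network delay."""
    frame, lookahead = CODECS[codec]
    packetization = frame * frames_per_packet
    return packetization + lookahead + network_ms

for name in CODECS:
    print(f"{name:8s} ~{one_way_delay_ms(name):6.1f} ms (G.114 target: < 150 ms)")
```

Even with these optimistic network assumptions, G.723.1's large frame plus look-ahead consumes a substantial share of the 150 ms budget, which is why the tighter 50-80 ms VoIP targets are hard to meet.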
Another way to reduce the payload of transmitting voice over IP is to use silence suppression. Because during a normal telephone conversation one person talks while the other listens, roughly half of the time no voice packets need to be sent at all.
2. Video
Video traffic is sent more and more often in today's internet, and this will only increase in the future. Applications such as video conferencing are becoming business standards, and many websites, CNN.com for example, offer video on demand. Today many homes even have digital cable television service, which transmits video information over a network. Since video imaging requires large amounts of data, compression and reservation protocols are going to become increasingly important.
Figure 4: Number of Television Channels for Various Averaged Motions Within the Images [4]

Average Motion            Very Slow   Slow      Normal    Fast
Pixel Change Rate         2 kp/s      4 kp/s    8 kp/s    16 kp/s
Channel Bit Rate          12 kb/s     24 kb/s   48 kb/s   96 kb/s

PC local LAN  30 kb/s     2.5         1         0         0
Modems        56 kb/s     4           2         1         0
ISDN          64-144 kb/s 12          6         3         1
T-1, DSL      1.5 Mb/s    125         62        31        15
Ethernet      10 Mb/s     833         416       208       104
T-3           42 Mb/s     3,500       1,750     875       437
Fiber Optic   200 Mb/s    16,666      8,333     4,166     2,083
The pixel change rate is measured in increments of 1k (1024) pixels per second. In normal television this translates to approximately one square inch of changed image per second. This change does not need to be in one contiguous block; it can be scattered throughout the entire image. Figure 4 illustrates this by showing the number of simultaneous channels various types of links can support for different rates of motion.
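The channel counts in Figure 4 follow directly from dividing the link rate by the per-channel bit rate. A minimal sketch (aside from the fractional 2.5 in the first row, the table floors the result):

```python
# Reproduce Figure 4's channel counts: a link supports
# floor(link_rate / channel_bit_rate) simultaneous channels, where the
# channel bit rate grows with the average motion in the image.
CHANNEL_BIT_RATES = {          # kb/s per channel, from Figure 4
    "very slow": 12, "slow": 24, "normal": 48, "fast": 96,
}

def channels(link_kbps, motion):
    """Number of simultaneous video channels a link can support."""
    return link_kbps // CHANNEL_BIT_RATES[motion]

print(channels(10_000, "very slow"))  # Ethernet, 10 Mb/s -> 833
print(channels(1_500, "normal"))      # T-1/DSL, 1.5 Mb/s -> 31
```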
3.1 Definition
One type of interactive multimedia is interactive gaming. Interactive gaming, in this case, refers to players on their own machines connecting remotely to other machines to compete against each other in the same event. The device used to connect could be a PC, a console game system or a handheld device. Each device already has most of the game data, such as the engine and the graphics, so only certain data needs to be sent to the central server. This data may include character position and orientation, as well as the current action; the central server then sends the pertinent data to the connected computers for processing.
Figure 5: Delay Requirements for Data Types [6]

Application              Delay (ms)
Video                    150
Voice                    150+80
Interactive Multimedia   15
3. Interactive Gaming
Recently, interactive multimedia, such as network gaming, remote visualizations, remote surgery and tele-immersion, has become a very large part of the still-developing internet. These applications often have QoS requirements that are even tougher to satisfy than those of video and voice. This is because they generally cannot afford to lose packets or suffer any noticeable latency; otherwise there is a good chance the experience will be affected, if not ruined.
3.3 Throughput
Throughput is a QoS requirement that varies from game to game. Most games require only a 56K dial-up connection (about 40 kbps) to run smoothly. For example, two of today's most popular online games, Guild Wars and Counter-Strike, can both be played online with a 56K connection. Counter-Strike, for example, needs only around 16 kbps per connected user to avoid slowdown [7]. This number can vary greatly depending on the genre of game. Games where players have to take turns, such as Massive Multiplayer
packet loss. As the paper says, even users who suffered a delay of only 40 ms, but with a jitter of 20 ms, were affected greatly. All of the users reported horrible gameplay during this experiment, even though the delay never exceeded 100 ms, which in the game they played (Unreal Tournament 2003) would not be devastating; still, some of the users could not even continue the game [11]. This is due to the jitter. If packets don't arrive on time, game information can be lost. Even worse, if packets arrive out of order, the packet that arrives late is useless, and possibly important information is lost.
In network gaming, usually only data that is relevant to the game is sent, so nearly every packet carries important information. As a result, almost no packet loss can be tolerated: effectively, packet loss has to be at 0% for a game to run smoothly. Losing a packet is acceptable only if it contains insignificant details, and almost no packets do. Packet loss can therefore be crippling to a game session.
[Table fragment: delays greater than 200 ms yield intolerable gameplay.]
If users must wait for every action to be performed, they will not want to use the system. The QoS requirements that most directly affect remote visualizations are:
1. low delay
2. high throughput/bandwidth
3. low latency
4.2 Delay
Generally, the way remote visualizations work is simple: a computer collects the input, storing it as raw data. That data is then turned into triangles, which are translated into an image, and only the image is sent to the user, who then views the image or executes a command, modifying the data and forcing the system to take a new snapshot. Therefore, most of the data is handled on the side of the computer that is given the raw data [16].
For the calculations, the data-gathering computer first has to receive the raw data. Often this data can approach gigabytes or terabytes in size, but visualization algorithms can often reduce it to around 100 MB. The size of the raw data is denoted here as n. Next, the number of triangles per frame in the image can be written as K, where K is often 500,000 (but can range from 50,000 to 1,000,000). With n and K, the delay to generate the triangles from raw data is on the order of O(log(n) + K), with the actual time depending on the CPU [16].
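The O(log(n) + K) cost can be turned into a rough delay estimate. The per-operation cost below is a hypothetical constant chosen for illustration, not a figure from [16]:

```python
import math

# Rough sketch of the O(log(n) + K) triangle-generation delay.
# C_OP_US is an assumed, hypothetical per-operation cost; real timings
# depend entirely on the CPU.
C_OP_US = 1.0  # assumed microseconds per elementary operation

def triangle_gen_delay_ms(n_bytes, k_triangles):
    """Estimated delay to turn n bytes of raw data into K triangles."""
    ops = math.log2(n_bytes) + k_triangles
    return ops * C_OP_US / 1000.0  # microseconds -> milliseconds

# 100 MB of raw data, 500,000 triangles per frame
print(round(triangle_gen_delay_ms(100e6, 500_000)))  # -> 500
```

Note that with K this large, the log(n) term is negligible: the triangle count dominates the delay.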
4. Remote Visualizations
Remote visualizations are another type of multimedia becoming a reality as the internet continues to grow. Basically, a remote user connects to a data set that is either being generated or has previously been generated. The key is that the data must be interactive: for example, the user can often rotate the data, zoom in, and even add slice planes to see inside the data. Right now, however, too many obstacles stand in the way of consistent remote visualization.
4.3 Throughput/Bandwidth
This is a big factor keeping remote visualization from being practical. The transmission speed is the last key. Obviously, a high speed link is needed at each end, because the transmission both ways must be as quick as possible, so that the only delay is at the computer gathering the data.
When performing remote visualizations that do NOT work like the one above, a link speed of around 700 Mbps is usually necessary. This ensures the data (the 100 MB mentioned earlier) is transmitted in a timely fashion. The more observers of the data, the more time is needed. If compression is applied, then a link of only 100-200 Mbps is necessary. Again, the more observers, the more the overall link will need (so two users will require around 200-400 Mbps from the central server, and so on). In addition, if stereoscopic rendering (that is, creating the illusion of depth in the image) is used, the amount of bandwidth needed is doubled [17].
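The scaling described above can be sketched as a simple calculation; the per-observer rates are the figures quoted in this section, and the linear scaling per observer is the stated assumption:

```python
# Sketch of the bandwidth scaling described above: each observer of the
# visualization needs their own stream, and stereoscopic rendering
# doubles the per-observer requirement [17].
def required_link_mbps(per_observer_mbps, observers, stereoscopic=False):
    per_stream = per_observer_mbps * (2 if stereoscopic else 1)
    return per_stream * observers

print(required_link_mbps(700, 1))                    # uncompressed, 1 user
print(required_link_mbps(200, 2))                    # compressed, 2 users
print(required_link_mbps(200, 2, stereoscopic=True)) # compressed, stereo
```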
4.4 Latency
Although exact QoS requirements for latency are difficult to track down, a simple experiment using a visualization toolkit (VTK, in this case) can demonstrate the effect of latency on remote visualizations [26]. VTK's website offers many sample programs in C++ that can be used to perform visualizations. Although these run solely on the user's computer, the Sleep command from <windows.h> can be used to simulate remote visualization delay. The command Sleep(50) tells the processor to sleep for 50 ms before continuing, so inserting a Sleep(X) call wherever a command appears in the VTK program causes the program to pause for X ms before it continues. Since every time a command is issued the computer waits a certain amount of time to begin it, this is a very effective latency simulation.
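The same idea can be sketched portably. The snippet below stands in for the <windows.h> Sleep approach using Python's time.sleep; the commands are placeholder callables, not actual VTK calls:

```python
import time

# Portable sketch of the latency-simulation experiment: pause before
# each visualization command to mimic a remote round trip. The commands
# here are placeholders, not real VTK calls.
def run_with_latency(commands, latency_ms):
    """Execute each command after a simulated network delay."""
    for cmd in commands:
        time.sleep(latency_ms / 1000.0)  # simulated remote round trip
        cmd()

start = time.perf_counter()
run_with_latency([lambda: None] * 5, latency_ms=10)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"5 commands at 10 ms simulated latency took ~{elapsed_ms:.0f} ms")
```

Scaling the injected delay up to the tens or hundreds of milliseconds typical of wide-area links makes the interactivity loss immediately visible to the user.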
5. Telesurgery
5.3 Time-Delay
On September 7th, 2001, the world's first long-distance operation on a human was performed between New York, USA and Strasbourg, France (14,000 km distance), demonstrating the feasibility and safety of performing a complete surgical operation from a remote location. The two sites were connected through a high-speed terrestrial optical-fiber network that transports data through dedicated connections using Asynchronous Transfer Mode (ATM) technology. A bandwidth of 10 Mbps was reserved through a network that interconnects applications at both sites using a network termination unit (NTU), which provides a multiservice path to different applications [19].
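A quick sanity check on the time-delay figures: over a 14,000 km fiber path, propagation alone accounts for roughly 70 ms each way, before any switching or application delay. A back-of-the-envelope sketch, using the usual approximation that signals travel through fiber at about 2/3 the speed of light:

```python
# Back-of-the-envelope propagation delay over the fiber path described
# above. 200,000 km/s approximates 2/3 of the speed of light in fiber;
# switching and queuing delays are ignored.
FIBER_KM_PER_S = 200_000

def propagation_delay_ms(distance_km):
    return distance_km / FIBER_KM_PER_S * 1000

print(propagation_delay_ms(14_000))  # -> 70.0 ms, one way
```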
6. Tele-Immersion
6.3 Lag
Lag is the term used to describe the perceived sum of all the sources of latency in a system. Typically, it is thought of as the delay between an action in the real world and the perceived response of the system to that action. Lag is the critical issue for usability; reducing lag is a major technical challenge.
6.4 Jitter
Jitter in the network impacts collaborative coordination more than latency does [23]. Higher latencies with low jitter still allow collaborators to make reasonable predictions of how an environment will behave (albeit with declining overall task performance). High jitter, however, reduces predictability, so collaborators are forced to employ a purely sequential interaction strategy.
Type        Latency    Streaming   DynQoS
Control     < 30 ms    No          Low
Text        < 100 ms   No          Low
Audio       < 30 ms    Yes         Medium
Video       < 100 ms   Yes         Medium
Tracking    < 10 ms    Yes         Medium
Database    < 100 ms   No          High
Simulation  < 30 ms    Maybe       High
Haptic      < 10 ms    Maybe       High
Rendering   < 30 ms    Maybe       Medium
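A small sketch of how the latency requirements above might be used in practice: given measured per-component latencies, flag the components that miss their requirement. The measured values below are made-up examples:

```python
# Check measured component latencies against the tele-immersion
# requirements listed above. Sample measurements are invented for
# illustration.
REQUIREMENTS_MS = {
    "Control": 30, "Text": 100, "Audio": 30, "Video": 100,
    "Tracking": 10, "Database": 100, "Simulation": 30,
    "Haptic": 10, "Rendering": 30,
}

def violations(measured_ms):
    """Return the components whose latency meets or exceeds its limit."""
    return [t for t, ms in measured_ms.items()
            if ms >= REQUIREMENTS_MS[t]]

sample = {"Audio": 25, "Haptic": 12, "Tracking": 8}
print(violations(sample))  # -> ['Haptic']
```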
Conclusion
Astheinternet hasevolvedithascertainlyimprovedthe
quality of service. What was once only able to support
simpletexttransferisnowcapableofallowingsurgeryto
beperformedfrom3,000milesaway. Simplemultimedia
applicationssuchasvoicetrafficandstreamingvideoare
veryeasytodowithtodaystechnologiesandapplications
that require precise data transfers with large payloads,
which were once inconceivable, are starting to become
commonplace. Technology will spawn more and more
complex applications, each requiring finer and more
guaranteed quality of service. In order to support these
future technologies more effective means of ensuring
qualityofservicearegoingtoneedtobedevelopedaswell.
REFERENCES
[2] Voiceage: The World's Premier Supplier of Speech and Audio Codecs; http://www.voiceage.com/g729.php
[3] Birin, Gil, Voice over Frame Relay, IP and ATM; http://www.protocols.com/papers/voe.htm