
Communication Networks

for Multimedia

The Evolution of
Communication Networks
For over 100 years, the POTS (Plain Old Telephone Service) has been the primary vehicle for conventional voice-band communications
The POTS network is well designed and well engineered for the transmission and switching of 3-kHz voice calls:
Real-time
Low-latency
High-reliability
Moderate-fidelity

Packet Networks
The POTS network is not designed for other forms of communication, such as wide-band speech, audio, images, video, facsimile and data.
About 30 years ago, a second communications
network was created with the goal of providing a
better transport mechanism for data networking.
The resulting network is called a packet network because data is transmitted and routed through the network in the form of units of information called packets

Packet Networks (contd)


Packet networks evolved independently of telephone networks for the purpose of moving bursty, non-real-time data among computers.
Packets consist of a header (information about the
source and destination addresses) and a payload
(actual data being transmitted).
Packet networks are especially well-suited for sending
data of various types, including messages, facsimile,
and still images. Packet networks are not well suited
for sending real-time communication signals such as
speech, audio and video.

The Open Systems Interconnection (OSI) Architecture
Physical layer is concerned with the transmission and
reception of unstructured bit stream over any physical
medium. It deals with the mechanical aspects and signal
voltage levels. Examples are RS-232-C and X.21
Datalink layer ensures reliable transfer of data across
the physical medium. It also provides access control to
the media in the case of local area networks. Examples
are High-level Data Link Control (HDLC), LLC and SDLC
Network layer provides the upper layers with
independence from the switching technology. It is
responsible for establishing, maintaining and terminating
connections. It is also responsible for routing. Examples
are X.25 and IP

The OSI Architecture (contd)


Transport layer is responsible for reliable and transparent
transfer of data between end points, takes care of end-to-end
flow control and end-to-end error recovery. An example is
TCP
Session layer provides a means for establishing, managing
and terminating connections between processes. It may
also provide checkpoints, synchronization and restart of
service
Presentation layer performs a transformation on the data to
provide a standardized interface to applications. It helps to
resolve the syntactic differences when the internal
representation of the data differs from machine to machine
Application layer provides services that can be used by user
applications.

Media Access Control


Media Access Control (MAC)* systems may be
divided into three categories:
Round robin: each station on the network, in turn,
is given an opportunity to transmit. When it is
finished it must relinquish its turn and the right to
transmit passes to the next station in logical
sequence. Control of turns may be centralized or
distributed. Token ring is an example of such a scheme
* MAC: The lower sublayer of the OSI data link layer. The interface
between a node's Logical Link Control and the network's physical layer.

Media Access Control (contd)


Reservation. Typically, the time on the medium is
divided into slots (time-division multiplexing). To
transmit, a station reserves future slots for an
extended or indefinite period. A shared satellite channel
is an example of this scheme
Contention. No control is exercised to determine
whose turn it is to transmit. These methods are likely
to lead to collisions and may require retransmission.
These techniques perform well under light network load, degrade progressively at moderate loads, and perform poorly at high loads. CSMA/CD is an
example

Network and Transport Layers


Together, the network and transport layers
establish a data pipe between the source
computer and a process on the destination
computer
the network layer is responsible for setting up
routes from a source node to a destination node
the transport layer handles end-to-end issues
between processes running on these nodes, such as error control, sequencing and flow control

Unicast and Multicast


Multimedia communications involve two basic modes:
unicast and multicast
In unicast mode there are two communication
partners, or peers, and the resulting mode is called
peer-to-peer communications
Example: individual client-server applications (home
shopping, video on demand)
Multicast mode involves 1-to-N communication (peer-to-multipeer) as well as 1-to-all communication (broadcast mode)
Examples: distance learning, multipeer
teleconferences

Routing
Network: graph of nodes (subnetworks) and edges
(links between subnetworks)
Problem: find an optimal path from a given source to
a given destination node
Routing is the main task of the network layer and involves two major subproblems:
Find an optimal path in the routing graph, under
changing network loads and perhaps even a
changing network topology
Get all incoming packets through a router at runtime
in an optimal way

Approaches to Routing
Connectionless: the pathfinding algorithm is
executed every time a packet is injected into the
network (e.g., IP). Each packet carries a destination address and finds its way independently of other packets.
Efficient for short connections (no connect and
disconnect phases in the protocol)
Robust in the case of a node failure (no state
information stays in the nodes)
Easy internetworking

Approaches to Routing (contd)


Connection-oriented (aka virtual circuit): a path
from source to destination is computed only once
for the duration of a connection. All packets of a
connection follow the same path through the graph
(e.g., X.25, Frame Relay, ATM)
The connection set-up packet leaves a trace with
routing information in each node on the path (a
connection identifier plus an output port), and all
subsequent packets follow the same path.
Efficient routing at runtime (no pathfinding
algorithms to be executed during a connection)
The ability to use access control to avoid network
congestion (a new call is rejected when the network
is overloaded)
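
As a sketch of the per-node state described above (a connection identifier plus an output port), the forwarding table left behind by a connection set-up packet might be modeled as follows; all names are illustrative and not tied to any specific protocol:

```python
# Minimal sketch of virtual-circuit forwarding state in one node.
# Real protocols (X.25, Frame Relay, ATM) define their own identifier
# formats and signalling procedures; this only illustrates the idea.

class VCNode:
    def __init__(self, name):
        self.name = name
        # (incoming port, connection id) -> (outgoing port, connection id)
        self.vc_table = {}

    def setup(self, in_port, in_vc, out_port, out_vc):
        """Record the trace left by a connection set-up packet."""
        self.vc_table[(in_port, in_vc)] = (out_port, out_vc)

    def forward(self, in_port, in_vc, payload):
        """Data packets carry only the VC id; forwarding is a table look-up."""
        out_port, out_vc = self.vc_table[(in_port, in_vc)]
        return out_port, out_vc, payload

node = VCNode("A")
node.setup(in_port=1, in_vc=7, out_port=3, out_vc=12)
print(node.forward(1, 7, b"hello"))   # -> (3, 12, b'hello')
```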

Routing Algorithms
Static routing: all routes are pre-computed for a
given topology and are independent of the current
network load
Each node has a table with entries in the form
[source; destination; outgoing link]
An incoming packet contains the destination
address (or, in the case of virtual circuits, the
connection identifier)
Routing decision is reduced to a quick table look-up
When the network topology changes, a network
control center re-computes the global routing table,
and the new table is downloaded into all nodes
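
A minimal illustration of the table look-up described above; the addresses and link names are invented for the example:

```python
# Illustrative static routing table for one node; entries follow the
# [source; destination; outgoing link] form given above.

routing_table = {
    # (source, destination) -> outgoing link
    ("A", "D"): "link-2",
    ("A", "E"): "link-3",
    ("B", "D"): "link-2",
}

def route(packet):
    """Routing decision reduced to a quick table look-up."""
    return routing_table[(packet["src"], packet["dst"])]

print(route({"src": "A", "dst": "D", "payload": b"..."}))  # link-2
```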

Routing Algorithms (contd)


Adaptive routing: the path-finding algorithm
automatically takes into account new or obsolete
nodes and links as well as the current load of
nodes and links. Each node gets some limited information from neighboring nodes and/or extracts information from packets in transit.

Broadcast Routing
Send a distinct packet to each destination
Bandwidth wasteful
Requires the source to have a complete list of all
destinations
Flooding
Every incoming packet in a node of the subnet is
sent out to every outgoing line except the one it
arrived on
Must have a way to limit the number of duplicate packets
E.g., each router can keep track of which
packets in a sequence have already been sent
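
A small sketch of flooding with duplicate suppression based on (source, sequence number), as suggested above; the router and packet structures are invented for illustration:

```python
# Flooding sketch: forward each packet on every line except the one it
# arrived on, and use (source, sequence number) to drop duplicates.

class FloodingRouter:
    def __init__(self, lines):
        self.lines = lines                 # e.g. ["l1", "l2", "l3"]
        self.seen = set()                  # (source, seq) already forwarded

    def receive(self, packet, arrival_line):
        key = (packet["src"], packet["seq"])
        if key in self.seen:
            return []                      # duplicate: do not forward again
        self.seen.add(key)
        return [line for line in self.lines if line != arrival_line]

r = FloodingRouter(["l1", "l2", "l3"])
print(r.receive({"src": "A", "seq": 1}, arrival_line="l1"))  # ['l2', 'l3']
print(r.receive({"src": "A", "seq": 1}, arrival_line="l2"))  # [] (duplicate)
```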

Broadcast Routing (contd)


Multi-destination Routing
Each packet contains a list of destinations
New copies of the packets are generated at each router
for the output lines that are needed
After a sufficient number of hops, each packet will carry
only one destination and can be treated as a normal
packet
Spanning Tree Broadcasting
Spanning Tree (ST) = subset of the subnet's lines that includes all the routers but contains no loops
If each router knows which of its lines belong to the
spanning tree, it can copy an incoming broadcast packet
onto the ST lines except the one it arrived from
Most efficient but all routers must know the ST
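
A minimal sketch of the forwarding rule for spanning-tree broadcasting; the line names are illustrative:

```python
# Copy an incoming broadcast packet onto the spanning-tree lines,
# excluding the line it arrived from.

def st_broadcast(spanning_tree_lines, arrival_line):
    """Return the ST lines on which the packet should be copied."""
    return [line for line in spanning_tree_lines if line != arrival_line]

st_lines = ["l1", "l3", "l4"]   # lines this router knows belong to the ST
print(st_broadcast(st_lines, arrival_line="l3"))  # ['l1', 'l4']
```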

CBR and VBR Traffic


Multimedia traffic can be characterized as
constant bit rate (CBR) or variable bit rate
(VBR)
For CBR applications, it is important that the
network that transports the data streams has a
constant throughput (otherwise, extensive buffering
would be required at each end of the system)
VBR traffic often occurs in bursts or spurts (typical
case: video compression)
A good measure of burstiness is given by the ratio of peak traffic rate to mean traffic rate over a given period of time
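
The burstiness measure can be computed directly from a rate trace; the per-interval rates below are invented sample data:

```python
# Burstiness = peak rate / mean rate over a given observation period.
rates = [0.5, 0.4, 6.0, 0.3, 5.5, 0.4]    # rate (Mb/s) measured per interval

peak_rate = max(rates)
mean_rate = sum(rates) / len(rates)
burstiness = peak_rate / mean_rate

print(f"peak={peak_rate} Mb/s, mean={mean_rate:.2f} Mb/s, "
      f"burstiness={burstiness:.1f}")
```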

CBR and VBR Traffic (contd)


Even with CBR networks, the throughput may vary with time for the following reasons:
Node or link failure
Network congestion (when the demand for network
capacity exceeds the availability)
Throughput decreases with increasing load,
especially when bottlenecks are present in the
network
Flow control: an end-to-end protocol that places limits on the rate of data transmission between two end-systems connected through a network, in order to prevent loss of data at the receiving end-system due to buffer overflow

Congestion and Flow Control


Congestion happens when too many packets are
present in (a part of) a subnet (performance degrades)
Congestion control: makes sure that the subnet
can carry the offered traffic.
Flow control: makes sure that a fast sender
cannot continuously send data faster than the
receiver can absorb it (involves feedback from the
receiver)

Congestion and Flow Control (contd)
Examples:
A fiber-optic network at 1000 Gb/s on which a fast computer is trying to transfer a file to a PC that can handle only 1 Gb/s
There is no congestion but flow control is required
Network with 1 Mb/s lines and 1000 computers, half
of which are trying to transfer files at 100 Kb/s to
the other half
There are no fast senders overpowering slow receivers, but the total offered traffic exceeds what
the network can handle (congestion)

Congestion Control: Traffic Shaping
Sender and network agree on average rate and
burstiness of data transmission
It is not so important for file transfer but very important for real-time data (audio/video), which does not tolerate congestion well
Needs traffic policing to monitor the traffic flow
and make sure that the customer is following the
agreement

Example: Leaky Bucket


The Leaky Bucket algorithm consists of a token counter
and a timer. The counter is incremented by one every T units of time and can reach a maximum value C. A packet
is admitted into the network if and only if the counter is
positive. Each time a packet is admitted, the counter is
decremented by one.
The traffic generated by a Leaky Bucket regulator consists
of a burst of up to C packets followed by a steady stream
of packets with minimum inter-packet time of T
Parameters:
Capacity C (packets or bytes)
Flow rate ρ (packets or bytes per second)
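
A minimal sketch of the token-counter regulator described above (this counter-based formulation is often described as a token bucket); class and method names and the time-based refill are illustrative:

```python
# Token-counter regulator: the counter is incremented once every T time
# units up to a maximum C; a packet is admitted only if the counter is
# positive, and each admission decrements the counter by one.

class LeakyBucket:
    def __init__(self, capacity_c, period_t):
        self.capacity = capacity_c        # maximum counter value C
        self.period = period_t            # counter increment period T
        self.counter = capacity_c         # start full (allows an initial burst)
        self.last_refill = 0.0

    def refill(self, now):
        """Add one token for every T units of elapsed time, capped at C."""
        elapsed = now - self.last_refill
        tokens = int(elapsed / self.period)
        if tokens:
            self.counter = min(self.capacity, self.counter + tokens)
            self.last_refill += tokens * self.period

    def admit(self, now):
        """Return True if a packet may enter the network at time 'now'."""
        self.refill(now)
        if self.counter > 0:
            self.counter -= 1
            return True
        return False

lb = LeakyBucket(capacity_c=3, period_t=1.0)
print([lb.admit(now=0.0) for _ in range(4)])   # [True, True, True, False]
```

With these rules, up to C back-to-back packets are admitted, after which packets are spaced at least T apart, matching the traffic pattern described above.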

Leaky Bucket (contd)


Example:
A computer can produce data at 25 MB/s.
The routers can handle this data rate only for short intervals. For longer intervals, they work best at less than 2 MB/s.
Data comes in 1 MB bursts (one 40 ms burst every second)
To reduce the average rate to 2 MB/s, we could use a leaky bucket with ρ = 2 MB/s and capacity C = 1 MB
This means that bursts up to 1MB can be handled
without data loss, and that such bursts are spread
out over 500 ms, no matter how fast they come in.
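
Assuming the bucket is counted in bytes rather than packets, the numbers above can be checked directly:

```python
# Worked numbers from the example above (byte-based bucket assumed).
rho = 2e6          # sustained drain rate: 2 MB/s
burst = 1e6        # burst size: 1 MB (one 40 ms burst per second)
capacity = 1e6     # bucket capacity C = 1 MB, so the burst fits without loss

smoothing_time = burst / rho          # time to spread the burst out
print(smoothing_time)                 # 0.5 s = 500 ms, as stated above
```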

Flow Control
Flow control is typically performed using the
Sliding Window mechanism
The sliding-window algorithm allows the sender to
transmit packets at its own speed until a window of
size W is used up. It then has to stop and wait until
acknowledgments from the receiver open the
window again.
In the TCP protocol, W is not counted in terms of packets but in terms of bytes in transit

Sliding Window
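
A simplified sender-side sketch of the sliding-window mechanism described above (counting the window in packets for readability; TCP counts bytes in flight). All class and method names are illustrative:

```python
# Sliding-window sender sketch: transmit until W unacknowledged packets
# are outstanding, then wait for acknowledgments to reopen the window.

class SlidingWindowSender:
    def __init__(self, window_size_w):
        self.window = window_size_w
        self.next_seq = 0                 # next sequence number to send
        self.base = 0                     # oldest unacknowledged packet

    def can_send(self):
        return self.next_seq - self.base < self.window

    def send(self):
        if not self.can_send():
            raise RuntimeError("window full: sender must wait for ACKs")
        seq = self.next_seq
        self.next_seq += 1
        return seq                        # hand packet 'seq' to the network

    def ack(self, ack_seq):
        """Cumulative ACK: everything up to ack_seq has been received."""
        self.base = max(self.base, ack_seq + 1)

s = SlidingWindowSender(window_size_w=3)
print([s.send() for _ in range(3)])       # [0, 1, 2] -> window now full
s.ack(1)                                  # ACK for 0 and 1 reopens the window
print(s.can_send())                       # True
```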

Requirements and Performance


The three most important parameters of a
communication network for multimedia
communications are:
Throughput
Error rate
Delay
They form the basic parameters of the Quality of
Service (QoS)

Throughput
The throughput of a network corresponds to its
effective bandwidth or bit rate, i.e., the physical link
bit rate minus the various overheads
Example: ATM technology over a SONET
(Synchronous Optical NETwork) fiber optics
transmission system. The network carrier's provisioned bit rate is 155.52 Mb/s. Principal
overheads are approximately 3% for SONET and
9.5% for ATM. Thus, the maximum throughput of
this network is actually 136 Mb/s
Other factors that affect throughput are network
congestion, bottlenecks, node or line faults
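
The overhead arithmetic in the ATM-over-SONET example can be reproduced directly:

```python
# Effective throughput of ATM over SONET, using the overhead figures above.
line_rate = 155.52e6       # provisioned SONET bit rate, in b/s
sonet_overhead = 0.03      # ~3 % SONET framing overhead
atm_overhead = 0.095       # ~9.5 % ATM cell-header overhead

throughput = line_rate * (1 - sonet_overhead) * (1 - atm_overhead)
print(f"{throughput / 1e6:.1f} Mb/s")   # ~136.5 Mb/s, i.e. roughly 136 Mb/s
```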

Error Rate
Defined in terms of the bit (packet) error rate,
i.e., the ratio of the average number of corrupted
bits (packets) to the total number of bits (packets)
transmitted
Examples:
In fiber-optic transmission, the bit error rate ranges from 10^-8 to 10^-12
In satellite transmission systems, the bit error rate is on the order of 10^-7
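
Assuming independent bit errors, the packet error rate can be related to the bit error rate; a small worked example using the satellite figure above (the packet size is an assumption):

```python
# Packet error rate from bit error rate, assuming independent bit errors:
# PER = 1 - (1 - BER)^n for a packet of n bits.
ber = 1e-7                 # satellite-link bit error rate from the example
n_bits = 12000             # an assumed 1500-byte packet

per = 1 - (1 - ber) ** n_bits
print(f"{per:.2e}")        # ~1.2e-3: about one packet in a thousand corrupted
```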

Causes of Errors in
Packet-Switching Systems
Individual bits in packets are inverted or lost
Error-correction codes are able to correct the error or
detect it and request retransmission
Bit error recovery is based on error detection and
retransmission
The sender learns about a bit error in one of two ways:
The receiver sends a negative acknowledgment
(NACK)
The sender signals a time-out unless a positive
acknowledgment is received within a predefined
interval

Causes of Errors in
Packet-Switching Systems (contd)
Packets are lost in transit (inadvertent error),
dropped by an intermediate node (deliberate error) or
delayed
In a connection-oriented network, when packets are lost
or dropped, the receiving end-system is usually able to
detect such a situation and inform the sending side
Packet loss recovery is based on sequence numbers
In the case of connectionless networks, lost or dropped packets are difficult, if not impossible, to detect
The primary reason for packets being dropped or lost
in high-speed networks is insufficient buffer space at
the receiving end-system due to congestion in the
network

Causes of Errors in
Packet-Switching Systems (contd)
Packets arrive out-of-order
It is the job of the receiving end-system to
rearrange the received packets in the numerical
sequence in which they were originally sent
IMPORTANT: packet retransmission (especially if
it has to be carried out on an end-to-end basis)
significantly increases latency
For real-time video or audio transmission, delay is a
more important performance issue than error rate,
so in many cases it is preferable to ignore the error and simply work with the received data stream as is

Delay (Latency)
End-to-end delay is formed by:
Network delay, composed of
transit delay which depends on the physical
distance between the two ends
transmission delay which is the time required to
transmit a block of data and depends on the bit rate
and on processing delays in the intermediate nodes,
including routing and buffering
Interface delay, which is the delay incurred
between the time a sender is ready to begin sending
a block of data and the time the network is ready to
transmit the data
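
A rough worked example of the two network-delay components; the distance, propagation speed, block size and bit rate below are assumed purely for illustration:

```python
# Illustrative delay components; all input figures are assumed values.
distance_m = 3000e3            # 3000 km between the two ends
propagation_speed = 2e8        # roughly 2/3 of c in fiber, in m/s
block_bits = 8 * 10_000        # a 10 kB block of data
bit_rate = 2e6                 # a 2 Mb/s link

transit_delay = distance_m / propagation_speed       # ~15 ms
transmission_delay = block_bits / bit_rate           # ~40 ms
print(transit_delay * 1e3, transmission_delay * 1e3) # milliseconds
```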

Delay (Latency) (contd)


For connection-oriented networks, when end-to-end acknowledgments are required, the round-trip delay is a useful measure
Round-trip delay is defined as the total time
required for a sender to send a block of data
through a network and receive an
acknowledgment that the block was received
correctly

Delay Variation (Jitter)


Extremely important for synchronous multimedia
streams (e.g., audio and video)
Network traffic can be:
Asynchronous (no upper bound to the latency)
Synchronous (an upper bound to the latency
exists)
Isochronous (the transmission delay is constant for each message, so a data stream traverses the network at essentially the rate at which it was sent and arrives at the destination with its timing preserved)
Isochrony may be recovered by an appropriate
playout buffer at the destination
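
A sketch of how a playout buffer recovers isochrony: each packet is held until a fixed playout deadline, so jitter smaller than the playout offset is absorbed. All timing values below are illustrative:

```python
# Playout-buffer sketch: each packet is played at send_time + playout_delay,
# so delay variation (jitter) below playout_delay is absorbed.

playout_delay = 0.100          # fixed 100 ms playout offset (assumed)

# (send_time, arrival_time) pairs in seconds; arrival jitter is invented.
packets = [(0.00, 0.030), (0.02, 0.065), (0.04, 0.055), (0.06, 0.175)]

for send_t, arrive_t in packets:
    play_t = send_t + playout_delay
    if arrive_t <= play_t:
        print(f"packet sent at {send_t:.2f}s played on time at {play_t:.2f}s")
    else:
        print(f"packet sent at {send_t:.2f}s arrived too late for playout")
```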

Quality of Service (QoS)


QoS indicates how well a network performs in
dealing with a multimedia application
Individual applications have different expectations
of how well the network carries out its tasks
Real-time conferencing may impose QoS
requirements on latency and throughput
Downloading a video might require small error rates
but not have tight restrictions on latency or
throughput

QoS (contd)
Resource Reservation and Scheduling
If an application knows in advance that it requires certain
QoS resources it can make a reservation with the network for
those resources for the period in question. The network can
either deny the request or schedule the application for that
period and reserve the resources requested

Resource Negotiations
If the network finds that the requested resources might overtax its capabilities, it can negotiate with the requester and offer lower QoS parameters. A mutually acceptable set of QoS parameters can then be agreed upon.

QoS (contd)
Admission Control
If the QoS demands of the particular application are
so high that the network cannot meet them, the
network has the choice of not letting the application
on to the network.
Guaranteed QoS
The user may expect a guaranteed level of service
from the network. Whether these guarantees are
statistical or absolute depends upon the
negotiations between the user and the network.

Example of QoS Requirements for Audio

Example of QoS Requirements for Video

Media Filtering
In a multicast scenario, not all receivers have the
same QoS requirements
E.g., a PC connected via a telephone line will not
be able to receive video at the same rate as a
high-end UNIX workstation connected via ATM
A solution: media filtering
The internal network nodes implement media filters,
so that the sender needs to create only one flow
satisfying the maximum QoS, saving considerable
bandwidth

Media Filtering - Example

Media Scaling
A problem with a static QoS contract between
the sender and receivers of a multicast stream is
the variance of many parameters throughout the
duration of the transmission, both at the end nodes
and within the network.
It would be desirable to adjust the QoS
parameters during a multimedia connection. When
applied to a multimedia data stream, this is called
media scaling.

Media Scaling (contd)


Media scaling allows control of parameters other
than just the data rate (which was already
supported by traditional connection-oriented
protocols via flow control mechanisms).
Example: image quality in video stream
To implement media scaling, the interface
between the application and the network must be
extended to pass control information.
Example: if the network signals increasing
congestion, an MPEG video encoder can reduce its output data rate via any of its scalability techniques.
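
A sketch of the extended application/network interface described above, with a hypothetical congestion signal driving the encoder's target rate; none of these names correspond to a real API:

```python
# Media-scaling sketch: the network passes congestion signals up to the
# application, which scales the encoder's target bit rate. The names and
# the scaling policy are hypothetical.

class ScalableEncoder:
    def __init__(self, target_rate_bps):
        self.target_rate = target_rate_bps

    def on_network_feedback(self, congestion_level):
        """congestion_level in [0, 1]; scale the output rate accordingly."""
        if congestion_level > 0.5:
            self.target_rate *= 0.75      # e.g. drop an enhancement layer
        elif congestion_level < 0.1:
            self.target_rate *= 1.10      # cautiously scale back up
        return self.target_rate

enc = ScalableEncoder(target_rate_bps=4_000_000)
print(enc.on_network_feedback(0.8))       # rate reduced under congestion
print(enc.on_network_feedback(0.05))      # rate increased when the path clears
```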
