
Resource Allocation:
Process by which network elements try to meet the competing demands that applications have for network resources, primarily link bandwidth and buffer space in routers

Congestion control:
Efforts made by network nodes to prevent or respond to overload conditions

Involves both hosts and network elements such as routers

In network elements: queuing disciplines

At the end host, the congestion-control mechanism paces how fast sources are allowed to send packets (flow control)

A. Network Model
Packet-switched network





[Figure: Congestion in a packet-switched network. Source 1 (10-Mbps Ethernet) and Source 2 (100-Mbps FDDI) feed a router that forwards over a 1.5-Mbps T1 link to the destination]

Contrast with a circuit-switched network, where links are reserved for a particular transmission
Connectionless Flow
All datagrams are switched independently, but it is usually the case that a stream of datagrams between a particular pair of hosts flows through a particular set of routers

Soft state:
state information maintained for each flow, which can be used to make resource allocation decisions about the packets that belong to the flow

A flow can be defined explicitly or implicitly
Service Model
Best-effort service
no guarantees for packet delivery, delivery order, or data integrity
an unreliable service

Quantitative guarantees of QoS
example: the bandwidth needed for video streaming

We will use the best-effort service model for the rest of this discussion

1. Router-centric versus Host-centric
Router-centric:
each router takes responsibility for deciding when packets are forwarded or dropped, as well as for informing the end hosts how many packets they are allowed to send

Host-centric:
end hosts observe the network conditions and adjust their behavior accordingly
2. Reservation-based versus Feedback-based
Reservation-based system
the end host asks the network for a certain amount of capacity at the time a flow is established

Feedback-based system
end hosts begin sending data without first reserving any capacity, then adjust their sending rate according to the feedback they receive
Explicit, e.g., a congested router sends a "please slow down" message to the host
Implicit, e.g., the end host adjusts its sending rate according to externally observable behavior of the network, such as packet losses
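A minimal sketch of the implicit, feedback-based idea, assuming a hypothetical sender that probes for bandwidth by raising its rate while no losses are observed and cuts it multiplicatively when a loss appears; the function name and the increase/decrease constants are illustrative, not from the slides:

    # Implicit feedback loop: the sender never asks the network for capacity;
    # it only reacts to externally observable signals such as packet losses.

    def adjust_rate(current_rate_bps, loss_observed,
                    increase_bps=10_000, decrease_factor=0.5):
        """Return the next sending rate given the most recent feedback."""
        if loss_observed:
            # Implicit sign of congestion: back off multiplicatively.
            return current_rate_bps * decrease_factor
        # No sign of congestion: probe for more bandwidth additively.
        return current_rate_bps + increase_bps

    rate = 100_000  # start at 100 kbps (arbitrary)
    for loss in [False, False, True, False]:  # pretend loss reports
        rate = adjust_rate(rate, loss)
        print(f"loss={loss!s:5}  new rate = {rate:,.0f} bps")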
3. Window-based versus Rate-based
Window-based system
the receiver advertises a window to the sender (a window advertisement) that limits how much data the sender may have outstanding; see the sketch below

Rate-based system
the sender is told how many bits per second the receiver or the network is able to absorb
Example: multimedia streaming applications
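A minimal sketch of the window-based idea, assuming a hypothetical sender that tracks bytes in flight and only transmits while it stays within the receiver's advertised window; the window and segment sizes are illustrative:

    # The sender may not have more unacknowledged data in flight than the
    # window the receiver has advertised.

    def can_send(bytes_in_flight, segment_size, advertised_window):
        """True if one more segment still fits inside the advertised window."""
        return bytes_in_flight + segment_size <= advertised_window

    in_flight = 0
    ADVERTISED_WINDOW = 8_192  # bytes, announced by the receiver
    SEGMENT = 1_460            # bytes per segment (typical TCP payload size)

    while can_send(in_flight, SEGMENT, ADVERTISED_WINDOW):
        in_flight += SEGMENT   # "transmit" one more segment
    print(f"{in_flight} bytes in flight, window = {ADVERTISED_WINDOW} bytes")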
Effective Resource Allocation
Two principal metrics of network effectiveness: throughput and delay
We want as much throughput and as little delay as possible
Ratio (network power):

    Power = Throughput / Delay

The objective is to maximize this ratio, which is a function of how much load is placed on the network
[Figure: Ratio of throughput to delay (power) as a function of load, peaking at the optimal load]
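As a toy illustration of why power peaks at an intermediate load, the sketch below assumes a simple M/M/1-style link with service rate MU, where throughput grows with offered load while delay grows like 1/(MU - load); the model and numbers are illustrative assumptions, not from the slides:

    # Toy model: throughput ~ load, delay ~ 1 / (MU - load) for load < MU,
    # so power = throughput / delay = load * (MU - load), peaking at MU / 2.

    MU = 1_000.0  # service rate in packets per second (arbitrary)

    def power(load):
        """Network power of the toy link at the given offered load."""
        throughput = load              # every offered packet gets through
        delay = 1.0 / (MU - load)      # queueing delay grows near saturation
        return throughput / delay

    for load in (100, 300, 500, 700, 900):
        print(f"load={load:4}  power={power(load):9.0f}")
    # The printed power rises up to load = MU / 2 = 500 and falls afterwards.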
Does fair mean equal?
Raj Jain's fairness index:









Given flow throughputs (x1, x2, ..., xn) in bps, the fairness index is

    f(x1, x2, ..., xn) = (x1 + x2 + ... + xn)^2 / (n * (x1^2 + x2^2 + ... + xn^2))

The index always lies between 0 and 1, and equals 1 when all flows receive the same throughput
Suppose a congestion-control scheme results in a collection of competing flows that achieve the following throughput rates: 120 KBps, 480000 bps, 130 KBps, 90000 Bps, and 150000 Bps. Calculate the fairness index for this scheme!
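A worked sketch of this exercise, assuming the mixed units are first normalized to bits per second with 1 KBps taken as 1000 bytes per second and 1 byte = 8 bits; the conversion convention is an assumption, not stated on the slide:

    # Jain's fairness index: f = (sum x_i)^2 / (n * sum x_i^2)

    def fairness_index(rates):
        """Compute the fairness index for throughputs given in one common unit."""
        n = len(rates)
        total = sum(rates)
        return total * total / (n * sum(r * r for r in rates))

    # The five flows, normalized to bits per second:
    rates_bps = [
        120 * 1000 * 8,  # 120 KBps   ->   960,000 bps
        480_000,         # 480000 bps ->   480,000 bps
        130 * 1000 * 8,  # 130 KBps   -> 1,040,000 bps
        90_000 * 8,      # 90000 Bps  ->   720,000 bps
        150_000 * 8,     # 150000 Bps -> 1,200,000 bps
    ]
    print(round(fairness_index(rates_bps), 3))  # ~0.924 under these assumptions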

1. FIFO or FCFS
The first packet that arrives at a router is the first packet to be transmitted
Combined with a tail drop policy: when the queue is full, arriving packets are dropped

[Figure: FIFO queuing. An arriving packet is placed in the next free buffer at the tail of the queue; the packet at the head of the queue is the next to transmit]
[Figure: Tail drop at a FIFO queue. When no free buffers remain, the arriving packet is dropped]
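A minimal sketch of FIFO with tail drop, assuming a fixed queue capacity measured in packets; the class name and capacity are illustrative:

    from collections import deque

    class FifoTailDropQueue:
        """FIFO queue that drops arriving packets when the buffer is full."""

        def __init__(self, capacity=8):
            self.capacity = capacity
            self.buffer = deque()

        def enqueue(self, packet):
            if len(self.buffer) >= self.capacity:
                return False             # tail drop: no free buffer, discard
            self.buffer.append(packet)   # place packet in the next free buffer
            return True

        def dequeue(self):
            # The packet at the head of the queue is the next to transmit.
            return self.buffer.popleft() if self.buffer else None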
Priority queuing:
a variation of basic FIFO queuing

Idea:
mark each packet with a priority, usually carried in the IP ToS (Type of Service) field

Routers implement multiple FIFO queues, one for each priority class, and always serve the highest-priority queue that has packets waiting
The network can charge more to deliver high-priority packets than low-priority packets (an economic incentive)
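A minimal sketch of strict priority queuing as just described, assuming a small fixed number of priority classes with one FIFO list per class; the class count and naming are illustrative:

    class PriorityQueues:
        """One FIFO queue per priority class; always serve the highest class."""

        def __init__(self, num_classes=4):
            # Class 0 is the highest priority in this sketch.
            self.queues = [[] for _ in range(num_classes)]

        def enqueue(self, packet, priority):
            # The priority would normally come from the packet's ToS marking.
            self.queues[priority].append(packet)

        def dequeue(self):
            # Scan from the highest-priority class down; a low-priority packet
            # is only sent when every higher-priority queue is empty.
            for queue in self.queues:
                if queue:
                    return queue.pop(0)
            return None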
Fair Queuing (FQ) solves the main problem of FIFO queuing: it can discriminate between different traffic sources
Idea:
maintain a separate queue for each flow currently being handled by the router and service these queues in a round-robin manner
[Figure: Fair queuing. Separate queues for Flow 1 through Flow 4 are serviced in round-robin order]
If Fi denotes the time when the router finishes transmitting packet i (its timestamp), then the next packet to transmit is always the one with the lowest timestamp

Two important things about FQ:
The link is never left idle as long as there is at least one packet in a queue; FQ is work-conserving
If the link is fully loaded and there are n flows sending data, no flow can use more than 1/n of the link bandwidth
Example of fair queuing in action:

[Figure: Example of fair queuing in action. (a) The output sends packets in order of increasing finishing time: F = 5 and F = 8 from Flow 1, then F = 10 from Flow 2. (b) A Flow 2 packet with F = 10 is already transmitting when a Flow 1 packet with F = 2 arrives; transmission is not preempted, so the Flow 2 packet finishes first]
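A minimal sketch of the timestamp bookkeeping described above, assuming the commonly used simplified rule Fi = max(F(i-1), Ai) + Pi, where Ai is the packet's arrival time and Pi its transmission time; the rule and helper names are stated here as assumptions rather than taken from the slides:

    # Per-flow finishing times; the head packet with the lowest F is sent next.

    def finish_time(prev_finish, arrival, transmit_time):
        """Finishing timestamp of the next packet on one flow."""
        return max(prev_finish, arrival) + transmit_time

    def next_to_transmit(head_timestamps):
        """Pick the flow whose head-of-queue packet has the lowest timestamp."""
        waiting = {flow: f for flow, f in head_timestamps.items() if f is not None}
        return min(waiting, key=waiting.get) if waiting else None

    print(finish_time(prev_finish=5, arrival=7, transmit_time=3))    # -> 10
    print(next_to_transmit({"flow1": 8, "flow2": 5, "flow3": None})) # -> flow2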
A variation of FQ is Weighted Fair Queuing (WFQ)

Idea:
assign a weight to each flow

The weight logically specifies how many bits to transmit each time the router services that queue, which effectively controls the percentage of the link's bandwidth that flow receives
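A minimal sketch of this weighted service, using a weighted round-robin approximation in which each flow may send up to weight * QUANTUM bytes per round; the quantum value and data layout are illustrative, and this is an approximation rather than the exact WFQ algorithm:

    # Weighted round robin: each flow may send up to weight * QUANTUM bytes
    # per round, so long-run bandwidth shares track the assigned weights.

    QUANTUM = 1_500  # bytes per unit of weight per round (illustrative)

    def serve_one_round(flows):
        """flows: dict name -> {"weight": int, "queue": [packet sizes in bytes]}."""
        sent = []
        for name, state in flows.items():
            budget = state["weight"] * QUANTUM
            while state["queue"] and state["queue"][0] <= budget:
                size = state["queue"].pop(0)
                budget -= size
                sent.append((name, size))
            # Any leftover budget is simply discarded in this simplified sketch.
        return sent

    flows = {
        "video": {"weight": 3, "queue": [1500] * 6},  # should get ~3x the share
        "web":   {"weight": 1, "queue": [1500] * 6},
    }
    print(serve_one_round(flows))  # video sends 3 packets, web sends 1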
