
IP QoS
Case Study

Clarence Filsfils

Agenda
TxQ
WFQ
VIP-SRAM management vs drop tuning
RED
IP/ATM
CAR
VIPLL

TxQ


TxQ vs Interface Queue

[Diagram: Cisco 75xx. IP packets reach the forwarder (Si/VIP). Interface congested? No: the packet goes straight to the Transmission Queue (TxQ), which is always FIFO. Yes: the packet is held in the Interface Queues, where FIFO/PQ/CQ/WFQ applies.]

Depth of this TxQ

If high, it affects Fancy Queueing:
the TxQ needs to be shortened if FQ is enabled
If low, the CPU is hit (more interrupts) and the line might not be 100% utilized
IOS tunes the TxQ based on the configured bandwidth
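
A minimal sketch, assuming a classic 7500 serial interface (IOS normally derives the TxQ limit from the configured bandwidth, so the explicit value below is purely illustrative):

interface Serial1/0
 bandwidth 64
 fair-queue          ! fancy queueing is what makes a short TxQ matter
 tx-queue-limit 3    ! keep the FIFO TxQ short so WFQ, not the TxQ, orders packets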


Effect of the Bandwidth

For Fancy Queueing ("ban" is the CLI abbreviation of "bandwidth"):
othello(config-if)#fair-queue
othello(config-if)#ban 64
othello#sh cont cbus
txacc 4800008A (value 3), txlimit 3
othello(config-if)#ban 2000
txacc 4800008A (value 6), txlimit 6

Do not forget to set the bandwidth accurately

Effect for the Traffic

[Diagram: packets scheduled by the WFQ system still sit in the FIFO TxQ before hitting the wire; an oversized TxQ adds extra delay after the WFQ system.]

VIP Architecture

7500 + Classic IP
sh controller cbus & tx-queue-limit

7500 + VIP
VIP dequeues very fast from MEMD's TxQ
Real TxQ is between VIP and PA
no conf/show command currently
Do not tweak tx-queue-limit (this could cause Rx buffering on the incoming VIP)

WFQ


Fair-Queueing Illustration

[Topology diagram: host .3 (behind King, 172.17.248.0/21) reaches Asimov over a 64k serial link (145.0.0.4/30). Behind Asimov sit two FTPD servers (.1 and .2) and a Pagent source on 30.0.0.0/8. Traffic: .3 pings .4; .3 runs "ftp put big-file" to .1 and to .2; the Pagent source at .1 sends ill-behaved traffic to .3, a black hole.]

The 4 different tests


We will run 4 tests:
A: FIFO, no ill-behaved traffic
B: WFQ, no ill-behaved traffic
C: WFQ, ill-behaved traffic
D: FIFO, ill-behaved traffic


Ping Round Trip Time

[Chart: ping RTT (ms, 0-6000) against session time (s, roughly 1-500). A (fifo): 0% drop. B (wfq): 0% drop, low and stable RTT. C (wfq+pagent): 0% drop. D (fifo+pagent): 72% drop, with very high RTT.]

Throughput for FTP sessions

Test              FTP1 rate  FTP2 rate  FTP1 time  FTP2 time
A (FIFO)          4.0 kBps   3.9 kBps   218 s      223 s
B (WFQ)           3.9 kBps   3.9 kBps   219 s      221 s
C (WFQ+pagent)    2.6 kBps   2.6 kBps   332 s      330 s
D (FIFO+pagent)   0.0 kBps   0.0 kBps   (timeout)  (timeout)

Conclusion (1)

A (fifo) vs B (wfq) ==> FAIRNESS
ping RTT is lower with WFQ and more stable (low-bandwidth interactive traffic is privileged over high-bandwidth elastic traffic)
with WFQ both FTP sessions get the same fair share (under fifo the bandwidth sharing might be much less fair!)

Conclusion (2)

B (wfq, no pagent) vs C (wfq, pagent) ==> ISOLATION
ping RTT is not affected AT ALL by the ill-behaved source
the bandwidth available for high-bandwidth sessions is FAIRLY divided between sessions:
2.6 = (3.9 + 3.9)/3!!

Conclusion (3)
C (wfq, pagent) vs D (fifo, pagent)
ping RTT is very high and a loss rate of 72%
is experienced with FIFO
the 2 FTP sessions time out due to
excessive loss
Under FIFO, the ill-behaved flow
STARVES the other conversations!!!


Per-Flow SubQueue

In theory, 1 SubQ per Flow
In practice, Flows are hashed into a fixed number of SubQs (16 up to 4096)

Birthday Game

fair-queue <cdt> <subq> <rsvp>
How many people in a room do you need to have a probability of a birthday collision higher than 50%? Only 23!
If <subq> is too low => hashing collision => back to fifo behavior for conversations hashing into the same queue: no fairness, high jitter
If too high, you lose memory in data structures (32 bytes per sub-queue)
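
A minimal sketch, assuming a serial interface; 256 dynamic sub-queues trade collision probability against the 32 bytes each sub-queue costs:

interface Serial0
 fair-queue 64 256 0   ! CDT 64, 256 dynamic sub-queues, no RSVP reservable queues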

Max-limit enforcement

[Diagram: example with CDT = 8 and max-limit = 14.]

Tuning for Burst Resilience

If the offered load constantly fills the link capacity, then you need to add bw or decrease the load (see before)
Otherwise, the drops come from the traffic burstiness. It is then necessary to increase the buffering capacity of WFQ:
increase CDT to allow for more buffering
Start with the defaults (CDT at 64, MAX-LIMIT at 1000)

Tuning for Burst Resilience - Too high I/O consumption

The CDT is not a hard limit on the WFQ queue length, hence the latter can grow as high as 700 packets although the CDT is 64!
CSCdi90521 introduced the max-limit parameter in order to strictly limit the length of the WFQ queue
If I/O memory gets depleted and/or the queue length needs to be limited, decrease the max-limit parameter
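
A minimal sketch of both knobs, assuming the aggregate max-limit is exposed through the interface output hold queue on this release:

interface Serial0
 fair-queue 128        ! raise CDT above the default 64 for more burst buffering
 hold-queue 600 out    ! lower the aggregate WFQ limit if I/O memory gets depleted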

DWFQ vs WFQ


DWFQ vs WFQ
Flow-based support
Class-based support
Performance Scalability
Behavior at low speed


Low speed

[Chart: delay (0-6000) against time (s, 1-199), illustrating DWFQ vs WFQ behavior at low speed.]

RED


Defeating TCP Sync.

http://adm.ebone.net/~smd/red-1.html

Simply RED

random-detect precedence 0 10 40 10
random-detect precedence 1 10 40 10
random-detect precedence 2 10 40 10
random-detect precedence 3 10 40 10
random-detect precedence 4 10 40 10
random-detect precedence 5 10 40 10
random-detect precedence 6 10 40 10
random-detect precedence 7 10 40 10

Set the same thresholds for all the precedences

VIP SRAM management vs Drop tuning

Max available VIP buffers

othello#sh int s1/0/0 fair/random
Serial1/0/0 queue size 0
pkts out 114, drops 102, nobuffer drops 0
WFQ: aggreg. queue limit 65, indiv. queue limit 32
max available buffers 65

Note: the maximum number of VIP buffers is a function of: the amount of SRAM on the VIP, the number of ports on the VIP, the port bandwidth and the port MTU. This is automatically computed by IOS.

CAR and TCP


CAR and TCP

[Diagram: a web server behind CE; CE connects to PE at 2Mbps; the contract is 1Mbps.]

rate-limit CR [bps] Normal_Burst [byte] Excess_Burst [byte]

PE has to allow enough burst in CAR!

CAR and TCP

[Chart: packet discard probability (%) against token bucket depth: 0% up to the Normal Burst, then rising to 100% at the Extended Burst.]

CAR and TCP

[Diagram: an ftp server behind CE; CE connects to PE at 2Mbps; the contract is 1Mbps.]

Test3: clocked at 1007616 bps, no CAR
Throughput: 965672 bps (96%)

Test4: clocked at 2015232 bps,
CAR: rate-limit output 1007616 8000 8000 conform-action transmit exceed-action drop
Throughput: 630160 bps (63%)

Test5: clocked at 2015232 bps,
CAR: rate-limit output 1000000 187500 375000 conform-action transmit exceed-action drop
Throughput: 976304 bps (97%)
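
Test5's command written out on an interface (a sketch, assuming the PE's egress serial interface); note the 187500-byte Normal_Burst is about 1.5 s worth of the contracted 1 Mbps, and the 375000-byte Excess_Burst about 3 s:

interface Serial0
 rate-limit output 1000000 187500 375000 conform-action transmit exceed-action drop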

Impact of many flows

Limitation of the test: only one flow
Intuitively, the impact of CAR on the per-flow TCP goodput should decrease when the number of flows increases:
the CAR drops should be spread between the active TCP flows instead of being concentrated on one flow.
Still, this RED-like dropping is needed when the traffic is TCP-dominant

GTS/FRTS

BE setting
BECN integration

Determining cir, Bc, Be

Customer site <--> ISP

Ex1: we want to sell a service in which a customer may use all of a T1 line for 30 seconds in a burst, but on a long-term average is limited to 64 kbps. This would be to restrict the amount of load the system can induce on the net outbound.
Configuration:
interface <his serial interface>
traffic-shape rate 64000 8000 46320000
interface <LAN interface>
traffic-shape rate 64000 8000 46320000
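
A quick sanity check on those numbers (GTS burst sizes are in bits): Bc = 8000 bits at 64000 bps gives Tc = 8000/64000 = 125 ms, and Be = 46,320,000 bits = 1,544,000 bps x 30 s, i.e. roughly a full T1 for a 30-second burst.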

An example of BECN integration

[Chart: "BECN Integration" — Inc, the number of tokens added to the token bucket every Tc (0-9000), plotted against time in units of Tc (1-25). Configuration: traffic-shape rate 64000 8000 8000 with traffic-shape adaptive 32000. BECNs received at Tc#1 and Tc#3 each step Inc down toward the adaptive floor. Hypothesis: no idle traffic.]

Frame Relay Adaptivity to Congestion (1)

Goal: you normally do not want to restrict a flow across a frame relay sub-interface which has been layered onto a single DLCI, but you want to throttle it back to CIR in the presence of BECN bits from the network. The access rate in the example is assumed to be 1544000 bps, and the CIR 64000 bps.
Configuration:
interface <relevant sub-interface>
traffic-shape rate 1544000
traffic-shape adaptive 64000

Frame Relay Adaptivity to Congestion (2)

Goal: same as above, but you also know that the other end's access speed is limited to 128kbps...
Configuration:
interface <relevant sub-interface>
traffic-shape rate 128000
traffic-shape adaptive 64000

IP/ATM QoS


Loss of Routing peers!

Neighbor lost
stuck in active

Problem1: the AIP drops

[Diagram: packets flow from MEMD through the AIP onto the PVCs (a FEIP feeds MEMD on the input side). One congested PVC makes the AIP drop packets — visible with "sh atm int atm" — even though the other PVCs see no congestion. (11.1)]

Problem1: why?

AIP-MEMD: no backpressure
why? Because the intf does not provide per-VC TxAcc, hence if the AIP stops reading from MEMD because one pvc is congested, this incurs drops on the other pvcs
Buffer resource = only the AIP's own!
On-board packet memory: 512 Kbytes
No per-VC management of AIP memory

Problem1: Effect

Instability of Routing neighbors!
No IP QoS possible!

Sol1: Opening the Dam!

[Diagram: same MEMD/AIP/FEIP picture, but the per-PVC shaping rate is opened up to SCR x 2 so the AIP drains MEMD faster.]

Sol1: effect

The ATM utilization increases!
AIP utilization
ATM trunk utilization
BPX utilization

Needs provisioning (problem2)!

Sol2: AIP-BPX at least OC3!

Upgrade the AIP-BPX link: E3 to OC3!


Make sure traffic does not stack up on the AIP
because of congestion of the AIP/BPX link

Allows future use of opening the dam


OC3-AIP: the solution? No!

What if the OC-3 AIP gets congested again? Opening the dam again?
Yes, but ATM trunks are finite!
We replace uncontrolled AIP drops by potential uncontrolled ATM drops

ATM drops better than AIP drops?


NO! much more harmful!
Congestion collapse
1 cell drop => 1 packet drop
TCP goodput collapses
Network-wide problem
Intf-drop => ATM network drop

Problem 3!

Sol3: The BXM isolates VCs

Per-VC queueing
If a trunk gets congested, the BXM card will
drop cells from the congesting VCs only!

Alleviates the network-wide issue


Prob4: ATM BPX configuration


AIP (VBR) vs BPX (ABR)
AIP SCR = 4Mbps
ATM MCR = 400kbps!
Fiddle factor: 10%!
Oversubscription: 1000%

Sol4: re-parameterize the ATM net

SCR = MCR
Fiddle = 100%
Meet the Contract
NO drop!
Low rtt!

So what did we do?

Sol1: by "opening the dam" we relieved the AIP's lack of shaping power
Sol2: AIP/BPX from E3 to OC3: avoided AIP/BPX link oversubscription
Sol3: BXM Per-VC Queueing: avoided network-wide ATM losses

Conclusion

Uncontrolled packet drops across all pvcs of an AIP have been postponed
Potential uncontrolled cell drops on congested pvcs have been introduced
Fear of ATM-IP congestion collapse

What is THE solution?

Per-VC ATM card
Per-VC backpressure between such an ATM card and its driving CPU
Per-VC intelligent queue management on the driving CPU
A lossless ATM network!

IPATMCOS + PA-A3

[Diagram: a VIP2 carries the IP functions — per-VC smart queueing — and hands packets to a PA-A3 OC3, which carries the ATM functions — per-VC shaping/ABR and segmentation. When a VC backs up, the PA-A3 signals "Stop this VC!" to the VIP2.]

PA-A3-OC3 function
Provides ATM services
Segmentation and re-assembly
Shaping for UBR, VBR pvcs
ABR

Per-VC buffering/shaping
Per-VC backpressure to VIP/NPE

VIP/NPE Function
If PA-A3 says STOP(VC101)
VIP/NPE creates a queue for VC101

Intelligent Queueing means


not dropping system packets (EIGRP, BGP)
Pak-Priority flag!
IPATMCOS feature!


Pak-Priority

Flag found in the internal packet encapsulation
If set, the router must do whatever is possible to not drop that packet
Example: EIGRP hellos

So the solution is
11.1CC/12.0 (IPATMCOS)
VIP2-50, 8Mbyte of SRAM
PA-A3
DWRED/DWFQ queue management


IP/ATM QoS
Test setup


Model: Topology and Flows

[Topology diagram: Gen1 and Gen2 feed R1; R1 reaches R2 and R3 across the ATM core; Count1 and Sink2 sit behind R3 on FE. The VCs carry: EIGRP-only, ISIS-only, and several EIGRP+ISIS+Traffic mixes. Each core VC is: PCR 16Mbps, SCR 8Mbps, MBS 512 cells.]

Test1: Constant load

[Table, partly garbled in extraction:
Intf:       AIP    PA-A3   AIP    PA-A3
Off. Load:  32M    32M     96M    96M
SCR:        8M     8M      8M     8M
EIGRP:      yoyo / 10 / Loss]

Test2: QoS delivery

Gen2 sends 4 * 8 Mbps
Gen1 sends 10kbps of prec-111 to count1
AIP: 16% drop
PA-A3+IPATMCOS: 0% drop

Test3: Burst Resilience

Gen2 sends 4 * 7.5 Mbps
Gen1 periodically sends 4 * 56 1250-byte pkts at 8Mbps (4 * 1000 cells)
AIP: EIGRP/ISIS neighbor loss during the bursts
PA-A3+IPATMCOS: no loss

Chasing the Drops

MEMD? WRED? VIP-SRAM? PA-A3?

7513-1-31#sh int atm 11/0/0
Output queue 0/40, 801870 drops; input queue 0/75, 0 drops
0 input errors, 0 CRC, 0 frame, 0 overrun, 1225 ignored, 0 abort
0 output errors, 0 collisions, 0 interface resets
0 output buffers copied, 0 interrupts, 0 failures

AIP on-board output drops

7513-1-31#sh atm int atm 11/0/0
... fast, 0 OUT fast, 113648 out drop

For the AIP, since 11.1, output drops are counted on the board itself.

PA-A3 monitoring

7513-1-31#sh atm vc 100
InPktDrops: 0, OutPktDrops: 0

InPktDrops = PA-A3 input drops; OutPktDrops = PA-A3 output drops.
If WRED is enabled, OutPktDrops should be 0!

VIP-DWRED Monitoring

sh queueing int atm 11/0/0.101
...drops 109254, nobuffer drops 0
WRED: ... max available buffers 446
Precedence 0: ...drops: 99591 random, 9663 threshold

If the "nobuffer drops" or "threshold" drop counters are <> 0, problem!

Virtual IP Leased Line (VIPLL) service with bandwidth guarantee

VIPLL

Strong interest in EMEA & Asia/Pacific
Stems from:
high traffic US --> non-US
high international bandwidth cost, borne by one end
Goals:
change the business model
reflect the cost of Intl bandwidth to the end-user based on individually contracted bandwidth
Means:
11.1CC IP QoS Features

Why IP QoS (vs TDM/ATM)

integrated technology vs overlay model
Work conserving: no waste
Inflatable bandwidth: each VIPLL user may have access to excess capacity on a best effort basis
No protocol overhead (compared to, say, ATM)

VIPLL Service Characteristics

Per-VIPLL-user guaranteed minimum bandwidth to/from the US
each VIPLL user may exceed it on a best effort basis
When a VIPLL user does not use his guaranteed bandwidth, it is reused by other users.
Each VIPLL user can simultaneously have non-limited bandwidth for traffic not coming from the US

VIPLL Reference Model

[Diagram: R-us (USA) reaches R-Eu (Europe) across the Service Provider's Administrative Domain backbone; the US Internet hangs off R-us. R-Eu feeds R-pe1 and R-pe2; R-pe1 serves R-c1 (ISP 1 domain, minimum N1 Mb/s), R-pe2 serves R-c2 (business customer, minimum N2 Mb/s). Non-US traffic is unlimited. Assumption: the provider manages the router in the US.]

D1: one COS per VIPLL

[Diagram and configuration sketch:
R-pe1 advertises ISP1 routes with community C1 in BGP; R-pe2 advertises Customer 2 routes with community C2 in BGP.
On R-us:
  Set Precedence to P1 for traffic to routes advertised with C1 (QPPB)
  Set Precedence to P2 for traffic to routes advertised with C2 (QPPB)
  on input interface: CAR setting all traffic's Precedence to 0
  on output interface:
    Activate per-Precedence DWFQ
    Set weight of Precedence P1 queue to N1 Mb/s
    Set weight of Precedence P2 queue to N2 Mb/s
    Set weight of Precedence 0 to rest of bandwidth]
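
A minimal QPPB sketch for the R-us side, with hypothetical names (community 100:1 standing in for C1, precedence 5 for P1, and an illustrative interface):

ip community-list 1 permit 100:1
!
route-map qppb permit 10
 match community 1
 set ip precedence 5
!
router bgp 65000
 table-map qppb                        ! tag matching routes as they enter the table
!
interface Serial1/0/0
 bgp-policy destination ip-prec-map    ! mark packets destined to those routes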

D1: one COS per VIPLL


Pros:
strict minimum bandwidth per VIPLL
access to excess bandwidth (proportional to
contracted bw)
no reordering

Cons:
very limited number of VIPLLs (say 5)
no best-effort access to excess


D1: BGP AS Classification

[Variation of D1: classify on the AS path instead of communities.
On R-us:
  Set Precedence to P1 for traffic to routes terminating/transiting in AS_1
  Set Precedence to P2 for traffic to routes terminating/transiting in AS_2
  on input interface: CAR setting all traffic's Precedence to 0
  on output interface: per-Precedence DWFQ with the same weights as before.
R-c1 (AS_1) and R-c2 (AS_2) peer via eBGP. Easy control when the ISP customer is a transit AS.]

D1: BGP ACL Classification

[Variation of D1: classify with BGP ACLs on the routes of Customer 1 and Customer 2. Same input CAR and output DWFQ configuration on R-us. For the case where VIPLL users don't have an AS.]

D2: single VIPLL COS + WFQ

[Configuration sketch on R-us:
on input interface:
  Rate-limit N1 Mb/s Customer1-Routes:
    if conforms set Precedence to 1, if exceeds set Precedence to 0
  Rate-limit N2 Mb/s Customer2-Routes:
    if conforms set Precedence to 1, if exceeds set Precedence to 0
  For all other traffic set Precedence to 0
on output interface:
  Activate per-Precedence DWFQ
  Set weight of Precedence 1 queue to SUM(Ni) Mb/s
  Set weight of Precedence 0 to rest of bandwidth
Suggested to also activate WRED (TCP sync, lower delay, ...).]
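
A minimal sketch of the D2 input marking, assuming a hypothetical ACL 101 for Customer1-Routes, N1 = 1 Mb/s, and an illustrative interface name:

interface Hssi0/0/0
 rate-limit input access-group 101 1000000 187500 375000 conform-action set-prec-transmit 1 exceed-action set-prec-transmit 0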

D2: single VIPLL COS + WFQ


Pros:
strict bandwidth guarantee to each VIPLL
access to excess bandwidth on best-effort basis
(on a fair basis across all VIPLLs and also on a fair basis with any
best effort customer)
nb of VIPLLs not limited by the nb of Precedences
Cons:
Possible reordering of traffic
(in and out go in separate queues)
nb of VIPLLs ultimately limited by the CAR performance on R-us
(depends on the nb and complexity of the ACLs and on the transatlantic link speed)


D3: single VIPLL COS + WRED

[Configuration sketch on R-us:
on input interface: the same CAR marking as D2 (conform -> Precedence 1, exceed -> Precedence 0).
on output interface:
  Activate per-Precedence DWRED
  Set RED parameters of Precedence 1 to discard only on extremely severe congestion
  Set RED parameters of Precedence 0 to discard very early on light congestion
Relies on WRED's efficiency (may need fine-tuning plus extra protection like a UDP rate-limit).]

D3: single VIPLL COS + WRED

Pros:
bandwidth guarantee to each VIPLL
access to excess bandwidth on a best-effort basis
nb of VIPLLs not limited by the nb of Precedences
no reordering of traffic
Cons:
fine-tuning and testing of WRED
nb of VIPLLs ultimately limited by the CAR performance on R-us (depends on the nb and complexity of the ACLs and on the transatlantic link speed)

D4: single VIPLL COS + BE rate-limit

[Configuration sketch on R-us:
on input interface: the same CAR marking as D2.
on output interface:
  Rate-limit [Total_Bandwidth - SUM(Ni)] for Precedence=0 traffic:
    if conforms transmit, if exceeds drop.]

D4: single VIPLL COS + BE rate-limit

Pros:
bandwidth guarantee to each VIPLL
access to excess bandwidth on a best-effort basis
nb of VIPLLs not limited by the nb of Precedences
no reordering of traffic
Cons:
unused VIPLL bandwidth wasted
nb of VIPLLs ultimately limited by the CAR performance on R-us (depends on the nb and complexity of the ACLs and on the trans-atlantic link speed)

D5: single VIPLL COS + CAR

[Configuration sketch:
R-pe1 advertises ISP1 routes with community C1 in BGP; R-pe2 advertises Customer 2 routes with community C2.
On R-us:
  Set Precedence to 1 for traffic to routes advertised with C1 or C2
  on input interface: CAR setting all traffic's Precedence to 0
  on output interface:
    Activate per-Precedence DWFQ
    Set weight of Precedence 1 queue to SUM(Ni)+margin Mb/s
    Set weight of Precedence 0 to rest of bandwidth
On R-pe1's output interface: rate-limit N1 Mb/s of Precedence=1 traffic; if conforms transmit, if exceeds discard.
On R-pe2's output interface: rate-limit N2 Mb/s of Precedence=1 traffic; if conforms transmit, if exceeds discard.]

D5: single VIPLL COS + CAR

Pros:
bandwidth guarantee to each VIPLL
nb of VIPLLs not limited by the nb of Precedences
no reordering of traffic
nb of VIPLLs not limited by CAR performance
Cons:
some expensive international bandwidth wasted (discards occurring after the intl link; relies on TCP adaptation)
no access to excess bandwidth

11.1(20+)CC features

The following design options take advantage of some DWFQ enhancements introduced in 11.1(20)CC:
drop probability with ToS-based WFQ
ToS-based WFQ supports 4 WFQ queues
ignores the MSB of Precedence for queue selection
the MSB can be used for drop-probability inside a WFQ queue
QoS-group-based WFQ
WFQ queues per QoS-group

D6: single VIPLL Q + ToS-WFQ

[Configuration sketch on R-us:
on input interface:
  Rate-limit N1 Mb/s Customer1-Routes:
    if conforms set Precedence to B:110, if exceeds set Precedence to B:010
  Rate-limit N2 Mb/s Customer2-Routes:
    if conforms set Precedence to B:110, if exceeds set Precedence to B:010
  For all other traffic set Precedence to B:000
on output interface:
  Activate ToS-based DWFQ
  set weight of Precedence B:x10 to SUM(Ni) Mb/s
  set weight of Precedence B:x00 to rest
  Activate WRED
  Set RED parameters of Precedence B:1xx to discard only on severe congestion
  Set RED parameters of Precedence B:0xx to discard early on light congestion]
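
A minimal sketch of the D6 output side, assuming the input marking above (hypothetical interface name and weight; the deck pairs ToS-based DWFQ with WRED, so both are shown):

interface Serial1/0/0
 fair-queue tos              ! ToS-based DWFQ: queue chosen on the 2 LSBs of precedence
 fair-queue tos 2 weight 20  ! class B:x10 (precedences 6 and 2) gets SUM(Ni), 20% here
 random-detect               ! WRED discriminates on the full precedence (B:1xx vs B:0xx)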

D6: single VIPLL Q + ToS WFQ

= Design 2 + drop-probability to avoid reordering
Pros:
strict bandwidth guarantee to each VIPLL
access to excess bandwidth on a best-effort basis (on a fair basis across all VIPLLs and also on a fair basis with any best effort customer)
nb of VIPLLs not limited by the nb of Precedences
no reordering
Cons:
nb of VIPLLs ultimately limited by the CAR performance on R-us (depends on the nb and complexity of the ACLs and on the trans-atlantic link speed)

Design 7: single VIPLL queue + ToS-based WFQ + QoS-group

[Configuration sketch:
R-pe1 advertises ISP1 routes with community C1 in BGP; R-pe2 advertises Customer 2 routes with community C2.
On R-us:
  Set qos-group to Q1 for traffic to routes advertised with C1
  Set qos-group to Q2 for traffic to routes advertised with C2
  Set qos-group to Q0 for traffic to other routes
  on input interface:
    Rate-limit N1 Mb/s of qos-group=Q1 traffic:
      if conforms set Precedence to B:110, if exceeds set Precedence to B:010
    Rate-limit N2 Mb/s of qos-group=Q2 traffic:
      if conforms set Precedence to B:110, if exceeds set Precedence to B:010
    For all other traffic set Precedence to B:000
  on output interface:
    Activate per-Precedence DWFQ
    set weight of Precedence B:x10 to SUM(Ni) Mb/s
    set weight of Precedence B:x00 to rest
    Activate WRED
    Set RED parameters of Precedence B:1xx to discard only on severe congestion
    Set RED parameters of Precedence B:0xx to discard early on light congestion]

Design 7: single VIPLL queue + ToS-based WFQ + QoS-group

= Design 6 + use of QoS-group to enhance CAR performance
Pros:
strict bandwidth guarantee to each VIPLL
access to excess bandwidth on a best-effort basis (on a fair basis across all VIPLLs and also on a fair basis with any best effort customer)
nb of VIPLLs not limited by the nb of Precedences
no reordering
Cons:
nb of VIPLLs ultimately limited by the CAR performance on R-us (depends on the nb and complexity of the ACLs and on the transatlantic link speed)

D8: QoS-group WFQ

[Configuration sketch:
R-pe1 advertises ISP1 routes with community C1 in BGP; R-pe2 advertises Customer 2 routes with community C2.
On R-us:
  Set qos-group to Q1 for traffic to routes advertised with C1
  Set qos-group to Q2 for traffic to routes advertised with C2
  Set qos-group to Q0 for traffic to other routes
  on output interface:
    Activate per-qos-group DWFQ
    Set weight of qos-group Q1 queue to N1 Mb/s
    Set weight of qos-group Q2 queue to N2 Mb/s
    Set weight of qos-group Q0 to rest of bandwidth]
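
A minimal sketch of D8, assuming QPPB can set the QoS group via the BGP table-map on this release (hypothetical names: Q1 = 1, Q2 = 2, community-lists 1 and 2 standing in for C1 and C2):

route-map qppb-qos permit 10
 match community 1
 set ip qos-group 1
route-map qppb-qos permit 20
 match community 2
 set ip qos-group 2
!
router bgp 65000
 table-map qppb-qos
!
interface Serial1/0/0
 bgp-policy destination ip-qos-map   ! classify packets by their destination route
 fair-queue qos-group
 fair-queue qos-group 1 weight 10    ! N1 expressed as a % of the link
 fair-queue qos-group 2 weight 20    ! N2 expressed as a % of the link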

D8: QoS-group WFQ


Possible Variations on BGP criteria:
BGP Communities
AS Path
BGP ACLs


D8: QoS-group WFQ

Pros:
strict bandwidth guarantee to each VIPLL
access to unused bandwidth proportional to contracted bandwidth
nb of VIPLLs not limited by the nb of Precedences
nb of VIPLLs not limited by CAR performance
no reordering
Cons:
nb of VIPLLs limited by the nb/performance/granularity of QoS-group WFQ (several tens)
no access to excess bandwidth (unless Best-Effort traffic is not using its bandwidth)

Further Design Considerations


meaning of Precedence deviated
Overbooking
opportunity to overbook with higher nb of
VIPLLs

focused on single router/single link


Premium Internet Access

instead of, or in addition to, VIPLL
Also assumes that the main bottleneck is the Intl link in the direction from the US
the customer doesn't contract for individual guaranteed bandwidth
the customer contracts for a better Internet access (i.e. an Internet access with lower oversubscription)
the customer gets better performance (throughput of a compliant TCP flow, delay, ...)
the ISP gets better money for the extra resources

VIPLL + Premium Internet Access

[Configuration sketch on R-us:
on input interface:
  Rate-limit N1/N2 Mb/s per customer as in D6 (conform -> B:110, exceed -> B:010)
  Set Precedence to B:001 for traffic to routes advertised with Cp
  For all other traffic set Precedence to B:000
on output interface:
  Activate ToS-based DWFQ
  set weight of Precedence B:x10 to SUM(Ni) Mb/s
  set weight of Precedence B:x01 to Premium Mb/s
  set weight of Precedence B:x00 to rest
  Activate WRED (B:1xx discards only on severe congestion, B:0xx early on light congestion)
R-pePremium advertises the Customer Premium routes with community Cp in BGP and serves R-cPremium.]

Selective VIPLL
Enhancement to VIPLL
Allows important traffic of a customer
to use the VIPLL while unimportant
traffic uses best-effort service
Selection may have an impact on
CAR performance

Selective VIPLL

[Configuration sketch on R-us:
on input interface:
  Rate-limit N1 Mb/s Customer1-Routes:
    if conforms set Precedence to B:110, if exceeds set Precedence to B:010
  Rate-limit N2 Mb/s Customer2-Routes AND http:
    if conforms set Precedence to B:110, if exceeds set Precedence to B:010
  For all other traffic set Precedence to B:000
on output interface: ToS-based DWFQ and WRED as in D6.]
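
A minimal sketch of the "Customer2-Routes AND http" selector, assuming a hypothetical extended ACL 103 combining an example Customer 2 prefix with TCP port 80, N2 = 1 Mb/s, and an illustrative interface name:

access-list 103 permit tcp any 192.0.2.0 0.0.0.255 eq www
!
interface Hssi0/0/0
 rate-limit input access-group 103 1000000 187500 375000 conform-action set-prec-transmit 6 exceed-action set-prec-transmit 2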

VIPLL Design Conclusions

Strong case in Europe & Asia for a Virtual IP Leased Line service with bandwidth guarantee for downstream ISPs, enterprise and academic customers
Can be combined with Premium Internet Access
Change in the ISP business model
Multiple design options identified to efficiently support this service
Fully enabled by IP QoS features
Design options have pros & cons; the best one depends on the environment