
AT&T Network-Based

Class of Service Features


Customer Router Configuration
Guide
Release 3.1
July 2012

NDC Release 3.1

© 2012 AT&T Knowledge Ventures. All rights reserved.


AT&T is a registered trademark of AT&T Knowledge Ventures.

Technical Assistance
This is an AT&T proprietary document developed for use by AT&T customers. For additional
technical assistance contact your AT&T sales team. (This document was prepared by AT&T Solution
Center Network Design and Consulting Division.)

Legal Disclaimer
This document does not constitute a contract between AT&T and a customer and may be withdrawn or
changed by AT&T at any time without notice. Any contractual relationship between AT&T and a
customer is contingent upon AT&T and a customer entering into a written agreement signed by
authorized representatives of both parties and which sets forth the applicable prices, terms and
conditions relating to specified AT&T products and services, and/or, to the extent required by law,
AT&T filing a tariff with federal and/or state regulatory agencies and such tariff becoming effective.
Such contract and/or tariff, as applicable, will be the sole agreement between the parties and will
supersede all prior agreements, proposals, representations, statements or understandings, whether
written or oral, between the parties relating to the subject matter of such contract and/or tariff.

NDC Release 3.1

2012 AT&T Knowledge Ventures. All rights reserved.


AT&T is a registered trademark of AT&T Knowledge Ventures.

Table of Contents
Changes from Release 3.0 to 3.1
1. Introduction
2. COS Systems Approach
   2.1 Ordering COS in the Service (for the PE)
   2.2 CPE Marking and Queuing
   2.3 Network Based Class of Service
      2.3.1 PE Ingress Treatment
      2.3.2 Core Treatment
      2.3.3 PE Egress Treatment
   2.4 Network Capacity
3. Configuring Class of Service
   3.1 Identify Traffic Types
   3.2 Create a Policy
      3.2.1 Priority Command for COS1 Real-Time Class
      3.2.2 Setting the DSCP
   3.3 Configuring the Interface and Applying the Policy
   3.4 Verify the Policy
4. Configuring Interfaces with COS
   4.1 PPP Interfaces
      4.1.1 Sub-Rate PPP Interfaces
      4.1.2 MLPPP Interfaces (for Multiple T1s or E1s)
   4.2 Frame Encapsulation Interfaces (Unilink)
   4.3 Ethernet Interfaces
      4.3.1 Target Rate
      4.3.2 Committed Burst (Bc)
      4.3.3 Excess Burst (Be)
      4.3.4 802.1Q Encapsulation with Multiple VLANs (Unilink)
   4.4 Frame Relay Interfaces
      4.4.1 Sub-rate Frame Relay Interfaces
      4.4.2 Frame Relay Interfaces with Multiple Logical Channels (Unilink)
   4.5 ATM Interfaces
      4.5.1 Sub-rate ATM Interfaces
      4.5.2 ATM Interfaces with Unilink
      4.5.3 IMA Interfaces
5. Conclusion
Appendix A - AT&T COS Feature Description
   A.1 Class Markings
   A.2 Class Behaviors
   A.3 COS Profiles
   A.4 COS Packages
Appendix B - Mapping Applications to Classes
   B.1 Application Attributes
   B.2 Application Mapping Guidelines
Appendix C - Data Queue Scheduling Mechanisms - Tutorial
   C.1 First-In-First-Out (FIFO)
   C.2 Priority Queuing
   C.3 Flow-Based Weighted Fair Queuing (WFQ)
   C.4 Class-Based Weighted Fair Queuing (CBWFQ)
   C.5 Modified Deficit Round-Robin (MDRR)
Appendix D - Case Studies: Fragmentation/Interleaving/Header Compression
Appendix E - COS for Video
   E.1 Introduction
   E.2 Video Application Behavior
   E.3 Video Network Behavior
   E.4 Video COS Methodologies
      E.4.1 Video in COS1 without VoIP
      E.4.2 Video in COS1 with VoIP
      E.4.3 Video in COS2V
      E.4.4 Video in COS2
      E.4.5 Video in Other Classes
   E.5 Call Admission Control
References

Changes from Release 3.0 to 3.1


Below is a list of the more significant changes made since release 3.0.

- Sections 3.2.1.1 and 3.2.1.2 added to address handling COS with a lost T1 from the MLPPP bundle
- Removed references to Blue B profiles as they are no longer supported
- Updated some of the configuration and show command examples
- Moved content from some appendices to the main document
- Moved references to Unilink connections under their respective interface configuration discussions
- Added Appendix E, which addresses COS issues with Video
- Added references to the 4COS Model


1. Introduction
This document has been created to assist AT&T customers in understanding and using the network-based
Class of Service (COS) features of AT&T IP transport services (where the customer manages their own
CPE). The Class of Service feature allows multi-application enterprise networks to optimize response
times for time-critical interactive applications while maintaining high throughput for bulk data
applications. In addition, COS provides low latency queuing capabilities to minimize delays and variance
for real time applications such as voice and interactive video.
This guide does not reflect in any way what AT&T managed services does or does not support when
AT&T manages customer-premises routers. For information on AT&T managed router capabilities see
your sales representative.
Network-based COS capabilities have become increasingly important as businesses move to deploy
mission critical enterprise applications across public IP infrastructures and IP VPN services. These
applications have traditionally been deployed across private, point-to-point networks where application
differentiation could be accomplished entirely by premises-based edge equipment. With IP architectures,
many-to-one flows inside the carrier cloud create network bottlenecks that cannot be easily controlled by
edge-based policies alone. In addition, the emergence of Voice over IP and interactive video conferencing
is driving a requirement for tight control over queuing delay and variance. This has driven the need for
IP network services to participate in the enforcement of customer defined COS policies. AT&T COS
features have been designed specifically to meet these needs.
The primary functions of COS features are to differentiate traffic types competing for bandwidth in an
enterprise network and place them into different queues to partition bandwidth in a deterministic fashion.
COS is a powerful set of network design tools to aid in performance engineering for your network. Used
properly, these features often allow you to forgo bandwidth upgrades while maintaining the performance
of mission critical applications. However, COS is NOT a substitute for sufficient bandwidth. COS
features are intended to provide deterministic behavior during periods of network congestion. This
behavior represents a tradeoff, usually favoring time sensitive mission critical applications during
congestion over less time sensitive or less critical applications.
AT&T supports a common set of COS markings and behaviors across all IP services. This document
describes this common feature set and outlines best practices for using these capabilities.
The primary objective of this document is to provide guidance on the basic functions, configuration, and
operation of COS features. This document provides an overview of COS from a systems perspective,
describes the high level attributes of the network-based features, and provides basic guidance for
configuring customer equipment to operate properly as a part of the described system. This information is
believed to be applicable and sufficient for most enterprise network environments.
Several appendices are provided for those who want more in-depth information and/or more specialized
application of COS features. The following appendices provide more detailed descriptions:

- Appendix A: the operation of the network features
- Appendix B: mapping applications to classes
- Appendix C: the operation of COS queue scheduling mechanisms
- Appendix D: fragmentation, interleaving and header compression
- Appendix E: COS for video environments


The fundamental concepts contained in this document need to be applied to each customer's specific
environment. While basic environments and configurations are addressed, this document is not intended
to be exhaustive. If assistance is needed in more complex scenarios, contact your AT&T technical
resources.


2. COS Systems Approach


COS features are used in a Wide Area Network (WAN) to provide deterministic utilization of congested
links. For IP networks the congested links are typically at the edges of the network on the link between
the Provider Edge (PE) and the Customer Edge (CE) devices. Deterministic utilization is accomplished by
providing separate queuing paths for identified application flows along with a scheduling algorithm to
service the queues in a defined manner. The queue configuration and scheduling algorithm work together
to determine the network performance aspects of a particular application class.
COS features allow multi-application enterprise networks to optimize response times for time-critical
interactive applications while maintaining sufficient throughput for bulk data applications. These features
are one necessary aspect of an overall systems approach to application performance. Effective use of
these features requires looking at every potential bottleneck end-to-end, encompassing the customer
equipment (marking and queuing) as well as the network-based COS features. Figure 1 depicts bidirectional traffic flow across an IP network highlighting each of the COS components along their
respective paths.
[Figure 1 shows traffic flowing in both directions between customer sites: CE <-> PE <-> MPLS Core (LSPs) <-> PE <-> CE. In each direction the CE performs Marking and Queuing, the ingress PE performs Policing, and the egress PE performs Queuing. Legend: CE = Customer Edge Router, PE = Provider Edge Router, LSP = Label Switched Paths, MPLS Core = Multi-Protocol Label Switched Core.]

Figure 1 Network Based COS Functions

2.1 Ordering COS in the Service (for the PE)

To get the COS treatment that you want from the service, you order COS profiles with your ports. This
puts a policy map on your port on the PE in both the ingress and egress directions. Sections 2.3.1 and 2.3.3
address those PE behaviors. The profile choices are discussed in detail in Appendix A - AT&T COS
Feature Description.
Note that if you place an order to change COS on an existing port you should assume some packet loss or,
at worst, a hard down for a minute or two.


2.2 CPE Marking and Queuing

The Customer Premise Equipment (CPE) has two critical functions to perform: (1) identification and
marking of application traffic flows, and (2) differential queuing of traffic into the WAN network.
The marking of packets is done by setting specific Differentiated Services Code Point (DSCP) values
within the Type of Service (TOS) byte of the IP header. The DSCP field in the IP header consists of 6 bits
in the TOS byte; some CPE can only support marking the 3-bit IP precedence (IPP) field of the TOS byte
rather than the full 6-bit DSCP field. Note that there are multiple CPE devices that could accomplish the
DSCP marking (originating device, CE router, or some intermediate device 1). Each marking indicates the
type of treatment that a packet should receive from the network. The set of flows that share a common
marking and resulting treatment are referred to as a class. AT&T supports up to six user markings (or
classes) as outlined in Table 1.

Class    DSCP and IPP Marking    Behavior
COS1     EF, CS5, IPP5           Priority
COS2V    AF41, CS4, IPP4         Video Conferencing
COS2     AF31, CS3, IPP3         Bursty Data
COS3     AF21, CS2, IPP2         Bursty Data
COS4     Default (0), IPP0       Best Effort
COS5     AF11, CS1, IPP1         Scavenger

Table 1 AT&T Class of Service Markings


When COS is ordered for a given site, a COS Bandwidth Allocation Profile is selected for the site from
either the 4-COS model or the 6-COS model. Details of the various choices of COS Profiles that can be
used per site are provided in Appendix A. The COS Profile identifies which of the class markings are
recognized for the site, and how much of the available port bandwidth is guaranteed for the class during
congestion. In some cases, bandwidth allocation profiles are selected that do not utilize all six of the
available class markings.

- If a profile is selected that does not include COS2V or COS5, then traffic that has these markings will be treated as part of the COS4 class.
- If a profile is selected that does not have COS1, then traffic with this marking is treated as part of the next lower class.
- If a profile is selected that only recognizes a single class, then all traffic is treated as part of this class, regardless of its marking.

1 Common practice for packet marking is to administer markings at a single common networking device, typically
the CE Router. This prevents a rogue originating system/user from implementing markings that are inappropriate to
the overall COS policy and hence causing undesired effects for all users at that site. In some cases, it is desirable to
trust the marking from the originating system. In these cases it is preferable if the trusted systems can be isolated on
separate/trusted VLANs. Separate VLANs with trusted DSCP markings are a common construct for Voice over IP
(VoIP) deployments where DSCP is typically set at the originating device.

The CPE utilizes queuing and congestion management techniques to provide the desired application
differentiation. This type of differentiated treatment is needed anywhere in the network where congestion
may occur. The CPE queuing behavior specifically addresses the case where there is
contention/congestion for traffic being transmitted from the Local Area Network (LAN) into the WAN.
For these cases the CPE needs to make the intelligent decisions regarding allocating the available
bandwidth to the various application classes and, if necessary, discarding excess traffic.
Congestion occurs when the traffic demand on a particular link is greater than the capacity. Congestion
events may be sustained or transient. Sustained congestion will typically be observed in the utilization
metrics taken by typical network management systems. Transient congestion may last for just a few
seconds, or even less, and typically is not displayed in network management reports due to the lack of
granularity. COS policies are effective for both sustained and transient congestion.2
2.3 Network Based Class of Service

The AT&T network has three roles in the overall COS systems approach. At ingress to the network a
policing function is used to enforce the COS policy provisioned for the site. Across the MPLS core
several differentiated Label Switched Path (LSP) markings are used to differentiate traffic in the event of
core trunk congestion. And at network egress, the IP header markings are used to provide differentiated
egress queuing in the event of egress congestion at the site. Additional detail on these roles is provided
below.
2.3.1 PE Ingress Treatment

As traffic enters the network, the DSCP markings are inspected at the ingress PE. Priority treatment is
given to applications marked for COS1. When ordering COS for a site, the amount of COS1 bandwidth is
explicitly specified. If the amount of COS1 traffic exceeds this provisioned amount, the excess is
immediately discarded. This prevents excessive COS1 traffic from being carried across the core network
with low latency treatment. Likewise with the default behavior of COS2V, any traffic exceeding the
bandwidth allocation is discarded. The COS1 ingress bandwidth allocation is defined as a percentage of
the access port.
For AT&T MPLS-based services, like AVPN or PNT, the DSCP markings and the PE ingress policing
are used to determine the MPLS header's EXP setting for COS treatment across the core.
There are two out-of-contract concepts for classes that are not strictly policed (e.g., COS2, COS3, and COS5):
1. The first is when a customer marks a packet with a DSCP of AFx2 or AFx3. This will affect both
the core treatment (lower WRED threshold; see Section 2.3.2) and how the PE at the far end
treats the packets on egress (see section 2.3.3).
2. The second out-of-contract concept is when this ingress policing detects packets that exceed
their class bandwidth allocation: it marks the MPLS header's EXP bits with an out-of-contract
value that gets a lower WRED threshold treatment in the core. Note that AT&T does not change a
customer's DSCP, so this does not affect egress PE queuing (which only looks at DSCP).

2 During sustained congestion, COS can help assure that time sensitive interactive applications continue to perform
well. However, sustained congestion indicates an ongoing period where all of the available bandwidth is consumed.
This indicates that there are one or more applications that are bandwidth constrained. If these bulk data applications
are not meeting the required performance levels, COS tools will not be sufficient to provide improvement. In this
case, additional WAN bandwidth is warranted to improve the transfer times for bulk data.
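As an illustration of the first concept above (customer-controlled out-of-contract marking), a CE could police a data class and re-mark, rather than drop, the exceeding traffic. This is only a minimal sketch; the policy name, class name, and rate are illustrative assumptions and not AT&T-provided values:

policy-map MARK-OUT-OF-CONTRACT
 class COS2
  ! Up to 2 Mbps keeps the in-contract marking (AF31); excess is re-marked AF32 and still forwarded
  police 2000000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit af32

Traffic re-marked AF32 in this way receives the lower WRED threshold in the core and the out-of-contract egress treatment described in Section 2.3.3.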
2.3.2 Core Treatment

The EXP field is used across the core for two fundamental functions: it is used to provide
differentiation/protection among service offerings sharing the core, and it is used within VPN services to
provide protection among COS classes. The EXP field is a 3-bit field providing 7 potential markings for
LSPs. There are 4 queues supported across the core trunks: one dedicated to COS1 traffic (EXP-5); one
for VPN data traffic with two separate drop thresholds (WRED) for in- and out-of-contract traffic (EXP-4
and EXP-3); one for Best Effort/Internet traffic with 3 separate drop thresholds (EXP-2, EXP-1, EXP-0);
and, finally, a queue reserved for control plane traffic to operate the network (EXP-6).
2.3.3 PE Egress Treatment

As traffic leaves the network the DSCP markings are again inspected at the egress PE. This is the point
along the communication path where congestion is most likely to occur. Congestion occurs at network
egress when the total amount of traffic transmitted toward a customer site exceeds the bandwidth of the
connection to the site. As with all junctions along the communication path, when there is no congestion,
packets are simply forwarded toward the site as soon as they are received in a First-In, First-Out (FIFO)
manner (not queued). During these periods, COS has no impact at all on the behavior of the connection3.
As the rate of traffic increases beyond the port speed, the PE cannot forward all packets immediately so
congestion develops. The egress PE has two primary tools for managing congestion: queuing and
discards. Queuing is simply holding the traffic in a buffer, delaying the packet delivery, until it can be
sent. If the congestion condition persists, the amount of queued traffic and resulting queuing delay will
continue to increase. At some point further delay of packets is no longer reasonable, and the PE discards
packets.
When congestion occurs at an egress port, the PE begins to queue traffic into one of six class queues;
COS1, COS2V, COS2, COS3, COS4, or COS5 when the 6COS model is used and one of four class
queues; COS1, COS2, COS3, or COS4 when the 4COS model is used. Each time the transmitter is ready
to forward a subsequent packet, a packet is taken from one of the queues and transmitted out of the
interface. If there is no traffic arriving with that particular class marking, then that class queue will be
empty.
The delay for each traffic class will be different. The delay will be a function of how much traffic is
arriving in a given class and how often that class queue is chosen for transmission out of the interface (i.e.
how full the queue is, which is a function of Class Arrival Rate and Class Service Rate).
On egress, the class bandwidth allocations control the relative servicing rate out of the queue and on to
the egress port toward the customer site. But this allocation is only in effect when the port is congested
(the total data arrival rate is greater than the port capacity). Traffic in a class will get the full port rate if
the port is not congested. Each COS allocation profile defines the percentage of available bandwidth
during congestion for each COS class (see Appendix A for details about allocation profile choices). For
any class that is not consuming its full allocation, the excess bandwidth for the class becomes available
for the remaining queues in a ratio proportional to their allocation. Note that the COS1 and COS2V
classes are an exception. COS1 is policed to its bandwidth allocation and is always first to go. COS2V is
serviced next and traffic exceeding the allocation is discarded. COS2V can be ordered without the
policing behavior, while COS1 is always policed. Unused COS1 and COS2V bandwidth is still available
for consumption by the remaining data classes.
Packets marked with one of the out-of-contract DSCP settings (e.g., AFx2 or AFx3) will be subject to a
lower WRED threshold and thus a higher drop probability than in-contract marked traffic (AFx1). These
out-of-contract markings came from the ingress CE; AT&T does not change a customer's DSCP settings
(i.e., short pipe mode).

3 At network egress, COS1 traffic is always limited to its bandwidth allocation, even when congestion is not present.
Other traffic classes may consume any available bandwidth as long as the connection is not congested.
2.4 Network Capacity

Network capacity planning is the process of determining the appropriate link capacity to meet the
application needs of a specific enterprise. Proper capacity planning by the customer is necessary in
assuring that a WAN deployment can provide acceptable application performance. Once the customer
provisions appropriate link capacity, the COS tools provide a means to use and share the available
capacity in a deterministic fashion.
This document does not specifically address network capacity planning, where the customer chooses the
best link bandwidth. AT&T has Data Network Analysts (DNAs) available to assist in this step. DNAs
have extensive experience in network performance and application characteristics, as well as
sophisticated modeling and planning tools to aid in this process. Contact your AT&T account team to
engage a DNA.


3. Configuring Class of Service4


The sections that follow detail how to deploy COS features in the CE. The steps are as follows:
1. Identify traffic types using class-maps and ACLs (Access Control Lists).
2. Create a policy for queuing and marking traffic. 5
3. Apply the policy to the WAN interface.
4. Verify the policy is behaving as intended.
Examples provided in this guide use IOS 12.4(24)T2. The available features and command set
for IOS based COS features are constantly evolving and expanding. For example, more recent Cisco
IOS images are phasing in the Hierarchical Queueing Framework (HQF) to replace the Modular Quality
of Service Command Line Interface (MQC). Users of this guide are strongly encouraged to refer to
their CPE vendor documentation for more detailed configuration guidance.
Please note that these examples are for illustrative purposes only. The precise examples shown may not
be appropriate for your business needs. Moreover, readers should not infer that AT&T will or will not
follow these examples when AT&T manages customer premises routers.
AT&T's COS features recognize up to six separate classes for differentiating application types. You do
not need to use all six classes; many enterprise networks operate sufficiently with only 2 levels of
differentiation to support their application requirements. When developing a COS policy for the
enterprise, we recommend using the minimum number of classes that meet the identified need. Refer to
Appendix B for guidelines on mapping applications to the various classes. The network links can be
ordered with a full complement of COS classes on the PE even if there is no intent or need to use all of
the available markings. There is no loss of performance or throughput if a particular class marking in the
network is not used.
3.1 Identify Traffic Types

The first step in router configuration is to uniquely identify each traffic type in the CE. This is done using
class-maps and ACLs.
Some traffic you will identify via an ACL that looks for specific server IP addresses or TCP/IP port
numbers. Other traffic may already be marked with a trusted DSCP, so you simply match on the existing
DSCP.
Class-Map Examples:

4 This section provides instructions for configuring the Customer Edge Router (CER) for COS. The configuration
illustrations displayed throughout this section are focused on Cisco IOS implementations. AT&T does not require
the use of Cisco equipment but recognizes that the majority of current customer implementations are with Cisco
equipment. The Quality of Service feature set is constantly evolving and expanding. The guidelines outlined here
contain a subset of available features that have been proven to work well with AT&T Network-based COS features.
To gather the most recent information for the latest Cisco IOS releases please consult the Cisco documentation. This
is not intended to be an IOS primer but rather a starting point for what needs to be done to get COS implemented in
the CER.

5 Please refer to Appendix B - Mapping Applications to Classes - for additional guidelines and best practices for
mapping applications into the various queues.


Class-map match-any COS5                     !Non-business or scavenger traffic
 match access-group name COS5-Traffic        !See ACL examples below
!
class-map match-any COS3                     !Multi-second response time apps
 match access-group name COS3-Traffic        !See ACL examples below
!
class-map match-any COS2V                    !Video conferencing app
 match access-group name COS2V-Traffic       !See ACL examples below
!
class-map match-any COS2                     !Sub-second response time apps
 match access-group name COS2-Traffic        !See ACL examples below
 match ip dscp af31                          !Traffic pre-marked with DSCP AF31
!
class-map match-any COS1                     !VoIP
 match ip dscp ef                            !COS 1 for pre-marked real time traffic
!
class-map match-any COS5-Traffic             !Video streaming
 match ip dscp af11                          !COS 5 for pre-marked real time traffic

Class-maps are very flexible and can be much more comprehensive than depicted here. The examples
show two different flavors of the match sub-command for class-maps. Match can be used to recognize
a number of protocols directly; or match can refer to an ACL that defines the traffic to be recognized. It
can also be used to identify traffic based on packet size, existing DSCP marking, input interface, etc.
NOTE: Your platform/release may not support all of the protocol types shown in the examples. If a
protocol is not directly supported, then it can usually be matched using an ACL instead.
IMPORTANT: When using multiple match commands, be sure to set up the class using match-all or
match-any depending on whether you wish to recognize traffic based on all of the specified
conditions, or any one of the specified conditions.
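For instance, a minimal sketch contrasting the two keywords (the ACL name and DSCP value here are hypothetical and not part of the AT&T examples):

class-map match-any APP-EITHER               !Matches packets that meet EITHER condition below
 match access-group name APP-Traffic
 match ip dscp af21
!
class-map match-all APP-BOTH                 !Matches only packets that meet BOTH conditions below
 match access-group name APP-Traffic
 match ip dscp af21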
The name assigned to each of the class maps (COS5, COS3, COS2, COS2V, COS1) is used within a
queuing policy to determine the treatment for traffic types matching the class. When using these class-maps
in a queuing policy, there is one additional pre-defined class-map available named class-default.
This class matches all traffic that is not matched elsewhere in the queuing policy. This default class must
appear last in the list.
The COS2 and COS3 class-maps in the previous example each reference an ACL. Below are some simple
ACL examples to match the class-map definitions above.
ACL Examples:
ip access-list extended COS2V-Traffic        !Defined video conference traffic
 permit tcp any any range 3230 3231          !Range of TCP ports used by video 6
 permit udp any any range 3230 3235          !Range of UDP ports used by video
!
ip access-list extended COS2-Traffic         !Time critical applications
 permit tcp host 10.55.64.95 any             !Anything from the PBX (call signaling)
 permit tcp any eq bgp any                   !BGP traffic
 permit tcp any any eq bgp
!
ip access-list extended COS3-Traffic         !Time sensitive applications
 permit tcp host 10.55.64.20 eq www any      !Web enabled enterprise application (server source)
 permit tcp any host 10.55.64.20 eq www      !Web enabled enterprise application (server dest)
!
ip access-list extended COS5-Traffic         !Time sensitive applications
 permit tcp any any eq 554                   !TCP port used for streaming video

6 These port ranges are for example purposes. Video conferencing applications have a wide range of ports. You
must work with your vendor to isolate the specific ports used in your environment.

NOTE: These access lists are intended as examples. Customers should tailor them to meet the application
mix and performance requirements of their specific enterprise network.
Notice that the COS3 example includes two statements, to recognize traffic based on either source or
destination criteria. For some application types (e.g. telnet), this may be desirable since there is a good
chance that the client server relationship could be set up in either direction across the link. For other
traffic types, even though traffic will normally match in one direction only, you may find it helpful to
define traffic in both directions. Doing so allows you to use the same ACL on all routers in your network,
and also reduces the chance of error in defining the traffic in the wrong direction.
3.2 Create a Policy

There are multiple queuing disciplines available in IOS, some of which are described in Appendix C. In
this guide, we show the use of service policies implementing Class-Based Weighted Fair Queuing
(CBWFQ). The service policy will be used to perform the DSCP marking, and to provide advanced
queuing within the CE. More recent router IOS releases support different, but similar, queue scheduling
mechanisms, so read your vendor's documentation.
The queuing policy is configured using a policy-map. Within the policy map, there are several
commands associated with each traffic class recognized by the policy. The class command identifies a
traffic type that was previously defined in a class-map. Beneath each class command are instructions for
allocating bandwidth and setting the DSCP marking for traffic in the class. Note that there are actions
supported within the policy beyond the basics covered here.
The sample policy below illustrates a policy supporting 256 kbps of COS1 traffic for RT applications
such as bearer voice traffic and a bandwidth allocation across the data queues of 30%, 50%, 15%, 5%, 0%
for classes 2, 2V, 3, 4, and 5 respectively.
The policy also marks the appropriate DSCPs so that the PE within AT&T's network will recognize and
carry the COS policy across the network end-to-end.
The queue-limit command is used in this sample policy to increase the queue-limit for the classes
specified from the default of 64 packets to 600. Increasing the queue limit allows more packets to be
buffered rather than discarded. However, larger queues use more memory and result in greater packet
delay; therefore one should consider the tradeoff among packet loss, packet delay, and memory usage
when configuring a queue-limit per class.
Sample Policy:
policy-map COS
 class COS1
  priority 256                               !Allocate 256K for real time traffic - provides LLQ
  set ip dscp ef
 class COS2
  bandwidth remaining percent 30
  set ip dscp af31
  queue-limit 600 packets                    !see paragraph above
 class COS2V
  bandwidth remaining percent 50
  set ip dscp af41
  queue-limit 600 packets
 class COS3
  bandwidth remaining percent 15
  set ip dscp af21
  queue-limit 600 packets
 class COS5
  bandwidth remaining percent 1
  set ip dscp af11
  queue-limit 600 packets
 class class-default                         !Class-default is pre-defined in IOS; it matches any remaining traffic
  bandwidth remaining percent 4
  set ip dscp default
  queue-limit 600 packets

3.2.1 Priority Command for COS1 Real-Time Class

In the example above, the COS1 bandwidth is expressed directly in kbps, while the remaining classes
are specified using bandwidth remaining percent (alternative syntaxes are available). It is usually a good
idea to directly configure the amount of COS1 bandwidth, as this is an important consideration for
planning voice call volumes and/or similar metrics typical of COS1 applications. Conversely, using the
percentage notation for the remaining classes provides a policy that matches the network based policies,
and does not need to be changed even if the port speed is upgraded.
For multilink interfaces, the available bandwidth is dynamically calculated based on the number of active
T1s in the bundle. Therefore AT&T does not recommend that customers configure a 'bandwidth' statement
on multilink interfaces, because it will override the dynamic calculation. The COS1 real-time class is of
most concern since it is most affected (packets are dropped) by reduced bandwidth.
With the understanding that any traffic beyond the COS1 bandwidth allocation will be discarded, we need
to examine how COS1 behaves as the aggregate bandwidth is reduced due to lost T1s. The behavior is
different based on which priority statement you use in the policy-map: priority or priority percent. The
next two sub-sections describe the different behavior.
3.2.1.1 Priority kbps command
The COS1 behavior when the policy uses a priority 'kbps' command and no bandwidth statement on the
interface is as follows:

- Functioning BW: the bandwidth remaining after the T1 is lost (e.g., a 3xT1 connection is ~4.5 Mbps; the functioning bandwidth when a single T1 is lost is ~3.0 Mbps).
- If the COS1 BW < Functioning BW, then the router will service all of the COS1 traffic presented; however, the data traffic may suffer during times of peak COS1 load, where there is a chance the lower priority data queues get starved. (The extent of the starvation is dependent upon the difference between the COS1 BW allocation and the functioning BW.)


- If the COS1 BW > Functioning BW, then the router will service all traffic for all classes in a FIFO fashion just as if there is no COS policy applied at all. Furthermore, any DSCP markings that are manipulated by the service policy on the MLPPP interface will NOT occur during this scenario.

When using this type of configuration for an MLPPP environment, the marking policy should be done
separately on the ingress LAN interface(s) so that it continues to operate regardless of queuing behavior.
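A minimal sketch of such a separate marking-only policy, applied inbound on the LAN interface, is shown below. The LAN interface name is a hypothetical example; the class-maps are those defined in Section 3.1:

policy-map MARK-ONLY
 class COS1
  set ip dscp ef
 class COS2V
  set ip dscp af41
 class COS2
  set ip dscp af31
 class class-default
  set ip dscp default
!
interface GigabitEthernet0/1
 description LAN-facing interface (hypothetical)
 service-policy input MARK-ONLY              !Marking is performed on LAN ingress, independent of WAN queuing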
3.2.1.2 Priority percent command
The COS1 behavior when the policy uses a priority percent command and no bandwidth statement on the
interface is as follows:

- When a T1 fails, the COS1 traffic serviced is reduced based on the percent of the functioning link bandwidth. For example, if a 4.5 Mbps connection is used with a 3.6 Mbps COS1 allocation (80% of 4.5M) and the connection loses a T1, then the COS1 allocation is reduced to 2.4 Mbps (80% of 3.0M). So any COS1 traffic coming into that connection during the time of the outage that exceeds 2.4M will be dropped.

Note: The behavior exists in the PE to CE direction as well when the PE is a Cisco GSR. When the PE is
a Juniper then the COS1 bandwidth allocation is "hard configured" as a fixed "kbps" of the total bundle.
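For comparison, a minimal sketch of a policy using the percentage form of the priority command is shown below; the policy name and the 25% value are illustrative assumptions only:

policy-map COS-PERCENT
 class COS1
  priority percent 25                        !COS1 allocation scales with the functioning bundle bandwidth
  set ip dscp ef
 class COS2
  bandwidth remaining percent 30
  set ip dscp af31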
3.2.2 Setting the DSCP

There are several syntactical methods of setting DSCP using IOS commands. Table 3 shows the most
common set for using AT&T COS features.


IOS COMMAND                          Class                    Codepoint
Set ip dscp ef                       COS1                     101 110
Set ip dscp 46                       COS1                     101 110
Set ip prec 5, Set ip dscp cs5       COS1                     101 000
Set ip prec critical                 COS1                     101 000
Set ip dscp af41                     COS2V                    100 010
Set ip dscp 34                       COS2V                    100 010
Set ip prec 4, Set ip dscp cs4       COS2V                    100 000
Set ip prec flash override           COS2V                    100 000
Set ip dscp af31                     COS2                     011 010
Set ip dscp 26                       COS2                     011 010
Set ip prec 3, Set ip dscp cs3       COS2                     011 000
Set ip prec flash                    COS2                     011 000
Set ip dscp af21                     COS3                     010 010
Set ip dscp 18                       COS3                     010 010
Set ip prec 2, Set ip dscp cs2       COS3                     010 000
Set ip prec immediate                COS3                     010 000
Set ip dscp default                  COS4                     000 000
Set ip dscp 0                        COS4                     000 000
Set ip prec 0                        COS4                     000 000
Set ip prec routine                  COS4                     000 000
Set ip dscp af11                     COS5                     001 010
Set ip dscp 10                       COS5                     001 010
Set ip prec 1, Set ip dscp cs1       COS5                     001 000
Set ip prec priority                 COS5                     001 000
Set ip dscp af42                     COS2V out of contract    100 100
Set ip dscp 36                       COS2V out of contract    100 100
Set ip dscp af32                     COS2 out of contract     011 100
Set ip dscp 28                       COS2 out of contract     011 100
Set ip dscp af22                     COS3 out of contract     010 100
Set ip dscp 20                       COS3 out of contract     010 100
Set ip dscp af12                     COS5 out of contract     001 100
Set ip dscp 12                       COS5 out of contract     001 100

Table 3 Syntax for Setting Codepoints

(Preferred syntax in bold)
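For CPE that can only mark the 3-bit IP precedence field (see Section 2.2), the same classes can be marked with the precedence form of the command. A brief sketch, reusing the class-maps from Section 3.1 and assuming IPP-only hardware (the policy name is hypothetical):

policy-map COS-IPP
 class COS1
  priority 256
  set ip precedence 5                        !IPP 5 corresponds to COS1
 class COS2
  bandwidth remaining percent 30
  set ip precedence 3                        !IPP 3 corresponds to COS2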


3.3 Configuring the Interface and Applying the Policy

The next step is to apply the policy to the WAN interface. But before applying a service policy, there are
some other configurations required that are explained below.
(1) Ensure ip cef is enabled on the router. Some situations require cef be enabled for proper operation
of the policy.
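For reference, CEF is enabled in global configuration and can be confirmed from exec mode; these are standard IOS commands, and the output format varies by platform:

! Global configuration
ip cef
! Verification (exec mode)
show ip cef summary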
Then under the interface:
(2) Turn off Cisco Discovery Protocol (CDP) on the interface/sub-interface with the no cdp enable
command. The network PEs will not respond to CDP. This should be turned off on the interface to

eliminate unnecessary packets. For configurations that use sub-interfaces, CDP for each sub-interface
should be explicitly disabled.
(3) Ensure the proper bandwidth value is specified for the interface using the bandwidth statement.
This will influence the allocations defined in the service policy. The bandwidth statement is
sometimes used to manipulate routing metrics in the IGP. In these cases, care should be taken to
achieve both the desired COS behavior and the desired routing behavior. When warranted, consider
alternate means of influencing routing decisions, such as cost.
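A brief sketch of decoupling the two, assuming OSPF is the IGP (the cost value shown is an illustrative assumption):

interface Serial0/1
 bandwidth 1536                              !Reflects the true access rate so COS percentages are computed correctly
 ip ospf cost 200                            !Routing preference set explicitly, independent of the bandwidth value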
(4) The max-reserved-bandwidth 100 statement should be present in the interface configuration.
Without this statement, policies which allocate more than 75% of the interface bandwidth may not be
accepted.
(5) The tx-ring-limit command is used to specify the size of the buffer on the transmit interface. The
queue scheduling algorithm (e.g. CBWFQ) forwards packets to this interface buffer before they finally
go out the interface. For packet interfaces (Frame Relay, PPP, PoS), the tx-ring-limit specifies
the number of frames in the buffer. For ATM interfaces, the tx-ring limit may specify packets, or it
may specify a number of 576 byte particles (refer to the Cisco documentation for your hardware
interface). This buffer should be kept to the smallest practical value. 7
The transmit buffer assures that a packet is ready for the transmitter once the transmitter completes
sending the previous packet out the interface. If set too small, it is possible that the transmitter will be
starved and need to wait for the next packet to arrive from the queue scheduler before it can resume
sending. This is generally not an issue for interfaces at or below T1 speed. For these interfaces, use the
minimum configurable value (1 for Packet Interfaces, 2 or 3 for ATM Interfaces).
For higher speed interfaces, a good rule of thumb is to size the tx-ring for ~ 5ms of data at the
interface speed. The 5ms threshold is estimated by dividing the maximum amount of data in the
transmit ring by the speed of the interface. For packet-based transmit rings it is usually reasonable to
assume 1,500-byte packets. For particle-based transmit rings use 576 bytes per particle. Note that for
some interfaces, the default value may already represent less than 5ms. For these cases, the default
value should be kept.
Example calculations for 5ms transmit buffers:
tx-ring-limit = INT(link speed * 5 ms / RING-UNIT)
For packet interfaces, use 1,500 bytes for the RING UNIT
For particles interfaces, use 576 bytes for the RING UNIT
Example 1: T1 Frame Relay interface
tx-ring-limit = INT(1,536,000 * .005 / (1,500*8 bits/byte)) = INT (.64) = 1

7 The guidance provided here yields a conservative tx-ring-limit setting that provides reasonable performance.
Some more recent IOS/interface configurations now have default values that are actually smaller than these
recommendations and may provide better performance than these recommendations. Consult your Cisco
documentation for the most up to date information on your hardware/software combination.


Example 2: 4 x T1 IMA interface (using particles)


tx-ring-limit = INT(4* 1,536,000 * .005 / (576*8 bits/byte)) = INT (6.67) = 6
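As an additional, hypothetical illustration (not one of the original examples), a 10 Mbps packet interface works out as follows:
tx-ring-limit = INT(10,000,000 * .005 / (1,500*8 bits/byte)) = INT(4.17) = 4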
(6) Apply the policy to the interface. The commands to do this vary by interface type so reference
Section 4, Configuring Interfaces with COS, for the specific configuration examples for each
interface type. Those examples also include the commands described above (2 through 5).
3.4 Verify the Policy

It is very important to verify that the service policy has been applied to the interface as expected and is
properly marking packets. We have seen configurations that look correct only to discover that the router
was not applying them as they appeared. Use the show policy-map command to verify the policy.
Verify that the policy is applied to the interface and is marking traffic as expected. To truly verify the
policy is correctly applied, run the show policy-map command and note the packet and
byte counters, send specific traffic that should be marked into the particular queue, and then run the show
policy-map command again. Check the packet and byte counters again to determine whether the traffic
was marked and sent within the appropriate queue.
Router# show policy-map interface Multilink3000
Multilink3000
Service-policy output: COS
queue stats for all priority classes:
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: COS1 (match-any)
0 packets, 0 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: ip dscp ef (46)
0 packets, 0 bytes
30 second rate 0 bps
Match: ip dscp cs5 (40)
0 packets, 0 bytes
30 second rate 0 bps
Priority: 256 kbps, burst bytes 6400, b/w exceed drops: 0
QoS Set
dscp ef
Packets marked 0
Class-map: COS2 (match-any)
128 packets, 6400 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: access-group name COS2-Traffic
0 packets, 0 bytes
30 second rate 0 bps
Match: ip dscp af31 (26)
128 packets, 6400 bytes

30 second rate 0 bps


Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 128/6656
bandwidth remaining 30% (1305 kbps)
QoS Set
dscp af31
Packets marked 128
Class-map: COS2V (match-all)
1262 packets, 64400 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: ip dscp af41 (34)
Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1262/66924
bandwidth remaining 50% (2176 kbps)
QoS Set
dscp af41
Packets marked 1262
Class-map: COS3 (match-any)
0 packets, 0 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: access-group name COS3-Traffic
0 packets, 0 bytes
30 second rate 0 bps
Match: ip dscp af21 (18)
0 packets, 0 bytes
30 second rate 0 bps
Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth remaining 15% (652 kbps)
QoS Set
dscp af21
Packets marked 0
Class-map: COS5 (match-any)
98 packets, 4900 bytes
30 second offered rate 2000 bps, drop rate 0 bps
Match: ip dscp af11 (10)
98 packets, 4900 bytes
30 second rate 2000 bps
Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 98/5096
bandwidth remaining 1% (43 kbps)
QoS Set
dscp af11
Packets marked 98


Class-map: class-default (match-any)


86 packets, 12522 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 86/5439
bandwidth remaining 4% (174 kbps)
QoS Set
dscp default
Packets marked 86
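When comparing counters before and after a test, it can be convenient to narrow the display to a single class or to the drop counters. A brief example using standard IOS output filtering, with the interface and class names from the output above:

Router# show policy-map interface Multilink3000 output class COS1
Router# show policy-map interface Multilink3000 | include drops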


4. Configuring Interfaces with COS


This section shows how to apply the service policy to the different interface types:
1. PPP and MLPPP
2. Frame Encapsulation, Dedicated Access
3. Ethernet
4. Frame Relay
5. ATM, including IMA
4.1 PPP Interfaces

The COS policy for PPP interfaces is applied at the interface level. The configuration example below is
based on a T1 private line access into the MPLS service.
interface Serial0/1
bandwidth 1536
ip address 10.155.60.2 255.255.255.252
max-reserved-bandwidth 100
service-policy output COS
!Service-policy is configured on the main interface
encapsulation ppp
tx-ring-limit 1
no cdp enable

4.1.1 Sub-Rate PPP Interfaces

To accommodate sub-rate port speeds with PPP, the aggregate traffic rate is shaped to the port speed. This
is done with a traffic shaping command in the service policy. The structure of the interface configuration
remains the same except the new policy-map is referenced in the service-policy.
policy-map shape-25M-Port
 class class-default
  shape average 25000000                     !Shape to the sub-rate speed
  service-policy COS                         !Service-policy referencing COS policy
!
interface Serial0/                           !25M DS3 sub-rate port
 bandwidth 25000
 ip address 10.155.60.2 255.255.255.252
 max-reserved-bandwidth 100
 service-policy output shape-25M-Port        !Referencing the appropriate shaping policy
 encapsulation ppp
 tx-ring-limit 10
 no cdp enable

4.1.2 MLPPP Interfaces (for Multiple T1s or E1s)

Multilink PPP provides link aggregation for multiple T1s (between 2 and 8). MLPPP connections are
unique in that the connection bundles together between 2 and 8 T1s to create a larger bandwidth
connection. These connections will still be operational even if one or more T1s in the bundle fail (unless,
of course, it is the last one in the bundle that fails). Therefore it is prudent to plan how the COS policy
behaves when this scenario occurs. Section 3.2.1 addresses such a scenario and provides contingency
recommendations with regard to how the COS policy behaves.
Note there is another AT&T feature called MLPPP LFI (link fragmentation and interleaving) which is
used with a single fractional T1 interface (768 kbps and less) to break each packet into smaller fragments
to improve latency for real-time (e.g., voice over IP) applications. This capability is described in
Appendix D.
The following is an example of a 2xT1 MLPPP configuration illustrating the service-policy applied to the
multilink interface. Refer to the AT&T VPN Service Customer Router Configuration Guide for additional
details about multilink interface configurations.
interface Multilink3000
!The multilink group number for this example is 3000
ip address 10.70.254.217 255.255.255.252
encapsulation ppp
no peer neighbor-route
no cdp enable
ppp multilink
ppp chap hostname 10.70.254.217ATT
!Create a unique hostname
ppp multilink group 3000
ppp multilink fragment disable
service-policy output COS
!Apply the policy to the interface
!
interface Serial0/0/0
!First T1 in the multilink bundle
no ip address
encapsulation ppp
ppp chap hostname 10.70.254.217ATT
!Create a unique hostname
ppp multilink
ppp multilink group 3000
keepalive
interface Serial0/1/0
!Second T1 in the multilink bundle
no ip address
encapsulation ppp
ppp chap hostname 10.70.254.217ATT
!Create a unique hostname
ppp multilink
ppp multilink group 3000
keepalive

4.2 Frame Encapsulation Interfaces (Unilink)

This method is used for a feature called Unilink which allows the customer to connect to multiple
VPNs with one physical port. Each virtual circuit is bound to a different VPN. Up to twelve logical
channels are allowed as standard. Note that a virtual circuit cannot be used for AT&T Managed Internet
service.
Multiple VPN connections over private line access are typically provided using Frame Relay
encapsulation on the access link to provide L2 differentiation of the connections. The COS for Frame
Encapsulated ports is applied at the port level; a single COS policy is applied to all connections sharing the port.
COS Frame Encapsulation Example


interface Serial0/0
 bandwidth 44000
 max-reserved-bandwidth 100
 no ip address
 service-policy output COS                   !Apply policy to interface
 encapsulation frame-relay IETF
 tx-ring-limit 10
 frame-relay lmi-type cisco
!
interface Serial0/0.777 point-to-point       !Connection for VPN1
 ip address 10.55.254.125 255.255.255.252
 no cdp enable
 frame-relay interface-dlci 777
!
interface Serial0/0.888 point-to-point       !Connection for VPN2
 ip address 10.56.254.125 255.255.255.252
 no cdp enable
 frame-relay interface-dlci 888

4.3 Ethernet Interfaces

There are three speeds that can be associated with Ethernet access:
1. Physical interface speed (10baseT, 100baseT, etc.)
2. Port speed that is purchased. This can be the same as the physical interface speed or less; if less, this is a logical sub-rate speed.
3. VLAN sub-interface speed.
Ethernet access is typically delivered using an 802.1q VLAN interface over 10M, 100M, or 1000M physical access lines. Then, one can purchase a main Ethernet port speed at a committed rate or a sub-rate less than the speed of the physical line 8. The port can then have one or more VLAN sub-interfaces (one per VPN), each having a committed rate.
The Ethernet Service Provider (ESP), who provides the Ethernet access to MPLS, rate-limits and strictly polices the port speed and the VLAN sub-interface speeds. Conforming to this committed rate requires the CE to have a service-policy on the VLAN sub-interface to limit the maximum traffic rate to what was purchased (otherwise the ESP will drop packets that exceed that rate). If there is no VLAN and the port speed purchased is less than the physical interface speed, then the main Ethernet interface needs the service policy to shape the traffic to the purchased speed (limiting the rate with queuing as opposed to dropping).
Each VLAN must be strictly shaped to a maximum rate. The shaping is accomplished within the service
policy using a shape command. A class within the policy matches all traffic for the interface. The
actions for the class are to shape the traffic to the speed which was ordered for the connection. Then a
nested COS service policy is used to provide COS differentiation for the various traffic classes.

8
In the special case where the committed rate is the same as the speed of the physical access line, a shaping policy is not required. Since the physical and committed rates are equal, the potential to violate the access network commitment is absent, removing the potential for access network drops. In these cases a COS policy may be attached directly to the main Ethernet interface.


The service policy is a nested construct as follows:


policy-map ETHERNETSHAPING
class class-default
shape average <Target Rate> <Bc> <Be>
!See details below
queue-limit 2048
!See paragraph below
service-policy COS
!Nested COS service-policy
!
interface FastEthernet0/0
description ** 10M AVPN Ethernet **
no ip address
duplex full
speed 100
max-reserved-bandwidth 100
!
interface FastEthernet0/0.1139
encapsulation dot1Q 1139
ip address 10.64.42.253 255.255.255.252
no cdp enable
service-policy output ETHERNETSHAPING
!Shaping policy applied to the sub-interface

The top level policy has a single class, the class-default, and contains the shaping parameters for the connection. The queue-limit is added in this example to increase the queue limit from the default (which varies depending on the platform used; consult the equipment vendor's documentation). Remember that larger queue sizes will generally reduce the number of packet discards but, at the same time, increase packet delay and memory usage in the router. The designer will need to take this into account when configuring. This policy also calls out a nested policy which provides Class of Service (COS) differentiation within the shaped interface.
4.3.1 Target Rate

The target rate represents the speed that was purchased for the access circuit. The shaper implementation in IOS does not count all of the bits transmitted on the interface9; the shaping function counts the IP payload and only a portion of the packet overhead. The Ethernet Service Provider counts all the bits. Therefore, to accommodate this difference, the customer must configure the target rate in the CE to be less than the purchased rate.

9
In most Cisco IOS platforms, only a portion of the total bits is actually counted by the shaper. The target shaping
rate must be adjusted to account for the difference between the actual forwarded bits on the line and the portion
counted by the shaper. For each frame, there is a total of 42 bytes of protocol overhead. In IOS, the shaper counts
only 18 of the overhead bytes. This means that for every frame, an additional 24 bytes are being transmitted that are
not being counted as part of the target rate. Hence the actual transmit rate is greater than the rate specified by the
shaper. This is particularly significant when the predominant payloads are small, such as in a heavy Voice over IP
(VoIP) environment. In order to match the shaped rate more closely to the desired/contracted forwarding rate, the
shaper rate should be reduced. Since the payload in any environment is variable, there is no absolutely correct
amount of target rate reduction needed. Refer to the AT&T VPN Service Ethernet Access Customer Router
Configuration Guide for additional details.


This parameter is configured in bits per second.


For most environments:
Target Rate = 0.95 * Contracted Rate
For environments where VoIP represents more than 50% of overall traffic:
Target Rate = 0.80 * Contracted Rate
NOTE: The 95% and 80% target rate thresholds are rule of thumb levels. As average packet
size decreases, which is typical of high percentage VoIP environments, the target rate should be
further decreased to account for the larger amount of uncounted per packet overhead traffic.
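As a worked example, assuming a hypothetical 10 Mbps contracted rate:
Typical data environment: Target Rate = 0.95 * 10,000,000 = 9,500,000 bps
VoIP-heavy environment: Target Rate = 0.80 * 10,000,000 = 8,000,000 bps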
4.3.2 Committed Burst (Bc)

Bc represents the granularity at which the shaped rate is maintained. This parameter is configured in bits.
Use the minimum configurable value for this parameter:
Typically: 0.004 * Shaped Rate. (i.e. 4ms of data at the shaped rate)
This minimizes the size of transmit bursts and provides the best COS differentiation.
4.3.3 Excess Burst (Be)

Be provides an initial burst after an idle period and is configured in bits. Use the minimum configurable
value for this parameter, typically 0. This minimizes the size of transmit bursts and provides the best COS
differentiation.
Important note: The maximum burst achieved with this configuration has been observed to be approximately 2x Bc. After an idle period, a traffic burst can immediately consume a full Bc-sized burst. A subsequent allocation of Bc is then granted anywhere from 0 to Tc (4 ms in this recommendation) later; if this allocation is near 0, the initial burst approaches 2x Bc. This initial burst of 2x Bc was only observed at the minimum shaping interval of 4 ms. Longer shaping intervals were not tested and may or may not exhibit similar behavior.
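Continuing the hypothetical 10 Mbps example from section 4.3.1 (these numbers are illustrative, not AT&T-provided values): with a target rate of 9,500,000 bps, Bc = 0.004 * 9,500,000 = 38,000 bits and Be = 0, giving a shaping policy along the lines of the following sketch:

policy-map ETHERNETSHAPING
class class-default
shape average 9500000 38000 0
!Target rate, Bc, and Be computed per sections 4.3.1 through 4.3.3
queue-limit 2048
service-policy COS
!Nested COS service-policy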
4.3.4 802.1Q Encapsulation with Multiple VLANs (Unilink) 10

This method is used for a feature called Unilink, which allows the customer to connect to multiple VPNs with one physical port. Each logical channel (VLAN) is bound to a different VPN. Up to twelve logical channels are allowed as standard.
When using Ethernet to connect to multiple VPNs, 802.1q encapsulation is required. The customer purchases a speed for each VLAN. That speed is a maximum; there is no bursting above that speed. The PE and the Ethernet access service strictly shape each VLAN to the purchased rate, and the sum of the VLAN speeds must be less than the physical Ethernet access speed. Each VLAN gets a portion of the port bandwidth, and no VLAN can burst above its logical channel speed. Furthermore, each VLAN port on the PE is configured with its own COS policy, so the customer should use this same design on the CE outbound to the service. The CE needs a separate COS policy for each VLAN, as defined by the policy-map. The traffic on one VLAN cannot use the bandwidth of the other VLAN(s).

10
Unilink examples in this document are intended to address methods for applying COS policies in environments
with multiple WAN connections on a port. The examples are not addressing the mechanisms for keeping the traffic
in separate routing domains such as VRF Lite.


Refer to the AT&T VPN Service Ethernet Access Customer Router Configuration Guide for additional
details about Ethernet interface configurations.
interface GigabitEthernet0/0
!See Ethernet Access Configuration Guide for details
!
interface GigabitEthernet0/0.101
description **VPN1**
!VLAN for VPN1
encapsulation dot1Q 101
ip address 192.168.1.1 255.255.255.252
no cdp enable
service-policy output COS-VPN1
!COS policy for VLAN1
!
interface GigabitEthernet0/0.102
description **VPN2**
!VLAN for VPN2
encapsulation dot1Q 102
ip address 192.168.2.1 255.255.255.252
no cdp enable
service-policy output COS-VPN2
!COS policy for VLAN2
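The definitions of the COS-VPN1 and COS-VPN2 policies applied above are not shown. The following is a minimal sketch of how they might be built, assuming purchased VLAN rates of 5 Mbps and 3 Mbps (the rates, and the target rate/Bc/Be values derived from them using the guidelines in sections 4.3.1 through 4.3.3, are assumptions for illustration):

policy-map COS-VPN1
class class-default
shape average 4750000 19000 0
!Shape VLAN 101 to 95% of its assumed 5 Mbps purchased rate
service-policy COS
!Nested COS policy for VPN1
!
policy-map COS-VPN2
class class-default
shape average 2850000 11400 0
!Shape VLAN 102 to 95% of its assumed 3 Mbps purchased rate
service-policy COS
!Nested COS policy for VPN2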

4.4 Frame Relay Interfaces

For Frame Relay interfaces, the customer applies the policy to the main interface if there is only one logical channel (sub-interface).
Frame Relay Example:
interface Serial0/0
bandwidth 1536
max-reserved-bandwidth 100
no ip address
service-policy output COS
!Apply policy to interface
encapsulation frame-relay IETF
tx-ring-limit 1
frame-relay lmi-type cisco
!
interface Serial0/0.777 point-to-point
ip address 10.55.254.125 255.255.255.252
no cdp enable
frame-relay interface-dlci 777

4.4.1 Sub-rate Frame Relay Interfaces

The customer can purchase a port speed at a rate less than the DS3 physical interface speed. This is referred to as a sub-rate port.
For sub-rate ports with a single logical channel (single VPN), the rate is controlled within the service
policy on the main interface.


Sub-Rate Port Frame Relay Example:


policy-map shape-20M-Port
class class-default
shape average 20000000
!Shape connection to 20Mbps
service-policy COS
!Nested COS service-policy

interface Serial0/0
!20M subrate DS3 Port
bandwidth 20000
max-reserved-bandwidth 100
no ip address
service-policy output shape-20M-Port !Apply policy to interface
encapsulation frame-relay IETF
tx-ring-limit 10
frame-relay lmi-type cisco
!
interface Serial0/0.777 point-to-point
ip address 10.55.254.125 255.255.255.252
no cdp enable
frame-relay interface-dlci 777

For sub-rate ports with Unilink (i.e. multiple logical channels), the rate is controlled using frame relay
traffic shaping on the individual PVCs. The sum of the shaped rates should not exceed the sub-rate port
speed. This configuration is shown in section 4.4.2.
4.4.2 Frame Relay Interfaces with Multiple Logical Channels (Unilink)

Unilink is the case where there are multiple PVCs sharing the port. Unilink configurations may be used
for connections to multiple VPNs, or for a mix of VPN connections and Point to Point Layer 2 PVCs.
When using Unilink with Frame Relay interfaces, each PVC is traffic shaped so the sum of the PVC
speeds is less than the port speed.
With traffic shaping, the queuing occurs on the sub-interface rather than the main interface. In these
cases, the policy is included in the traffic shaping parameters applied to the sub-interface. Note that the
actual shaping rate (cir) used in the configuration of traffic shaping should be slightly below (2%-3%) the
actual desired rate; this assures that queuing remains at the sub-interface rather than the physical interface.
Note that Bc and Be have the same behavior as the Ethernet shaping parameters described in sections 4.3.2 and 4.3.3; follow the same guidelines.
Frame Relay Unilink Example 11:
interface Serial0/0
bandwidth 1536
max-reserved-bandwidth 100
no ip address
encapsulation frame-relay IETF
tx-ring-limit 1
frame-relay lmi-type Cisco
11
Unilink examples in this document are intended to address methods for applying COS policies in environments
with multiple WAN connections on a port. The examples are not addressing the mechanisms for keeping the traffic
in separate routing domains such as VRF Lite (Contact the AT&T account team for information about VRF Lite).


interface Serial0/0.777 point-to-point
ip address 10.55.254.125 255.255.255.252
bandwidth 768
no cdp enable
frame-relay class shape768Unilink
!Shape to half the T1 bandwidth which includes the COS policy
frame-relay interface-dlci 777
!
interface Serial0/0.890 point-to-point
ip address 10.55.254.129 255.255.255.252
bandwidth 768
no cdp enable
frame-relay class shape768Unilink
!Shape to half the T1 bandwidth which includes the COS policy
frame-relay interface-dlci 890
!
map-class frame-relay shape768Unilink
frame-relay cir 750000
!Shape to 2-3% less than port
frame-relay bc 7500
frame-relay be 0
service-policy output COS
!Apply policy to shaped sub-interface in this nested arrangement

Note that separate map-class sections could be used for each sub-interface to provide different shaping
rates or different COS policies for each PVC. This example uses the same shaping rate for both PVCs.
4.5 ATM Interfaces

ATM policies are applied at the virtual connection (VC) or sub-interface level. In order to provide
appropriate queuing at the VC level, use VBR-NRT VCs. The rate of the VC should normally be the
same as the port speed (except for Unilink environments).
4.5.1 Sub-rate ATM Interfaces

The configuration example below is based on a 20M sub-rate bandwidth on a DS3 ATM port. The COS policy is applied to the sub-interface. Notice that the vbr-nrt command is used to shape the connection to the sub-rate bandwidth, which in this example is 20M. This configuration is identical to the full-rate interface except that the vbr-nrt rate is set lower. Refer to the Customer Router Configuration Guide for more details about sub-rate interface configurations.

interface ATM1/0
description ** AT&T DS3 ATM port: DNEC.xxxxxx **
no ip address
atm scrambling cell-payload
no atm ilmi-keepalive
!
interface ATM1/0.777 point-to-point
description ** To AT&T PE VPI/VCI 1/777 **
ip address 192.168.10.1 255.255.255.252
oam-pvc manage 0
encapsulation aal5snap


pvc att-per 1/777
vbr-nrt 20000 20000 1
!Shape the sub-interface to the sub-rate BW = 20M
service-policy output COS
!Apply the COS policy

4.5.2 ATM Interfaces with Unilink

For Unilink ATM interfaces, simply configure the additional PVCs, each with an appropriate VBR-NRT
rate and COS service policy applied.
interface ATM1/0
description ** AT&T DS3 ATM port: DNEC.xxxxxx **
no ip address
atm scrambling cell-payload
no atm ilmi-keepalive
!
interface ATM1/0.777 point-to-point
description ** To AT&T PE VPI/VCI 1/777 **
ip address 192.168.10.1 255.255.255.252
oam-pvc manage 0
encapsulation aal5snap
pvc att-per 1/777
vbr-nrt 20000 20000 1
!Shape the sub-interface to the sub-rate BW = 20M
service-policy output COS
!Apply the COS policy
!
interface ATM1/0.888 point-to-point
description ** To AT&T PE VPI/VCI 1/888 **
ip address 192.168.10.5 255.255.255.252
oam-pvc manage 0
encapsulation aal5snap
pvc att-per 1/888
vbr-nrt 20000 20000 1
!Shape the sub-interface to the sub-rate BW = 20M
service-policy output COS
!Apply the COS policy

Note that each PVC can have unique COS policies applied. This example uses the same COS policy for
both PVCs.
4.5.3 IMA Interfaces

Inverse Multiplexing for ATM (IMA) is the concept of bonding multiple T1/E1 circuits together to
provide an aggregated larger circuit. The configuration example below is based on an IMA port with a
2xT1 implementation that provides an aggregate connection bandwidth of 3072 kbps into the MPLS core.
The COS policy is applied to the IMA sub-interface. Refer to the Customer Router Configuration Guide
for more details about IMA interface configurations.

interface ATM2/0
description ** AT&T T1, DHEC.xxxxxx ** !1st T1 in IMA-group 3
no ip address
ima-group 3
interface ATM2/1
description ** AT&T T1, DHEC.xxxxxx ** !2nd T1 in IMA-group 3

no ip address
ima-group 3
interface ATM2/IMA3
no ip address
no atm ilmi-keepalive
ima active-links-minimum 2
!
interface ATM2/IMA3.800 point-to-point
description ** ePVC to ATT PE **
bandwidth 3072
ip address 10.51.254.125 255.255.255.252
no cdp enable
pvc 1/800
vbr-nrt 3072 3072 1
!Shape to 3072kbps
tx-ring-limit 3
max-reserved-bandwidth 100
oam-pvc manage 0
service-policy output COS
!Policy applied to the sub-interface


5. Conclusion
The Class of Service feature set is a powerful tool for assuring application performance and making the
most efficient use of available bandwidth. This guide has covered the basic capabilities and application of
these features. Refer to the Appendices for configuration examples. For more information and assistance
in deploying Class of Service, contact your AT&T account team to engage the support of an AT&T Data
Network Analyst (DNA).


Appendix A - AT&T COS Feature Description


This section describes the operation of the COS features within AT&T's IP networks. It provides the
appropriate DSCP markings for each class, and describes the treatment of these classes across the service.
It also describes the available profiles for allocating bandwidth across the classes on a specific port.
A customer can order either the 4COS model or the 6COS model. The 4COS model has COS1, 2, 3 and
4. COS2V and COS5 are only part of the 6COS model.
A.1 Class Markings

Customers may mark traffic for any of six specific behaviors in the network. These behaviors are referred
to as classes. We will describe the class markings using DSCP nomenclature. The markings below are
for all 6 classes. The 4COS model is just a subset of these.

COS1 - This class is indicated with DSCP Expedited Forwarding (EF) and is intended for real time applications such as interactive voice or interactive video.

COS2V - This class is indicated with DSCP Assured Forwarding 41 (AF41) and is intended for delay sensitive applications that exhibit known or desired bandwidth restrictions.

COS2 - This class is indicated with DSCP Assured Forwarding 31 (AF31) and is intended for time sensitive, mission critical, low bandwidth, bursty data applications.

COS3 - This class is indicated with DSCP Assured Forwarding 21 (AF21) and is intended for time sensitive, mission critical, bursty data applications.

COS4 - This class is indicated with DSCP default (default). It is also referred to as the best effort class and is intended for all bulk data applications and non-time critical applications.

COS5 - This class is indicated with DSCP Assured Forwarding 11 (AF11). It is also referred to as the scavenger class and is intended for applications that do not support the business.

The Diff-Serv field in the IP header consists of 6 bits in the IP Type of Service (ToS) byte. Table A-1
shows how each possible marking is mapped into one of the AT&T defined classes. Some CPE can only
support marking the 3-bit precedence field of the Type of Service byte rather than the full 6-bit DSCP
field.

Class    Marking              Behavior
COS1     EF, CS5, IPP5        Priority
COS2V    AF41, CS4, IPP4      Policed Data
COS2     AF31, CS3, IPP3      Bursty Data
COS3     AF21, CS2, IPP2      Bursty Data
COS4     Default 0, IPP0      Best Effort
COS5     AF11, CS1, IPP1      Scavenger

Table A-1. AT&T Class of Service Markings


A.2 Class Behaviors

The class markings tell the network how to differentiate customer traffic flows. This has an impact at
network egress, network ingress, and across the core of the network.
COS1 is reserved for real time applications that require low delay and low delay variance. Applications
mapped to COS1 can tolerate very little queuing delay. The queuing algorithm always checks the COS1
queue when it is time to transmit a packet. If traffic exists in this queue, it is forwarded before all other
traffic. This assures that the delay for COS1 is kept to an absolute minimum. There is a danger associated
with this sort of queuing behavior. If too much COS1 traffic arrives at a site, it would always be served
above all other traffic and result in the starving of the other classes. To avoid this undesirable behavior,
strict policing is implemented for COS1 traffic. When ordering COS for a site, the amount of bandwidth
allocated to the COS1 queue is specified. If the amount of COS1 traffic exceeds this provisioned amount,
the excess is immediately dropped. The amount of bandwidth allocated to the COS1 queue is calculated
by multiplying the profile's percentage by the port speed. For example, if the port is a 10M Ethernet and
the COS1 percentage of the profile is 40% then the bandwidth allocated to COS1 is 4M. Unused COS1
bandwidth is available to the other classes that are configured.12
The default behavior for COS2V is to specify the queue's bandwidth allocation and discard traffic exceeding this allocation. The queuing algorithm differs from COS1: rather than the scheduler always checking the queue when it is time to transmit, the traffic transmitted is determined by the bandwidth allocation. A policing function is implemented for COS2V to restrict bursting above the bandwidth allocation. This policing function differentiates this queue from the other data classes (COS2, COS3, COS4, and COS5), which are allowed to use the available bandwidth when it is not used by the other queues. Note that COS2V can be configured to behave like the data classes; this arrangement is customized. For more information and assistance in deploying it, contact your AT&T account team to engage the support of an AT&T Data Network Analyst (DNA). When using the default behavior for COS2V, the amount of bandwidth allocated to the COS2V queue is calculated similarly to the COS1 allocation. With a 10M Ethernet port and a COS2V profile percentage of 50%, you can expect to transmit up to 5 Mbps of traffic within this class before you reach the policing function. Unused COS2V bandwidth is available to the other classes that are configured.
For the data classes, COS2 and COS3, the queuing algorithm allocates packets based on an allocation
profile, which is ordered and provisioned for the port. The allocation determines how quickly and how
often a particular class queue gets the opportunity to transmit on the egress line. If the service rate for a
particular class is more frequent than the arrival rate of packets in that class, then there will be no
congestion at all for that class. Conversely for a class where the service rate is slower than the arrival rate
of traffic, then a queue will build. The intent is that customers will allocate applications to COS2 and
COS3 such that they are never congested. This means that the amount of application traffic mapped to
these classes is never more than the service rate of the class during congestion. If too much traffic is
mapped to these classes, then the goal of maintaining consistent, low delay for these application types is
not realized.

12

When a port is combined with a CIR as would be the case with the IP Frame Relay or ATM service then the CIR
is used instead of the port speed for the calculation of the COS1 bandwidth allocation. For example if the port speed
was 1.536Mbps and the CIR for this particular connection was 768kbps and the percentage of COS1 in the profile
was 50% then the bandwidth allocated to COS1 would be 384kbps.


The goal is that any traffic that will generate congestion is mapped to COS4, and that this traffic is bulk data oriented or does not have critical response time requirements.13 There may be a number of application types mapped to COS4.
The default behavior for COS5 is that this class receives a very small amount of bandwidth allocation. This queue is intended to transmit only when the other classes have nothing to transmit. Applications that are not business related should be mapped into this class. If there is no other traffic on the line, then traffic in this queue will be serviced.
A.3 COS Profiles

For each IP service connection, customers order a pair of COS profiles, one to control ingress policing
and one to control egress queuing (as referenced to WAN). The profile specifies the amount of bandwidth
reserved for each COS class. When ordering COS, the COS profile can be specified as simple or
complex. The simple COS profiles provide a set of the most common COS profiles via a simple dropdown menu selection. Use of these profiles is suitable for the vast majority of enterprise needs and is
highly recommended.
COS2    COS3    COS4
80%     10%     10%
40%     30%     30%
60%     30%     10%
Simple COS Profiles for the 4COS model*

COS2V   COS2    COS3    COS4    COS5
20%     20%     20%     20%     20%
50%     30%     15%     5%      0%
25%     50%     15%     5%      5%
Simple COS Profiles for the 6COS model*
* The simple profile selection menu provides these allocation percentages for the data classes, coupled with any of
the available COS1 allocation values. In addition to these profiles, there are a small set of profiles that map all
traffic to a single class, resulting in no differentiation of traffic types.

In some cases, COS bandwidth profiles are selected that do not utilize all six of the available class markings.
- If a profile is selected that does not include COS2V or COS5, then traffic with these markings will be treated as part of the COS4 class.
- If a profile is selected that does not have COS1, then traffic with this marking is treated as part of the next lower class.
- If a profile is selected that only recognizes a single class, then all traffic is treated as part of this class, regardless of its marking.

13

If there are applications that require controlled response time (i.e. COS2 or COS3 treatment) that also generate
sufficient traffic to congest the port, then this suggests a capacity issue for the port.


If a complex COS profile is needed, it can be chosen by assigning specific bandwidth values to each class.
Table A-2 outlines all of the possible bandwidth allocation combinations for the classes using the 6COS
Model. There are approximately 23,500 combinations. Table A-3 provides all 25 possible combinations
of profiles when using the 4COS Model.
A.4 COS Packages

For each IP service connection, customers can order a COS Package. The Packages are divided up based on the amount of COS1 bandwidth allocated; see Tables A-2 and A-3 for the Package delineation. For example, 6COS Profiles that have >50% COS1 bandwidth allocation are considered part of the MultiMedia-High Package. For the 6COS model you cannot order a Profile that is outside of the chosen Package. Furthermore, if there is a Unilink connection using Ethernet, Frame Relay, or ATM access, where each logical channel can have its own unique COS Profile, all COS Profiles of the logical channels in the Unilink must be from the same Package.
For the 4COS model this is not the case. Here you can order any Profile within the Package at the same level or any Package below it. For example, if you have the MultiMedia-Standard Package you can order Profiles from MultiMedia-Standard, Critical Data, and Business Data, but not MultiMedia-High. If you order MultiMedia-High, then you can order any Profile in the list. See Table A-3.


6CoS Profile Bandwidth Allocation Table

Packages: MultiMedia High, MultiMedia Standard, Critical Data, Standard Data, Business Data.
Classes: COS1, COS2V, COS2, COS3, COS4, COS5.
COS1 allocations for the MultiMedia packages range from 60%, 70%, 80%, or 90% (MultiMedia High) down to 5%-50% in 5% increments (MultiMedia Standard). The data classes are generally selectable from 5%-30% in 5% increments or from 30%-80% in 10% increments, COS5 defaults to 0% (selectable from 5%-20% in 5% increments), and classes not offered in a given package are shown as Not Applicable.

Table A-2. Available COS Profiles for the 6COS Model


4CoS Model Profile Bandwidth Allocation Table

Profiles are grouped into Packages (MultiMedia High, MultiMedia Standard, Critical Data, and Business Data) according to the amount of COS1 bandwidth allocated, from highest to lowest.

Profile ID   COS1   COS2   COS3   COS4
101          90%    0%     0%     100%
102          80%    80%    10%    10%
103          80%    60%    30%    10%
104          80%    40%    30%    30%
105          60%    80%    10%    10%
106          60%    60%    30%    10%
107          60%    40%    30%    30%
108          50%    0%     0%     100%
109          40%    80%    10%    10%
110          40%    60%    30%    10%
111          40%    40%    30%    30%
112          20%    80%    10%    10%
113          20%    60%    30%    10%
114          20%    40%    30%    30%
115          10%    80%    10%    10%
116          10%    60%    30%    10%
117          10%    40%    30%    30%
118          0%     100%   0%     0%
119          0%     80%    10%    10%
120          0%     60%    30%    10%
121          0%     40%    30%    30%
122          0%     0%     100%   0%
123          0%     0%     90%    10%
124          0%     0%     50%    50%
125          0%     0%     0%     100%

Table A-3. Available COS Profiles for the 4COS Model


Appendix B - Mapping Applications to Classes


This section provides guidance on selecting the appropriate COS profile to implement given an enterprise
specific group of applications traversing the WAN. The first step in this process is the non-trivial task of
developing an inventory of applications traversing the WAN and the respective bandwidth consumption
per application. With this information, one can more accurately classify the applications in a manner that
yields the desired behavior of assuring all applications perform satisfactorily.
The following sections describe application characteristics as they relate to response time requirements, as
well as other attributes to consider in classifying an application for COS purposes. A discussion of data
queuing and scheduling follows to give the reader a deeper understanding of why and when an application
is mapped to a particular class. Last, some general axioms are presented to help guide the reader into the
proper COS profile selection and application mapping.
B.1 Application Attributes

The primary differentiation provided by COS features is differential queuing. The main impact of this
differentiation is in network delay during congestion. An application with strict response time
requirements needs the minimum queuing delay to meet those requirements; therefore the desire is to
create a class (or COS) hierarchy based on response time requirements. Figure B-1 illustrates response time requirements as a continuum from delay-sensitive interactive applications to non-delay-sensitive bulk
data applications. 14 The applications listed in the figure are for illustrative purposes only and could be
classified differently based on the specific enterprise environment. While an application for one enterprise
may need a Sub-Second response time, another enterprise using the exact same software could consider
the application response time requirement as Multi-Second or Background, based on its
implementation or criticality to the business mission. Further, user expectations for the response time of a
particular application could differ from enterprise to enterprise. Each enterprise has a unique mixture of
applications traversing the WAN that exhibit differing characteristics throughout the response time
continuum. Therefore one must consider each enterprise on a case-by-case basis when selecting the
appropriate COS profile and application mapping.

14

In general, response time critical applications tend to consume less bandwidth, and non-critical applications tend
to consume more bandwidth.


Figure B-1 Application Continuum

Real Time applications such as voice and interactive video have aggressive 1-way network delay
requirements ranging from milliseconds to 100s of msec. Adequate performance for these
applications requires consistently low delay with little variation.

Sub-Second applications such as Telnet, Citrix thin client and terminal emulators represent the most
delay sensitive interactive applications, with response time requirements of less than one second. A 1-to-1 relationship between user actions (e.g., keystrokes) and network traffic is common. Response time requirements typically range from ~100 msec to less than 300 msec.

Multi-Second applications such as intranet traffic, ERP and credit card authorizations operate by
transferring data in small blocks, with several network round trip times required for a user request to
be completed. Application response time requirements are from 1 to several seconds. But with several
network round trips required for each transaction, the network response time requirements are similar to those of the sub-second applications.

Background applications do not have strict response time requirements; either because performance
of the application is not critical to the business, or the application itself is not inherently sensitive to
network delay.

Bulk Transfer applications such as file transfers and database synchronization generally transfer the
largest volume of data and tend to consume the bulk of the actual bandwidth. Because these
applications do not have stringent response time requirements, the applications do not need
prioritization or large guaranteed bandwidth allocations.

The following attributes should also be considered when determining how to classify an application:
(1) Bandwidth consumption. The majority of networked data applications are bursty in nature. Bursty
data applications tend to generate short duration clusters of packets requiring priority service.
However, these bursts do not represent a large proportion of the overall network capacity. Generally
response time requirements are inversely proportional to bandwidth consumption.


(2) Criticality vs. Response Time Requirement. It is important to differentiate between business
criticality and response time requirements. There are certain applications within an enterprise's application suite that are critical to the organization's business goals. Some could require Sub-Second response and might be mapped to COS1, COS2V, or COS2, but this treatment may not be necessary for every business critical application. Email is an example of an application that is critical to most enterprises but doesn't necessarily need stringent network response time.
(3) Number of Turns. This attribute deals with the number of times a client and server communicate
before completing a transaction. Applications with many turns per transaction will usually need more
stringent response time requirements from the network. This is due to the nature of the application
where every turn imposes another round trip of delay, and if one or a few turns get caught up in a
queue, then that transaction will perform poorly. A higher number of turns per transaction is also an indication of flows with small packet sizes. Therefore, look to place applications with a large number of turns per transaction in COS2 or COS3.
(4) Packet Delivery Requirements. While the primary function of a COS policy is to control response
time of applications, it can also impact the packet delivery attributes of the environment. Real Time
applications tend to be UDP based and are intolerant of both delay and packet loss. Because of this,
these application types are mapped into the COS-1 class to both minimize latency and assure packet
delivery. TCP based data applications are more tolerant of packet loss as long as it is not excessive.
Under severe congestion, the in-contract/out-of-contract markings can be used for an additional level
of application control in COS2 and COS3.
(5) Packet Size. The average packet size of an application can be a key factor in the determination of
appropriate class mapping. Traffic flows with small packet sizes are more inclined to be interactive
therefore generally will require a more stringent network response. Small packet sizes are a
characteristic of Real Time and Sub-Second applications. Flows with larger average packet sizes tend
to need less stringent network response, like those found toward the bandwidth sensitive end of the
continuum. Care should be taken when placing traffic flows with large average packet sizes into
COS2, as this could yield undesirable results for the other applications mapped into this same queue.
B.2 Application Mapping Guidelines

Building upon the concepts described in the preceding section and the queuing tutorial in Appendix C, we now provide some axioms that will help guide the engineer toward the most beneficial application mapping and queuing profile selection. Please keep in mind that these guidelines are just a starting point; there are many arguments to be made for alternate application classifications and mappings.
(1) First, establish the Real Time requirements. Real Time bandwidth is usually engineered for a specific bandwidth consumption, based on the number of simultaneous voice calls and/or the number of video conference sessions to be supported. This absolute number can then be used to calculate the percentage of the port bandwidth required for Real Time traffic (a worked sizing example follows this list). Real Time bandwidth should be sized for the peak expected load.
(2) Note that you do not need to use all available classes just because they are there. If the real time
traffic is the only performance concern, then all remaining traffic can be mapped to a single class;
usually COS4. Similarly if two data classes are needed, a profile with allocation for all three data
classes can be used since the unused allocation is divided proportionally across the data classes
needing the bandwidth. For these cases, the service ratio between data classes is more important than
the actual allocations.
(3) For lower port speeds (< ~T1), profiles should be chosen that provide fairly high allocations for
COS2, COS3. It is expected that the traffic mapped to COS2 and COS3 will not fully utilize the
allocated bandwidth in these situations. This excess allocation provides the desired prioritization of
interactive applications resulting in minimum queuing delay on lower speed ports. The unconsumed
bandwidth remains available for COS4.
(4) For an environment with high-speed connections (~T1 and up), the queuing/insertion delays are not as
significant. COS Profile selection for these ports should be based on a best match for the traffic
volumes to be mapped into each class. It may also be desirable to map some Background type
applications to the COS3 class to assure a particular bandwidth level (30%) rather than COS4
treatment.
(5) It may be desirable to group the more aggressive/more important Multi-Second applications
together with the Sub-Second applications in COS2, leaving the less important Multi-Second
applications alone in COS3.
(6) One might wish to group some of the less important Multi-Second applications together with
Background applications in COS4.
(7) Remember to account for administrative traffic in the network (BGP updates, Network management
traffic, BFD heartbeats 15, etc). Generally these traffic types are low volume. By default, Cisco marks
these traffic types with IP Precedence 6 or 7. This marking should be maintained. AT&T network-based COS features will recognize these traffic types and provide separate queuing treatment to
maintain this administrative traffic. Customer network management systems should be allocated and
engineered just like any other data application. The best practice is to map the administrative traffic
into a class where there is a low probability of congestion (in most cases this will be the highest
priority data class).
(8) Video applications that have constant bandwidth with control of delay variance requirements can be
mapped into COS2V. Remember that the default behavior of COS2V is to police traffic similarly to COS1, so the bandwidth consumption of the video application must be considered when defining the bandwidth allocation for this class.
(9) The COS5 queue, also known as the Scavenger class, is intended for traffic that is less than best effort, for example traffic that is not needed to run the business but is not prohibited. When the link is fully utilized this queue will not be serviced (also known as starving the queue). Two approaches to mapping traffic into COS5 are to (1) fix a problem (reactive) or (2) prevent a problem (proactive). To fix a problem, one might use the COS4 queue for all best effort traffic and then map any non-business applications consuming bandwidth into COS5. To prevent a problem, one might map all traffic into the COS5 class until it is explicitly mapped into another queue.
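As a worked sizing example for guideline (1), using assumed values for illustration: G.711 voice at roughly 80 kbps per call (including IP/UDP/RTP overhead) with a peak of 25 simultaneous calls requires approximately 25 * 80 kbps = 2 Mbps. On a 10M Ethernet port this is 20% of the port speed, so a COS1 allocation of at least 20% would be selected; Real Time bandwidth should always be sized for the peak expected load.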

15

Bidirectional Forwarding Detection (BFD) is a network protocol used to detect faults between two devices; it is most commonly used for Ethernet access.


Appendix C - Data Queue Scheduling Mechanisms - Tutorial


The primary function of COS features is to provide differential queuing (packet scheduling) across
application classes. There are several different queuing disciplines deployed within AT&T services to
support COS features: strict priority, calendar queuing, and weighted round-robin.
To support Real Time applications, COS1 is implemented using strict priority (or low latency) queuing.
This mechanism assures that any arriving COS1 packet will be the next packet forwarded. The remaining
classes are scheduled using either calendar queuing (Class-Based Weighted Fair Queuing CBWFQ) or
weighted round-robin queuing (Modified Deficit Round Robin-MDRR). The particular queuing
mechanism in use varies across the various available service ports and speeds.
This section provides a brief discussion of several packet-scheduling behaviors. The insight provided will
help in understanding appropriate allocation profiles and application/class mapping. In particular, an
understanding of the behavior of weighted fair queuing and calendar queuing is very important to
getting the best results from the available COS capabilities.
C.1 First-In-First-Out (FIFO)

The FIFO queuing discipline is where packets are transmitted in the same order they arrive. This method
does not differentiate among traffic flows in the network and hence cannot prioritize one flow over the
others. A network implemented using FIFO queuing would generally be considered a non-COS
implementation. However, in the cases where either the link speed is very high or the differential rate of
packets entering the queue versus packets leaving is small, no queues will build in the network and the
FIFO scheme will work just fine. It is only when network congestion occurs and queues build that the
FIFO queuing mechanism falls short of the desired behavior.
C.2 Priority Queuing

Priority queuing is at the opposite end of the spectrum from FIFO. In priority queuing, one class of traffic
gets absolute priority over other classes. Any time a packet arrives that is part of the priority class, it is
immediately forwarded, regardless of how many lower priority packets are already queued, or how long
the lower priority packets have been waiting.
Clearly, traffic receiving priority queuing gets the best available COS treatment. This is appropriate for
application types that need minimum network delay for proper operation. VoIP and, to a lesser extent,
video conferencing are the two most common applications for this type of queuing. Priority queuing is the
mechanism used to implement COS1 in AT&T services.
C.3 Flow-Based Weighted Fair Queuing (WFQ)

With flow-based WFQ, packets are differentiated by flow, and mapped into one of 64 queues. 16 Packets
with the same source IP address, destination IP address, source port, destination port, IP Precedence, and
protocol belong to the same flow. A hashing algorithm is used to randomly map each flow to one of the
64 queues. All packets for a given flow will be mapped to the same queue. This is the queuing discipline
enabled by default for physical interfaces whose bandwidth is less than or equal to 2.048 Mbps in Cisco
routers. Packets are scheduled for de-queuing based on the following formula:

16

Cisco WFQ uses 64 queues by default. This can be reconfigured.


Schedule Time = Queue Tail + (Weight * Length)


Weights are based strictly on IP Precedence and WFQ becomes FQ for all practical purposes when all
traffic arriving at the scheduler carries the same IP Precedence value17. Figure C-1 illustrates how traffic with the same IP Precedence value traverses through an interface with WFQ enabled.

Figure C-1. WFQ Schedule time with same IP Precedence.


As packets arrive at the scheduler the hashing algorithm is applied and the flows are randomly mapped to
one of the 64 queues depicted in this example as A through F. Packets are de-queued based on their
arrival time and packet size. The right side of the packet is the packet's arrival time to the queue. The left
side of the packet is the scheduled time for transmitting. WFQ behavior helps assure two desirable
behaviors for multi-application environments. First, flows with low bandwidth and small packet sizes
tend to experience much lower queuing delay. These attributes (low bandwidth and small packets) are
exactly the attributes that the most response-time critical applications tend to share. The sequence to the
right shows how the packets would be de-queued at the output of the scheduler. The downside of the
flow-based WFQ approach is that the hashing is non-deterministic. With random assignment into the
queues, it is possible to get response time critical flows and bulk data flows hashed to the same queue,
resulting in poor performance for the interactive application.
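As a simple worked illustration of the schedule-time formula above (hypothetical packet sizes): suppose a 64-byte interactive packet and a 1500-byte bulk-data packet arrive at the same time into two empty queues carrying the same IP Precedence, and therefore the same weight w. The interactive packet is assigned a schedule time of 0 + (w * 64) = 64w and the bulk packet 0 + (w * 1500) = 1500w, so the small packet is de-queued first even though neither flow is explicitly prioritized.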
C.4 Class-Based Weighted Fair Queuing (CBWFQ)

Cisco's Class-Based Weighted Fair Queuing (CBWFQ) uses a calendar queuing model for packet
scheduling. CBWFQ uses the same formula as WFQ to schedule packets for de-queue. However, instead
of classifying packets by flow, packets are classified based on a service policy. A service policy is used to
explicitly map traffic types to a defined queue rather than depending on the randomizing function of the
hashing algorithm. Also with CBWFQ the weighting function is based on the relative bandwidth
allocation configured for each queue. These factors represent a major departure from the way packets are
scheduled in WFQ, and as a result, create a very different behavior to the traffic flow.
To illustrate this difference, Figure C-2 depicts an example where packets are classified into three queues, A, B, and C, with respective weights of 4, 2, and 10. The packets are de-queued based on the

17

WFQ considers IP Precedence when scheduling packets for de-queue. This document assumes that IP Precedence
is the same for all packets.


packet size and the queue weight. Whereas WFQ would schedule the packets in the C queue first based
on their small packet size, CBWFQ schedules these to be serviced later because the weight is higher for
that queue.
Schedule Time = Queue Tail + (1/BW% * Length), where Weight ~ 1/(BW%)

Figure C-2. CBWFQ Schedule time illustration.


In practice, the more bandwidth assigned to a queue the more often the traffic flow within that queue gets
serviced relative to the other queues. The weight of a queue is approximately the inverse of the bandwidth
% allocated to that queue. Therefore more bandwidth allocated to a particular queue implies that the
schedule time of a packet in the higher bandwidth queues will be shorter than a queue with less bandwidth
allocated.

Applications: Sub-Second and Bulk Transfer. Bandwidth allocation: Class A BW 10% (Weight ~ 10), Class B BW 90% (Weight ~ 1.1).

Figure C-3. CBWFQ Realistic bandwidth allocation used with high speed links
For example, suppose we had two queues configured to service traffic through an interface. One queue is
configured with 90% bandwidth and the other with the remaining 10%. The weight for the high
bandwidth queue would be approximately 1.1 and the low bandwidth queue would be 10. As these
weights are applied to the respective traffic flows within the queues, the schedule time is calculated
giving the traffic flow in the high bandwidth queue a shorter scheduled time than the lower bandwidth
queue.
Say we have a mixture of two applications with different flow structure and network requirements. One
application has small packet sizes with Sub-Second response time requirements, and the other, a bulk
transfer application, has large packet sizes and is more forgiving with respect to response time. If we put
the Sub-Second traffic into the low bandwidth queue and the bulk transfer traffic in the high bandwidth
queue the schedule time would resemble that shown in Figure 4.2.3.2. Over time, each queue is assured
that it will receive its allocated share of bandwidth. Further, if some queues are not fully subscribed, then
the remaining queues will continue to share the available bandwidth proportionally to their allocation.
In this scenario some of the packets within the Sub-Second application would be scheduled behind the
larger packets of the bulk transfer application. On high-speed links, the Sub-Second application can
tolerate the insertion delay when the larger packet sizes are being de-queued. Thus with higher link
speeds the allocation of bandwidth to the queues can be similar to the actual application traffic expected
for each queue. But lower link speeds require a different strategy to achieve the desired de-queuing result
On low speed links, it is important for time sensitive interactive packets to get minimal queuing delay. On
low link speeds, performance of interactive applications degrades noticeably when forced to wait for even
a small number of bulk data packets. Consider the example below:
For this case the strategy would be to put the Sub-Second applications into the high bandwidth queue to
be scheduled ahead of the packets associated with bulk transfer applications. This scenario is illustrated in
Figure C-4, using the same bandwidth allocation as before, except now the Sub-Second application is
mapped to the high bandwidth queue. The small packets are scheduled ahead of any bulk transfer packets
and do not have to wait behind any of the larger bulk transfer packets unless those packets have already
started to de-queue. This desired effect is maintained as long as the traffic flow in the high bandwidth
queue exhibit bursts short enough in duration to not cause the queue to be congested. A higher bandwidth
is allocated to the queue with no intent to use it, but rather to apply the weight that will give this flow a
lower scheduled time. The profile selections for AT&T Network-based Class of Service are designed to
emphasize this relative behavior of the class queues.

Applications: Bulk Transfer and Sub-Second. Bandwidth allocation: Class A BW 10% (Weight ~ 10), Class B BW 90% (Weight ~ 1.1).

Figure C-4. CBWFQ Small bursts into high bandwidth queue for low speed links.


C.5 Modified Deficit Round-Robin (MDRR)

Modified Deficit Round-Robin is specifically designed for provider edge routers; it will not be configured
on the customer edge router. We include it within this tutorial to provide information about the queuing
characteristics from PE to CE. Deficit Round Robin is a scheduling algorithm where each non-empty
queue is served in a round-robin fashion and a counter, referred to as the deficit counter, is used to
determine the number of packets serviced from each queue during its turn to be serviced. This Deficit
Round-Robin is then modified to add a low-latency queue where all queues are serviced in a round-robin
fashion with the exception of the low-latency queue. The low-latency queue is serviced whenever the
queue is nonempty. This allows the lowest possible delay for this traffic.
Queue A Quantum Value = 1500; Queue B Quantum Value = 3000 (all packets 1000 bytes).

Figure C-5. MDRR - two queue example.


To illustrate how Modified Deficit Round Robin (MDRR) works, we use an example with two queues, with quantum values of 1500 and 3000 respectively, and with all packets 1000 bytes in length. This most likely won't happen in real life, but it is useful for this example. MDRR removes packets from a queue until the quantum value for that queue has been exhausted. The quantum value quantifies a number of bytes. When a queue first fills, the queue's deficit counter is set to the quantum value for that queue, which for our example is 1500 for Queue A and 3000 for Queue B. MDRR begins by taking one packet from Queue A, decrementing the deficit counter to 500 (1500 - 1000); since the deficit counter has not been decremented to 0 (or less), another packet is serviced from Queue A. At this time the deficit counter is decremented to -500, and MDRR then moves to Queue B. Here it will take 3 packets at 1000 bytes apiece to decrement the deficit counter to 0. That concludes the first round-robin pass, during which MDRR has taken 2000 bytes from Queue A and 3000 bytes from Queue B. The second round-robin pass begins by adding the quantum value to the deficit counter at each queue. Queue A's deficit counter becomes [1500 + (-500)] = 1000, and Queue B's deficit counter becomes 3000 to begin the second pass. During the second pass MDRR will take 1000 bytes from Queue A and 3000 bytes from
Queue B. With the deficit feature of MDRR, over time each queue receives a guaranteed bandwidth based
on the following formula:
Guaranteed BW for Queue X = Quantum Value for Queue X / Sum of all Quantum Values
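For the two queues in this example, the guaranteed shares work out as follows:
Guaranteed BW for Queue A = 1500 / (1500 + 3000) = 1/3 of the link
Guaranteed BW for Queue B = 3000 / (1500 + 3000) = 2/3 of the link
This matches the example above: over the two passes, MDRR services 3000 bytes from Queue A and 6000 bytes from Queue B, a 1:2 ratio.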


Appendix D - Case Studies: Fragmentation/Interleaving/Header Compression

For low speed links (less than or equal to 768K), some VPN connections can be ordered with link fragmentation/interleaving (LFI), with or without header compression. This functionality is based on MLPPP capabilities used together with frame-relay interfaces. Below is an example of how to configure the CE for MLPPP 18 fragmentation and cRTP.
Fragmentation and Interleaving allows large data packets in COS2, COS3, and COS4 to be broken up into
smaller fragments for transmission to the PE. This allows delay sensitive VoIP packets marked COS1 to
be interleaved with the fragments of the larger data packets, resulting in lower, more consistent delay for
COS1. This is often necessary for low speed connections because the transmission delay of a single large
data packet can cause unacceptable performance.
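As a worked example of that transmission delay, a single 1500-byte data packet on a 256K port occupies the link for:
(1500 Bytes * 8 bits/Byte) / (256,000 bits/sec) = 0.0469 sec = 46.9 ms
Any COS1 packet queued behind it must wait that long, which is why fragmentation and interleaving are needed at these speeds.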
MLPPP LFI requires the use of traffic-shaped Frame Relay sub-interfaces, so the first step is to define a map-class with the traffic shaping parameters.

map-class frame-relay shape256


frame-relay cir 256000
frame-relay bc 2560
frame-relay be 0
no frame-relay adaptive-shaping
!

The cir parameter is based on the frame relay port speed. The bc value should be set to 1/100th of the CIR (i.e., 10 ms of traffic).
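For the shape256 example above:
bc = 256,000 / 100 = 2,560 bits, and 2,560 bits / 256,000 bits/sec = 0.010 sec = 10 ms of traffic.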
On low speed ports, MLPPP is required to support fragmentation and header compression (cRTP) over Frame Relay. MLPPP is turned on via a virtual template that is applied to the sub-interface. The bandwidth statement in the virtual template is critical for the proper fragment setting: the router uses it, together with the fragment-delay setting, to calculate the actual fragment size. The bandwidth should be set to the shaped rate of the interface, which is typically port speed. 19 The fragment-delay is set to approximately 10 ms. Fragmenting and interleaving are turned on. The ip address statement configures the IP address of the interface, which should be the CE side of the /30 subnet assigned for the CE/PE link. The service policy is applied to the virtual template.

18

This use of MLPPP should not be confused with using MLPPP for inverse multiplexing several links together as a
single higher speed pipe. While related, this application of MLPPP is solely to accomplish packet fragmentation and
interleaving on low speed links.
19

An exception to this would be if there were multiple PVCs on the same port: to ensure that COS works, the shaping rate of the ePVC must be set to something less than port speed. AT&T technical specialists can make recommendations in these cases.


interface Virtual-Template2
!Create a Virtual Template interface
bandwidth 260
ip address 10.62.254.165 255.255.255.252
service-policy output COSTEST
!Apply the COS policy to the Virtual Template interface
max-reserved-bandwidth 100
ppp multilink
ppp multilink fragment-delay 10
ppp multilink interleave

On the serial interface and sub-interface, traffic shaping is turned on and the map-class is applied as shown below. Note also that the virtual template is applied to the sub-interface. In this case, the max-reserved-bandwidth statement, set to 100, ensures that the service policy can be properly applied to the interface. Also note that the map-class defined above is invoked on the sub-interface to provide the appropriate traffic shaping.

interface Serial0/0
no ip address
encapsulation frame-relay IETF
frame-relay traffic-shaping
!
interface Serial0/0.890 point-to-point
description ePVC to MLPPP VPN
no ip address
no cdp enable
frame-relay class shape256
frame-relay interface-dlci 890 ppp Virtual-Template2

The actual size of fragmented packets is a function of the bandwidth statement and the fragment delay
within the virtual template. Some access trunks in the network use ATM cell transport to reach the PE.
When using small packets, such as in a fragmentation and interleaving configuration, it is important to
make efficient utilization of the underlying ATM cells. To facilitate this, the following settings should be
used for the MLPPP bandwidth and fragment delay.


Port Speed   Bandwidth Statement   Fragment Delay
56K          57                    12
64K          68                    10
128K         132                   11
192K         202                   11
256K         260                   10
320K         337                   10
384K         414                   10
448K         452                   10
512K         529                   10
576K         606                   10
640K         644                   10
704K         721                   10
768K         798                   10
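As a rough illustration of how these two values combine (an assumption about the calculation; the router derives the exact fragment size), the 256K row works out to:
(260,000 bits/sec * 0.010 sec) / (8 bits/Byte) = 325 Bytes per fragment
The bandwidth values in the table are chosen so that the resulting fragment sizes make efficient use of the underlying ATM cell payloads.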
When an MLPPP connection is provisioned, an optional header compression capability is also available. This reduces the bandwidth required for VoIP calls by compressing the IP, UDP, and RTP protocol overhead in the VoIP packets. If ordered, the compression should be configured in the service policy for COS1 traffic.
!DEFINE POLICY
policy-map COS
class COS1
priority 60
compression header ip rtp
set ip dscp ef
class COS2
bandwidth remaining percent 60
set ip dscp af31
class COS3
bandwidth remaining percent 30
set ip dscp af21
class class-default
fair-queue
set ip dscp default
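If the compression is configured, its operation can be checked from the CE (a suggested verification, not an AT&T requirement) with standard IOS commands:

show policy-map interface
show ppp multilink

The first displays per-class counters for the applied policy, and the second confirms the multilink bundle, fragmentation, and interleaving state.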


Appendix E - COS for Video


E.1

Introduction

Videoconferencing and telepresence (hereafter identified generically as "video" in this appendix) across IP networks is growing rapidly. This growth is driven by several factors: cheaper equipment and broader software (and mobile) solutions, higher quality, proliferation of bandwidth, UC convergence, consumerization of video (Skype, MSN Messenger, iChat/FaceTime, etc.), and richer cross-vendor (interoperability) experiences.
The significant added dimension video provides over voice alone comes at a cost. Not only does video require network characteristics similar to voice (low latency/loss/jitter), but it also requires much more bandwidth. Because video traffic arrives in short, instantaneous bursts, the network must always be designed for the peak bandwidth, even though the average rate over the duration of a call is much lower.
E.2

Video Application Behavior

Fully interactive video applications depend on the interaction between two appliances that independently
receive audio and video inputs, encode them, and send them to another participating appliance. The
human brain rationalizes what was sent from one end with what was received from the other end into an
overall video experience. (Video bridges and multi-codec systems make the interactions more complex,
but the fundamentals of video remain the same.)

Figure E1. How Video Behaves as an Application


In order to provide optimum video experiences, you must minimize any degradation of the elements that
govern the experience for the end users.


E.3

Video Network Behavior

Video is one of the most complex enterprise applications. Each video call has numerous independent TCP/UDP sessions in flight, all controlled by a delicate signaling mechanism. At a minimum, there will be the following elements:

• Signaling: this could be either H.323 or SIP. Signaling is important to establish the call, manage the call, monitor the call, and release the call. Signaling defines the parameters of the call and may change the characteristics or behavior of the call throughout its duration. H.323 and SIP are considered standardized protocols, governed by the ITU and IETF, respectively. However, the reality is that interoperability challenges exist within both H.323 and SIP (much less between them) because both standards have a rich heritage of development that is not always backward compatible, the standards are ambiguous in areas and subject to interpretation, and vendors deploy variations in advance of standardization for competitive reasons.
• Media: there will be numerous media flows for each call. Each endpoint in the call will initiate one for audio and one for video. The codecs used for each flow are negotiated during call setup. Figure E2 below shows this network relationship; notice the similarity to how the application behaves (Figure E1). Newer video endpoints support H.264 video at 1080p or 720p resolutions. They also support a range of standard audio codecs: G.711, G.728, and many variations of G.722, as well as a full legacy of older audio and video codecs.

Figure E2. How Video Looks on a Network


It is not uncommon to also see other network flows related to a single video session, depending on the protocols used and vendor features.
Bandwidth is symmetric and negotiated by signaling during call setup. The endpoints agree to use a specific call rate, which is controlled by the encoding function of the video endpoint. This call rate includes audio, video, and content sharing. However, the call rate does not include the protocol overhead necessary to deliver the media across a network; approximately 20% bandwidth is added to the call rate to account for this overhead, as the examples below show.


• 384 Kbps call rate requires at least 460 Kbps on the network
• 1024 Kbps call rate requires at least 1229 Kbps on the network
• 2048 Kbps call rate requires at least 2458 Kbps on the network
• 4096 Kbps call rate requires at least 4916 Kbps on the network
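These figures follow directly from adding the 20% protocol overhead to the call rate, for example:
384 Kbps * 1.2 = 460.8 Kbps, i.e. approximately 460 Kbps on the network.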

In order to preserve the interactive experience, latency, jitter, and loss must be as tightly controlled as possible. Latency is fundamentally determined by the actual route path the media traverses between endpoints. The shortest path is always the best, although not every network allows for this in all cases. Below 150 ms of one-way latency between endpoints, any effects on interactivity are below the level of human perception. As end-to-end latency increases, more video participants will notice the effect on interactivity among participants. As one-way latency exceeds 250 ms (1/2 second round-trip), the interactive experience degrades noticeably. However, these degraded video experiences should be evaluated in light of their business benefits and broader strategic value.
E.4

Video COS Methodologies

COS is the de facto mechanism for addressing jitter and loss. Video frames are paced out to the network at a controlled rate. Video frames are often large enough to span multiple packets, and there can be large variance among packet sizes. Because of this variability, packets serialize differently as they are passed into the network, so they are not received as consistently as they were created by the endpoint. This phenomenon is compounded by the many hops video packets make in the end-to-end path. This variation in arrival between successive video packets as they traverse a network is called jitter. Jitter is further compounded if other packets, both large and small, get served by an interface ahead of video packets. COS is a way to mitigate jitter by getting video packets into the network and ensuring they move expeditiously through each individual hop.
COS also minimizes packet loss. In order for video to be viable across WANs, the endpoint applies a compression algorithm to reduce the bandwidth necessary to recreate the image at the other end. As a result, each individual packet contains a significant amount of data needed to recreate the video experience at the other end. The loss of a single video packet translates to a disruptive video experience because the receiving side doesn't have the data it needs to fully recreate the image. (Vendors tout relatively large packet loss tolerances, but that reflects marketing and/or controlled test environments more than reality.) As HD video becomes more commonplace, even more data is represented in each video packet, exacerbating the effect of packet loss. Furthermore, packet loss usually occurs in bursts rather than as intermittent drops of single packets over time, and losing several successive video packets further degrades video quality. COS preserves the video experience by ensuring that video packets get the best treatment through a network interface. If sacrifices have to be made, COS defers other packets (of applications that can sustain the delay or loss) in order to ensure optimal delivery of video.
There are several possible ways to fit video within a COS structure. A conservative approach is best, and use of COS1 makes sense in light of the real-time requirements of video. However, given the right environment, video can be supported outside of COS1 with the considerations identified below.
E.4.1 Video in COS1 without VoIP
Customers using IP video without VoIP have free rein of COS1 to fully support video. Apart from ensuring that the COS1 policer is not too aggressive for the total video requirement, the primary remaining concern is to make sure there is enough servicing for the remaining applications. This will depend on the number of additional classes and the nature of the applications leveraging those classes. Because video inherently has a mix of both large and small packets, the risk of router PPS exhaustion is diminished. This allows for a generous allocation of COS1 for video, up to the effective (shaped) port speed, as sketched below.
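The following minimal sketch shows a COS1 priority allocation sized to a large video requirement; the rate shown is an assumed example (a 4096K call rate plus roughly 20% protocol overhead), not an AT&T-provided value:

!Sketch only: size the priority rate to the actual video requirement
!plus ~20% protocol overhead; 4916 kbps is an assumed example value
policy-map COS
 class COS1
  priority 4916
  set ip dscp ef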


E.4.2 Video in COS1 with VoIP


Mixing VoIP and video together in the same real-time class is possible, but there are several important elements to this design. One is to ensure that the combination of VoIP and video still leaves adequate servicing for other, non-real-time applications. Additionally, the worst-case combination of VoIP and video together must never exceed the total COS1 policing boundary. In the outbound direction (toward the WAN), you can police VoIP and video traffic independently so that neither application exceeds its defined allocation of COS1 and introduces packet loss. Call Admission Control (discussed below) is a better way to control this phenomenon.
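As an illustrative sketch only (the policy and class names, the video match ACL, and the rates are assumptions that must be sized to the actual VoIP and video requirements at the site), two separate priority classes can be used so that each application is policed independently while both are still marked and carried as COS1:

!Sketch only: names, match criteria, and rates are assumed examples;
!the remaining COS2/COS3/default classes would follow as in the
!standard policy
class-map match-any VOICE-RT
 match ip dscp ef
class-map match-any VIDEO-RT
 match access-group name VIDEO-ENDPOINTS
policy-map COS-RT-SPLIT
 class VOICE-RT
  priority 512
  set ip dscp ef
 class VIDEO-RT
  priority 4916
  set ip dscp ef

Each priority statement polices its own class during congestion, so neither application can consume the other's share of the COS1 allocation.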
Keep in mind that this does not prioritize either application over the other; the network interface still services packets FIFO out of the real-time class. Consequently, the network interface needs to clock fast enough to prevent large video frames from introducing jitter into the voice stream. As a point of reference, a 10 Mbps network interface will introduce more than 50 ms of delay between successive VoIP packets when a large (64 KB) video frame has to be served in COS1 with VoIP:
(65,536 Bytes * 8 bits/Byte) / (10,000,000 bits/sec) = 0.0524 sec = 52.4 ms
The de-jitter buffer of a VoIP endpoint can absorb 50 ms of jitter, but that is a third of the 150 ms one-way target for an optimal VoIP experience. Admittedly, large video frames do not occur frequently (typically when video sessions start or when screens change, as in a multipoint call). However, it is prudent to be conservative when mixing VoIP and video together in COS1.
E.4.3 Video in COS2V
The use of separate classes is a practical way to provide specific class boundaries for VoIP and video end-to-end. By mapping video into COS2V, VoIP clearly leverages COS1 while video leverages COS2V. Prioritizing VoIP over video has only a minor effect on video traffic because VoIP generates relatively small and consistently paced packets that interleave very well with video packets. For instances where both VoIP and video must be supported, the VoIP=COS1/video=COS2V combination is exceedingly common.
E.4.4 Video in COS2
COS2 is an appropriate class for video at locations that use a 4COS model. However, if COS2 carries video, the class should not be shared with other applications: sharing has the potential to introduce spurious variations in the delivery of the video, particularly if the other applications are bursty in nature. Remember that jitter is one of the factors that should be mitigated for optimal video experiences. In such cases, it makes more sense for video to use COS2 and the other applications to use COS3.
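A minimal sketch of this arrangement, reusing the class structure and example percentages from the policy shown in Appendix D (the percentages are illustrative, not recommendations), might look like:

!Sketch only: bandwidth percentages are example values
policy-map COS
 class COS1
  priority 60
  set ip dscp ef
 class COS2
  !Video only in this class
  bandwidth remaining percent 60
  set ip dscp af31
 class COS3
  !All other data applications
  bandwidth remaining percent 30
  set ip dscp af21
 class class-default
  fair-queue
  set ip dscp default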
E.4.5 Video in Other Classes
In general, interactive video should not be supported in a non-default/scavenger class below COS2/2V unless the network is fast enough to make priority treatment irrelevant. As a rule of thumb, at 10 Mbps and above COS becomes largely a bandwidth allocation policy, and the effect of serialization and delay on video (and other applications, for that matter) becomes negligible. However, there still needs to be enough bandwidth allocated to the video class. Video should never go in the default or scavenger classes, which typically have the greatest delay and drop probability.
E.5

Call Admission Control

By their very nature, VoIP and video are extremely sensitive applications. It would be better to prevent a
call from starting than to allow it to start under conditions that would affect it and other calls in progress.
Call Admission Control (CAC) is a function of the call control and signaling element that acts as an
"application cop" to ensure that VoIP and video do not exceed the network boundaries that have been designed to support them. CAC can do one or more of the following:
• Deny a call until network resources are available
• Route a call to voice mail
• Route a call to an alternate network path
  - Commonly the PSTN for VoIP or ISDN for video
  - Could also be a separate IP path
• Reduce the bandwidth of a call (G.711 VoIP becomes G.729, 768K video becomes 384K video, etc.)

CAC functionality is typically an element of the IP PBX for VoIP or H.323 gatekeeper/SIP server for
video. However, some element of CAC can exist in session border controllers and application
management/monitoring platforms.



