Overview
The module presents a thorough overview of quality of service models and
mechanisms as implemented in complex service provider and enterprise networks.
It includes the following topics:
• Introduction to IP Quality of Service
• Integrated Services Model
• Differentiated Services Model
• Building Blocks of IP QoS Mechanisms
• Enterprise Network Case Study
• Service Provider Case Study
Objectives
Upon completion of this module, you will be able to perform the following tasks:
• Describe the need for IP QoS
• Describe the Integrated Services model
• Describe the Differentiated Services model
• Describe the building blocks of IP QoS mechanisms (classification, marking, metering, policing, shaping, dropping, forwarding, queuing)
• List the IP QoS mechanisms available in the Cisco IOS
• Describe what QoS features are supported by different IP QoS mechanisms
Introduction to IP Quality of Service
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe different types of applications and services that have special resource requirements
• List the network components that affect the throughput, delay and jitter in IP networks
• List the benefits of deploying QoS mechanisms in IP networks
• List QoS mechanisms available in Cisco IOS
• Describe typical enterprise and service provider networks and their QoS-related requirements
• Application X is slow!
• Video broadcast occasionally stalls!
If the network were empty, any application would get enough bandwidth, an acceptably low and fixed delay, and would not experience any drops. In reality, however, multiple users and applications use the network at the same time.
The example above illustrates an empty network with four hops between a server
and a client. Each hop uses a different medium with a different bandwidth. The
maximum available bandwidth is equal to the bandwidth of the slowest link.
The calculation of the available bandwidth, however, is much more complex in
cases where there are multiple flows traversing the network. The calculation of the
available bandwidth in the illustration is a rough approximation.
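The two statements above can be sketched in a few lines of Python. This is an illustration only: the link speeds are invented, and the equal-share division is exactly the kind of rough approximation the text describes, not how real flows behave.

```python
def path_capacity_kbps(link_kbps):
    """End-to-end capacity of an empty path: the slowest link wins."""
    return min(link_kbps)

def rough_share_kbps(link_kbps, flows):
    """Rough approximation: `flows` flows sharing the bottleneck equally.
    (Real behavior depends on the flows, protocols and queuing in use.)"""
    return path_capacity_kbps(link_kbps) / flows

# Four hops with different media, e.g. 10 Mbps, 512 kbps, 2 Mbps, 100 Mbps
links = [10_000, 512, 2_000, 100_000]
print(path_capacity_kbps(links))   # 512
print(rough_share_kbps(links, 4))  # 128.0
```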
Delay = P1 + Q1 + P2 + Q2 + P3 + Q3 + P4 = X ms
• End-to-end delay equals a sum of all propagation, processing
and queuing delays in the path
• Propagation delay is fixed, processing and queuing delays are
unpredictable in best-effort networks
The figure illustrates the impact a network has on the end-to-end delay of packets
going from one end to the other. Each hop in the network adds to the overall delay
because of the following two factors:
1. Propagation (serialization) delay of the media, which, for the most part,
depends solely on the bandwidth.
2. Processing and queuing delays within a router, which can be caused by a wide
variety of conditions.
Ping (ICMP echoes and replies) can be used to measure the round-trip time of IP
packets in a network. There are other tools available to periodically measure
responsiveness of a network.
• Processing Delay is the time it takes for a router to take the packet from an
input interface and put it into the output queue of the output interface.
• Queuing Delay is the time a packet resides in the output queue of a router.
• Propagation or Serialization Delay is the time it takes to transmit a packet.
• Processing Delay is the time it takes for a router to take the packet from an
input interface and put it into the output queue of the output interface. The
processing delay depends on various factors, such as:
– CPU speed
– CPU utilization
– IP switching mode
– Router architecture
– Configured features on both the input and output interface
• Queuing Delay is the time a packet resides in the output queue of a router. It
depends on the number and sizes of packets already in the queue and on the
bandwidth of the interface. It also depends on the queuing mechanism.
• Propagation or Serialization Delay is the time it takes to transmit a packet. It
usually depends only on the bandwidth of the interface. CSMA/CD media may
add slightly more delay due to the increased probability of collisions when an
interface is nearing congestion.
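The delay formula shown earlier (Delay = P1 + Q1 + P2 + Q2 + ... = X ms) can be sketched in Python. The hop values below are invented for illustration; only the serialization term is predictable from the link bandwidth, while processing and queuing delays vary in a best-effort network.

```python
def serialization_delay_ms(packet_bytes, link_kbps):
    """Time to clock a packet onto the wire: packet bits / link rate.
    1 kbps equals 1 bit per millisecond, so the result is in ms."""
    return packet_bytes * 8 / link_kbps

def end_to_end_delay_ms(hops, packet_bytes):
    """Sum of per-hop delays: serialization plus (unpredictable)
    processing and queuing delay. Each hop is (link_kbps, proc_queue_ms)."""
    total = 0.0
    for link_kbps, proc_queue_ms in hops:
        total += serialization_delay_ms(packet_bytes, link_kbps) + proc_queue_ms
    return total

# A 1500-byte packet over a 512 kbps hop and a 2 Mbps hop:
hops = [(512, 5.0), (2000, 2.0)]
print(serialization_delay_ms(1500, 512))   # 23.4375 ms
print(end_to_end_delay_ms(hops, 1500))     # 36.4375 ms
```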
Tail-drop
• Tail-drops occur when the output queue is full. These are the most
common drops which happen when a link is congested.
• There are also many other types of drops that are not as common and
may require a hardware upgrade (input drop, ignore, overrun, no
buffer, ...). These drops are usually a result of router congestion.
Packet loss usually occurs when routers run out of buffer space for a
particular interface (output queue). The figure illustrates a full output queue of an
interface, which causes newly arriving packets to be dropped. The term used for
such drops is simply “output drop” or “tail-drop” (packets are dropped at the tail of
the queue).
Routers might also drop packets for other (less common) reasons, for example:
• Input queue drop - the main CPU is congested and cannot process packets (the
input queue is full)
• Ignore - the router ran out of buffer space
• Overrun - the CPU is congested and cannot assign a free buffer to the new packet
• Frame errors (CRC, runt, giant) - hardware-detected errors in a frame
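Tail-drop behavior can be sketched as a bounded FIFO queue. This is an illustration only; real interface queues are maintained by the IOS and sized in packets or buffers, not Python objects.

```python
from collections import deque

class TailDropQueue:
    """Output queue with the tail-drop policy described above:
    when the queue is full, newly arriving packets are dropped."""
    def __init__(self, limit):
        self.limit = limit
        self.queue = deque()
        self.output_drops = 0  # counted as "output drops" (tail-drops)

    def enqueue(self, packet):
        if len(self.queue) >= self.limit:
            self.output_drops += 1   # dropped at the tail of the queue
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

# Five packets arrive at a queue that holds only three:
q = TailDropQueue(limit=3)
for p in range(5):
    q.enqueue(p)
print(q.output_drops)  # 2
```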
• Upgrade the link. The best solution but also the most expensive.
• Take some bandwidth from less important applications.
• Compress the payload of layer-2 frames.
• Compress the header of IP packets.
• Upgrade the link. The best solution but also the most expensive.
• Guarantee enough bandwidth to sensitive packets.
• Prevent congestion by randomly dropping less important packets
before congestion occurs.
The figures summarize typical application requirements and the service classes
that can be offered:

• Interactive applications (e.g. Telnet) require low bandwidth, low delay and
low jitter; occasional drops are not important.
• Batch applications (e.g. FTP) require high bandwidth; delay and jitter are
not important.
• Fragile applications (e.g. SNA) require low bandwidth and low delay and
tolerate no drops.

Service classes are then defined by the guarantees they provide:

• Silver: guaranteed bandwidth; no delay, jitter or loss guarantees.
• Bronze: limited bandwidth guarantee; no delay, jitter or loss guarantees.
• Best effort: no guarantees.
The history of the Internet can be divided into three QoS-related periods:
• Best-effort. The Internet was designed for best-effort, no-guarantee delivery
of packets. This behavior is still predominant in today’s Internet.
• Integrated Services model. Introduced to supplement the best-effort delivery
by setting aside some bandwidth for applications that require bandwidth and
delay guarantees. The Integrated Services model expects applications to signal
their requirements to the network. Resource Reservation Protocol (RSVP) is
used to signal QoS requirements to the network.
• Differentiated Services model. Added to provide more scalability in
providing QoS to IP packets. The main difference is that the network
recognizes packets (no signaling is needed) and provides the appropriate
services to them.
Today’s IP networks can use all three models at the same time.
Review Questions
Answer the following questions:
• What are the relevant parameters that define the quality of service?
• What can be done to give more bandwidth to an application?
• What can be done to reduce delay?
• What can be done to prevent packet loss?
• Name the three QoS models.
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe the IntServ model
• List the key benefits and drawbacks of the IntServ model
• List some implementations that are based on the IntServ model
• Describe the need for Common Open Policy Service (COPS)
The figure shows the COPS model: network devices send policy requests to a
Policy Decision Point (PDP), which replies with policy decisions.
Following is a list of some of the IETF standards (RFCs) that describe RSVP,
COPS, the IntServ model and applications:
• Resource ReSerVation Protocol (RSVP), Version 1, Functional Specification
(http://www.ietf.org/rfc/rfc2205.txt)
• RSVP Management Information Base using SMIv2
(http://www.ietf.org/rfc/rfc2206.txt)
• RSVP Extensions for IPSEC Data Flows (http://www.ietf.org/rfc/rfc2207.txt)
• Resource ReSerVation Protocol (RSVP), Version 1, Applicability Statement,
Some Guidelines on Deployment (http://www.ietf.org/rfc/rfc2208.txt)
• Resource ReSerVation Protocol (RSVP), Version 1, Message Processing
Rules (http://www.ietf.org/rfc/rfc2209.txt)
• The Use of RSVP with IETF Integrated Services
(http://www.ietf.org/rfc/rfc2210.txt)
• Specification of the Controlled-Load Network Element Service
(http://www.ietf.org/rfc/rfc2211.txt)
• Specification of Guaranteed Quality of Service
(http://www.ietf.org/rfc/rfc2212.txt)
• Integrated Services Management Information Base using SMIv2
(http://www.ietf.org/rfc/rfc2213.txt)
• Integrated Services Management Information Base, Guaranteed Service
Extensions using SMIv2 (http://www.ietf.org/rfc/rfc2214.txt)
• General Characterization Parameters for Integrated Service Network Elements
(http://www.ietf.org/rfc/rfc2215.txt)
RSVP, as a resource reservation protocol, was designed for use by end devices in
networks (for example, personal computers and servers). It is a protocol that has
to be supported by an application that requires network resources and needs
guarantees.
• Typical examples of applications that would benefit from RSVP are voice
sessions that require a small amount of bandwidth with low-delay propagation.
• Cisco routers that act as voice gateways can use RSVP to request resources
(controlled-load and guaranteed-delay).
• Cisco routers that use Multiprotocol Label Switching (MPLS) Traffic
Engineering (MPLS/TE) use RSVP with extensions to reserve bandwidth and
set up MPLS/TE tunnels through MPLS and RSVP enabled networks.
• Cisco Soft Phone or Microsoft NetMeeting are Windows applications that use
RSVP to get resources for their VoIP sessions.
There are an increasing number of applications that use RSVP to request QoS
guarantees from a network.
RSVP deployment options:
1) Explicit RSVP on each network node
2) RSVP pass-through and CoS (Class of Service) transport
– map RSVP to CoS at the network edge
– pass through RSVP requests to the egress
3) RSVP at network edges and pass-through with best-effort forwarding
in the core (if there is enough bandwidth in the core)
The figure illustrates three options available when implementing QoS mechanisms
via RSVP in a network.
1. The first option is to simply enable RSVP on all interfaces of all the routers in
the network. This approach is mainly used in enterprise networks that have
more predictable RSVP flows (in terms of quantity and direction because they
typically use hub-and-spoke topology). Large service provider networks are
less inclined to use RSVP throughout their networks either because RSVP
would require too many concurrent reservations on a single interface or
because the routers are not capable of providing guarantees to individual flows
on high-bandwidth interfaces.
2. An alternative option is to use RSVP on network edges, where there is
typically less bandwidth per interface and congestion is more likely. The
edge-to-core routers (for example, access or distribution layer routers) mark
RSVP flows with IP markers, which can then be used in a DiffServ-enabled
core (the Differentiated Services model is covered in the next lesson).
3. Another option is to use RSVP on network edges and rely on best-effort
delivery in a non-congested core.
All routers:
• WFQ applied per flow based on RSVP requests
In the first scenario, each router in the network processes RSVP messages and
keeps track of the special resource needs for each individual RSVP flow.
Weighted Fair Queuing (WFQ) can be used in the backbone to provide resource
allocation on a flow-by-flow basis.
One concern with this approach is that RSVP is resource intensive on backbone
routers - in terms of the amount of signaling and the amount of special information
that they need to keep on each RSVP flow.
A second issue is that WFQ is a very CPU-intensive algorithm and does not run at
high speed on today’s routers. In the backbone, high speed is a mandatory
requirement.
The figure shows RSVP flows (Premium and Standard) crossing a DiffServ
backbone:

Ingress router:
• RSVP mapped to classes (IP precedence)
• RSVP passed through to the egress

Backbone:
• packets classified on IP precedence
• WRED applied based on class

Egress router:
• RSVP sent on to the destination
• WFQ applied to manage the egress flow
Both RSVP and WFQ have been available for some time and can be used on all
low-end platforms and on high-end platforms that are typically used to concentrate
customer networks.
Newer RSVP mechanisms include:
• Mapping of RSVP to DSCP (the Differentiated Services model, with the details
of the DiffServ Code Point, is covered in the next lesson).
• Mapping of RSVP to ATM SVCs (this technology is covered in the “IP QoS -
IP over ATM” module).
+ RSVP benefits:
• Explicit resource admission control (end to end)
• Per-request policy admission control (authorization object, policy object)
• Signaling of dynamic port numbers (for example, H.323)
– RSVP drawbacks:
• Continuous signaling due to soft-state architecture
• Not scalable
The Common Open Policy Service (COPS) is an add-on to RSVP. It can be used
to offload certain tasks from network devices to a central server. The result is that
the configuration of individual devices is more standardized (template-based) and
all individual parameters are managed from a centralized location. In addition,
COPS supports admission control of individual flows (the network device
determines the available resources and the central server authorizes the flow).
Review Questions
Answer the following questions:
• What are the two building blocks of the Integrated Services model?
• Which protocol is used to signal QoS requirements to the network?
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe the DiffServ model
• List the key benefits of the DiffServ model compared to the IntServ model
• Describe the purpose of the DS field in IP headers
• Describe the interoperability between DSCP-based and IP-precedence-based
devices in a network
• Describe the Expedited Forwarding service
• Describe the Assured Forwarding service
The DiffServ model describes services and allows for more user-defined services
to be used in a DiffServ-enabled network.
Services are provided to classes. A class can be identified as a single application
or, as in most cases, it can be identified based on source or destination IP address.
The idea is for the network to recognize a class without having to receive any
request from applications. This allows the QoS mechanisms to be applied to other
applications that do not have the RSVP functionality, which is the case for 99% of
applications that use IP.
The introduction of the DiffServ Code Point (DSCP) replaces the IP precedence
but maintains interoperability with non-DS compliant devices (those that still use IP
precedence). Because of this backward-compatibility DiffServ can be gradually
deployed in large networks.
A traffic aggregate is a collection of all flows that require the same service. A
service is implemented using different QoS mechanisms (a QoS mechanism
implements a per-hop behavior).
The DiffServ field (DS field) is the former 8-bit Type of Service field. The main
difference is that the DSCP supports more classes (64) than IP precedence (8).
The most important part of designing QoS is to provision services as explained on
the next page.
The figure shows DiffServ terminology: an upstream DS domain and a downstream
DS domain together form a DS region. A DS egress boundary node of one domain
and a DS ingress boundary node of the next are connected by a boundary link,
with DS interior nodes inside each domain.
The DiffServ model uses the DS field in the IP header to mark packets according
to their classification into Behavior Aggregates (BAs). The DS field occupies the
same eight bits of the IP header that were previously used for the Type of Service
(ToS) field.
There are three IETF standards describing the purpose of those eight bits:
• RFC 791 includes the specification of the ToS field where the high-order three bits
are used for IP precedence. The other bits are used for delay, throughput,
reliability and cost.
• RFC 1812 modifies the meaning of the ToS field by removing any meaning
from the five low-order bits (those bits should all be zero).
• RFC 2474 replaces the ToS field with the DS field where the six high-order bits
are used for the DiffServ Code Point (DSCP). The remaining two bits are
currently not used.
Each DSCP value identifies a Behavior Aggregate (BA). Each BA is assigned a
per-hop behavior (PHB). Each PHB is implemented using the appropriate QoS
mechanism or a set of QoS mechanisms.
• Three pools:
– “xxxxx0” Standard Action
– “xxxx11” Experimental/Local Use
– “xxxx01” EXP/LU (possible std action)
• Default DSCP: “000000”
• Default PHB: FIFO, tail-drop
The history of the eight bits in question (the ToS field, now the DS field) can be
divided into three periods according to the RFCs describing the purpose of those bits:
RFC 791
RFC 791 defines the Type of Service field with the following components:
• Bits seven, six and five are used for IP precedence
• Bit four is used for delay (0 = Normal Delay, 1 = Low Delay)
• Bit three is used for throughput (0 = Normal Throughput, 1 = High Throughput)
• Bit two is used for reliability (0 = Normal Reliability, 1 = High Reliability)
• Bits one and zero are not used and should be zero (bit one was later given a
monetary-cost meaning by RFC 1349; this RFC also replaces the individual bits
with a four-bit ToS value to allow more types of services)
RFC 1812
RFC 1812 loosens the strict representation of the ToS field (obsoletes RFC 795).
RFC 2474
RFC 2474 replaces the ToS field with the DS field where a range of eight values
(Class Selector) is used for backward compatibility with IP precedence. There is
no compatibility with the delay, throughput, reliability and monetary-cost bits.
RFC 1812 simply prioritizes packets according to the precedence value. The PHB
is defined as the probability of timely forwarding. Packets with higher IP
precedence should (on the average) be forwarded in less time than packets with
lower IP precedence.
RFC 2474 adopts this set of PHBs and values by creating the Class Selector PHB
group. A Class Selector code point can be identified by the low-order three bits of
the DSCP (equivalently, the low-order five bits of the DS field) all being zero.
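The backward compatibility can be shown with plain bit arithmetic. This is an illustrative sketch (no Cisco-specific code): a Class Selector code point is the precedence bits followed by three zero bits, and a non-DS device reads the high-order three bits of any DSCP as IP precedence.

```python
def class_selector_dscp(precedence):
    """Class Selector code point for an IP precedence value:
    the three precedence bits followed by three zero bits ('xxx000')."""
    assert 0 <= precedence <= 7
    return precedence << 3

def precedence_of(dscp):
    """Backward compatibility: a non-DS compliant device interprets
    the high-order three bits of the DSCP as IP precedence."""
    return dscp >> 3

print(class_selector_dscp(5))   # 40 (binary 101000, CS5)
print(precedence_of(0b101110))  # 5  (EF is seen as precedence 5)
```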
• Priority Queuing
• IP RTP Prioritization
• Class-based Low-latency Queuing (CB-LLQ)
• Strict Priority queuing within Modified Deficit
Round Robin (MDRR) on GSR
AF1 001dd0
AF2 010dd0
AF3 011dd0
AF4 100dd0
• Each AF class uses three DSCP values
• Each AF class is independently forwarded with its
guaranteed bandwidth
• Differentiated RED is used within each class to
prevent congestion within the class
© 2001, Cisco Systems, Inc. IP QoS Introduction-51
As the figure illustrates there are three DSCP values assigned to each of the four
AF classes.
Assured Forwarding class Drop Probability DSCP value
AF class 1 Low 001 01 0
Medium 001 10 0
High 001 11 0
AF class 2 Low 010 01 0
Medium 010 10 0
High 010 11 0
AF class 3 Low 011 01 0
Medium 011 10 0
High 011 11 0
AF class 4 Low 100 01 0
Medium 100 10 0
High 100 11 0
As with Expedited Forwarding, there are multiple QoS mechanisms in the Cisco
IOS that can accommodate some or all of the requirements of the Assured Forwarding
PHB:
• The preferred implementation is to use Class-based Weighted Fair Queuing
(CB-WFQ) with four classes (four independent queues) and Weighted Random
Early Detection (WRED) within each queue.
• A similar solution can be provided on the Cisco 12000 series routers by using
Modified Deficit Round Robin (MDRR) queuing with WRED in each
queue. The AF PHB can also be implemented using the old-fashioned IP
precedence. The only restriction is the number of available IP precedence
values.
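The 'cccdd0' layout from the AF table above can be computed directly. This sketch follows the RFC 2597 encoding; the function names are invented for illustration.

```python
def af_dscp(af_class, drop_precedence):
    """DSCP for AF class 1-4 with drop precedence 1 (low) to 3 (high):
    bit pattern 'cccdd0' - class bits, drop bits, trailing zero."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return (af_class << 3) | (drop_precedence << 1)

# AF class 1, low drop probability (AF11): 001 01 0
print(format(af_dscp(1, 1), '06b'))  # 001010 (decimal 10)
# AF class 4, high drop probability (AF43): 100 11 0
print(af_dscp(4, 3))                 # 38
```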
• Example 1: four classes but no differentiated dropping:
– AF1—IP precedence 1
– AF2—IP precedence 2
– AF3—IP precedence 3
– AF4—IP precedence 4
• Example 2: two classes with differentiated dropping (two drop precedence values):
– AF1—IP precedence 1 for high-drop, IP precedence 2 for low-drop
– AF2—IP precedence 3 for high-drop, IP precedence 4 for low-drop
Review Questions
Answer the following questions:
• What are the benefits of the DiffServ model compared to the IntServ model?
• What is a DiffServ Code Point?
• Name the standard PHBs.
• How was backward compatibility with IP precedence achieved?
• Describe the PHB of Assured Forwarding.
• Describe the PHB of Expedited Forwarding.
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe different classification options in IP networks
• Describe different marking options in IP networks
• List the mechanisms that are capable of measuring the rate of traffic
• List the mechanisms that are used for traffic conditioning, shaping and avoiding
congestion
• List the forwarding mechanisms available in Cisco IOS
• List the queuing mechanisms available in Cisco IOS
Input I/O Processing - Forwarding - Output I/O Processing

Forwarding options:
• Process switching
• Fast/optimum switching
• NetFlow switching
• CEF switching
Basic router function takes packets received on the input interface, makes a
forwarding decision and transmits the packet out through the output interface.
Today’s routers, however, can do much more than that. The figure lists a small
subset of features that affect packet processing on input or output interfaces.
Following is a list of some of the features available with Cisco routers:
• Payload compression (Stacker, Predictor)
• Header compression (TCP and RTP header compression)
• BGP-policy marking (CEF-based marking or QoS Policy Propagation through BGP)
• Traffic Policing (CAR, CB Policing)
• Traffic Shaping (GTS, FRTS, CB-Shaping)
• Class-based marking
• Encryption (CET or IPsec)
• WRED
• Policy-based Routing
• Accounting (IP accounting, NetFlow accounting)
• Filtering (access lists)
• Reverse-path checking
• Address and port translation (NAT, PAT)
• Stateful filtering (firewalling)
• Web-cache redirection
IP QoS mechanisms can perform different types of actions. All QoS mechanisms
can be divided into the following QoS actions:
• Classification – most QoS mechanisms support multiple classes. There are
different classification tools available with different QoS mechanisms (for
example, access lists, route maps, class maps and rate-limit access lists). Some
QoS mechanisms have the capability to match directly on certain parameters.
For example:
– CAR (QoS group and DSCP)
– WRED (IP precedence)
– ToS-based dWFQ (IP precedence)
– QoS-group-based dWFQ (QoS group)
– WFQ (flow parameters)
– PQ and CQ (interface, packet size and protocol)
• Some mechanisms require information about the traffic rate of classes (for
example, CAR, GTS, FRTS, CB-Shaping, CB-Policing, CB-WFQ, CB-LLQ,
MDRR and IP RTP Prioritization).
• Some mechanisms are used for dropping purposes. They utilize a dropping
scheme different from the usual tail-drop. WRED is an example of such a
mechanism.
• Some mechanisms are used to limit traffic rate by dropping excess traffic
(CAR and CB-Policing).
• Some mechanisms are used to limit traffic rate by delaying excess traffic (GTS,
FRTS and CB-Shaping).
• Some mechanisms have the capability to mark packets with different types of
markers (IP precedence, DSCP, QoS group, MPLS experimental bits, ATM
CLP bit, Frame Relay DE bit and 802.1Q or ISL priority/CoS bits).
• Some mechanisms are used for queuing on output interfaces (for example,
FIFO, PQ, CQ, WFQ, dWFQ, ToS-based dWFQ, QoS-group-based dWFQ,
CB-WFQ, IP RTP Prioritization and MDRR).
• Cisco IOS also has different types of forwarding mechanisms (Process
Switching, Fast Switching, Optimum Switching, Silicon Switching, Autonomous
Switching, NetFlow Switching, Cisco Express Forwarding and Policy-based
Routing).
The figure lists QoS mechanisms in the Cisco IOS that have the capability to
measure the rate of traffic by using the Token Bucket model.
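The Token Bucket model can be sketched as follows. This is a simplified single-bucket meter for illustration only; the parameter names are invented, and actual IOS policers add further detail (for example, conformed and extended burst sizes).

```python
class TokenBucket:
    """Token bucket meter: tokens accrue at the contractual rate up to
    the burst size; a packet conforms if enough tokens are available."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate_bytes = rate_bps / 8      # token refill rate, bytes/second
        self.burst = burst_bytes            # bucket depth
        self.tokens = burst_bytes           # bucket starts full
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        # replenish tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate_bytes)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes     # conform action (e.g. transmit)
            return True
        return False                        # exceed action (e.g. drop, mark)

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 8 kbps, 1500-byte burst
print(tb.conforms(0.0, 1500))  # True  - the full bucket absorbs the burst
print(tb.conforms(0.1, 1500))  # False - only 100 bytes replenished since
```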
The figure lists markers that can be set using Cisco routers and the queuing
mechanisms that have marking capabilities.
The following table lists all the mechanisms that have marking capabilities and the
markers that are supported by those mechanisms.
QoS Mechanism                               Available markers
Committed Access Rate (CAR)                 IP precedence, DSCP, QoS group,
                                            MPLS experimental bits
QoS Policy Propagation through BGP (QPPB)   IP precedence, QoS group
Policy-based Routing (PBR)                  IP precedence, QoS group
Class-based Marking                         IP precedence, DSCP, QoS group,
                                            MPLS experimental bits, ATM CLP bit,
                                            Frame Relay DE bit,
                                            802.1Q/ISL CoS/priority
Marker                   Preservation                         Value range
QoS group                Local to a router                    100 values (0 to 99)
MPLS experimental bits   Throughout an MPLS network           8 values
                         (optionally throughout an
                         entire IP network)
Frame Relay DE bit       Throughout a Frame Relay network     2 values (0 or 1)
ATM CLP bit              Throughout an ATM network            2 values (0 or 1)
IEEE 802.1Q or ISL CoS   Throughout a LAN switched network    8 values (0 to 7)
• Shaping mechanisms:
– Generic Traffic Shaping (GTS)
– Frame Relay Traffic Shaping (FRTS)
– Class-based Shaping
– Hardware shaping on ATM VC
The figure lists four mechanisms that are used for traffic shaping purposes. All of
these mechanisms are implemented in software (Cisco IOS) except for ATM
shaping, which is implemented in hardware.
Traffic shaping is used to limit the departure rate of packets, frames or cells by
delaying them if they exceed the contractual rate. A token bucket model is used to
measure the arrival rate and determine when packets can be forwarded.
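In contrast to policing, a shaper delays excess packets rather than dropping them. A minimal sketch (the helper name is invented, and this ignores queue limits, which real shapers do enforce):

```python
def shaper_release_times(packet_bytes_list, rate_bps):
    """Pure-delay shaper sketch: packets are released no faster than the
    contractual rate; excess traffic is queued and delayed, not dropped."""
    release = []
    next_free = 0.0  # earliest time the accumulated credit allows a send
    for size in packet_bytes_list:
        release.append(next_free)
        next_free += size * 8 / rate_bps  # time to earn credit for this packet
    return release

# Three 1000-byte packets arriving at once, shaped to 8 kbps:
print(shaper_release_times([1000, 1000, 1000], 8000))  # [0.0, 1.0, 2.0]
```

A policer with the same contract would transmit the first packet and drop the rest; the shaper spreads them out in time instead.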
• Dropping mechanisms
– Committed Access Rate (CAR) and Class-based
Policing can drop packets that exceed the
contractual rate
– Weighted Random Early Detection (WRED) can
randomly drop packets when an interface is
nearing congestion
Another way of enforcing rate limits is to drop excess traffic. Committed Access
Rate (CAR) and Class-based Policing can be used for this purpose.
Weighted Random Early Detection (WRED) is a congestion-avoidance mechanism
that randomly drops packets when interfaces are nearing congestion.
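The WRED drop profile can be sketched as a function of queue depth: no drops below a minimum threshold, a linearly rising probability between the thresholds, and tail-drop beyond the maximum. This is only the shape of the profile; real WRED operates on an exponentially weighted average queue size and keeps a separate profile per IP precedence.

```python
def wred_drop_probability(avg_queue, min_th, max_th, mark_prob_denominator):
    """WRED-style drop profile sketch:
    - below min_th: never drop
    - above max_th: always drop (tail-drop region)
    - in between: probability rises linearly to 1/mark_prob_denominator"""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    slope = (avg_queue - min_th) / (max_th - min_th)
    return slope / mark_prob_denominator

print(wred_drop_probability(10, 20, 40, 10))  # 0.0  - below min threshold
print(wred_drop_probability(30, 20, 40, 10))  # 0.05 - halfway, max prob 1/10
print(wred_drop_probability(45, 20, 40, 10))  # 1.0  - above max threshold
```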
The last mechanism that handles packets in the IOS is the queuing mechanism.
The figure lists most of the queuing mechanisms.
All queuing mechanisms include a drop policy. Most mechanisms use a simple
tail-drop scheme (the last packet to arrive is dropped if there is no room in the queue).
Weighted Fair Queuing (WFQ) uses a more intelligent dropping scheme, which
is discussed in the “IP QoS – Queuing mechanisms” module. Some queuing
mechanisms also include the Weighted Random Early Detection (WRED) to
prevent congestion in their queues.
Review Questions
Answer the following questions:
• Name the QoS building blocks.
• What is the purpose of classification?
• What is the purpose of marking?
• Which markers do you know?
• Which mechanisms can classify and mark packets?
• Which mechanisms have the ability to measure the rate of traffic?
• Which forwarding mechanisms do you know?
• Which queuing mechanisms do you know?
• How, when and where do routers drop packets?
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe a typical structure of an enterprise network
• Describe the need for QoS in enterprise networks
• List typical QoS requirements in enterprise networks
• List the QoS mechanisms that are typically used in enterprise networks
Core (central sites and data centres)
This lesson describes typical Enterprise Networks to show the topology and
technologies involved in such networks. Designing IP QoS networks largely
depends on the topology and QoS requirements.
The figure illustrates a three-layered network:
1. The core interconnects the data center(s) with the distribution-layer routers.
2. The distribution layer routers concentrate links towards a number of access-
layer routers.
3. The access-layer routers connect branch offices to the network.
Most traffic in enterprise networks goes between branches and the data center.
Core (central sites and data centres)
MPLS/VPN (new)
Access (branch offices)
Modern enterprise networks can use MPLS/VPN backbones to get a virtual full
mesh even though most traffic still goes between the data center and the branches.
Implementing QoS in such environments requires QoS guarantees from the service
provider and provisioning in the enterprise part of the network.
The figure shows a case study where relatively low bandwidths are used, which
calls for QoS to manage bandwidth according to the needs of the enterprise.
• Core - Distribution
– Custom queuing
• Distribution - Branch
– Priority queuing or
– Custom Queuing with a priority queue
• Options
– Traffic shaping
– Adaptation to Frame Relay congestion notification
The figure lists mechanisms that could be used to accommodate the need of the
enterprise. This solution would normally be used in networks where an old IOS
version is being used and an upgrade is not an option (due to the cost of getting
newer IOS versions, memory upgrade, flash upgrade, etc.). The listed mechanisms
(Priority Queuing and Custom Queuing) have been available since Cisco IOS
version 10.0.
• Core - Distribution
– Class-based Weighted Fair Queuing (CB-WFQ)
– Class-based Low Latency Queuing (CB-LLQ)
• Distribution - Branch
– Class-based Weighted Fair Queuing (CB-WFQ)
– Class-based Low Latency Queuing (CB-LLQ)
• Options
– Class-based Shaping
– Adaptation to Frame Relay congestion notification
– Class-based Policing
– Weighted Random Early Detection (WRED)
© 2001, Cisco Systems, Inc. IP QoS Introduction-79
This figure shows a solution using advanced mechanisms to provide better control
of bandwidth usage. This solution requires newer Cisco IOS software versions
(12.1 or 12.2, depending on the details of the implementation).
Review Questions
Answer the following questions:
• What is the typical enterprise network topology?
• How is resilience achieved?
• Based on which information do typical enterprise networks apply QoS?
Objectives
Upon completion of this lesson, you will be able to perform the following tasks:
• Describe a typical structure of a service provider network
• Describe the need for QoS in service provider networks
• List typical QoS requirements in service provider networks
• List the QoS mechanisms that can be used in service provider networks
Core: redundant connections (ATM, SONET/SDH, DPT, GE, ... rings)
Distribution (regional POPs)
Access (customers): single connections (Frame Relay, ATM, leased line (analog,
TDM), dial-up (PSTN, ISDN, GSM), xDSL, (Fast) Ethernet, ...); optional
redundant connections (dial backup)
• Typical service provider networks use a high-speed partially-meshed core (backbone)
• Regional POPs use two or more connections to the core
• There may be another layer of smaller POPs connected to distribution-layer POPs
• Customers are usually connected to the service provider via a single point-to-point
link (a secondary link or a dial line can be used to improve resilience)
As the figure illustrates, Service Provider networks differ significantly from typical
enterprise networks. An enterprise network is a tool that supports the enterprise,
whereas for a Service Provider the network is the business itself. Enterprise
networks are concerned with providing quality to business-critical applications,
while Service Providers tend to broaden their service offering by introducing QoS.
Service Providers want to offer customers more than plain connectivity. They want
to establish differentiated levels of service for customers, with incremental pricing
and SLAs. Customers should not only be able to shop around among a number of
service providers that offer connectivity to the Internet or provide MPLS/VPNs, but
also have a menu of services to choose from. Some customers are satisfied with
the best-effort service; some want certain service guarantees.
Service Provider networks generally use newer Cisco IOS software and can
therefore deploy the latest available mechanisms. The case study is implemented
using CB-WFQ in combination with WRED and CB-LLQ at network edges
(between the access and distribution layers). WRED can be used on high-speed
links (on core links).
Review Questions
Answer the following questions:
• What is the typical topology of service provider networks?
• How is resilience achieved?
• Based on which information do typical service provider networks apply QoS?
Review Questions
Answer the following questions:
• Name the QoS building blocks.
Classification, marking, metering, dropping, policing, shaping and queuing.
• What is the purpose of classification?
Classification is used to assign packets to traffic classes with different
QoS requirements (behavior aggregates).
• What is the purpose of marking?
Marking is used to allow simplified classification on other devices in the
network.
• Which markers do you know?
IP precedence, DSCP, MPLS experimental bits, QoS group, Frame
Relay DE bit, ATM CLP bit, 802.1q CoS bits, ISL priority bits.
• Which mechanisms can classify and mark packets?
Policy-based Routing (PBR)
Committed Access Rate (CAR)
QoS Policy Propagation through BGP (QPPB)
Class-based Policing
Class-based Marking
• Which mechanisms have the ability to measure the rate of traffic?
Committed Access Rate (CAR)
Generic Traffic Shaping (GTS)
Frame Relay Traffic Shaping (FRTS)
Class-based Weighted Fair Queuing (CB-WFQ)
Class-based Low Latency Queuing (CB-LLQ)
Class-based Policing
Class-based Shaping
IP RTP Prioritization
• Which forwarding mechanisms do you know?
Process Switching, Fast Switching, Optimum Switching, NetFlow
Switching, CEF switching …
Review Questions
Answer the following questions:
• What is the typical enterprise network topology?
Enterprise networks typically use the hub-and-spoke topology.
• How is resilience achieved?
Resilience is achieved by using redundant links.
• Based on which information do typical enterprise networks apply QoS?
Enterprise networks typically provide QoS to applications. Applications
are typically identified based on the TCP or UDP port numbers.
Review Questions
Answer the following questions:
• What is the typical topology of service provider networks?
Typical service provider networks use a partially meshed core with a
redundant hub-and-spoke topology for the POPs.
• How is resilience achieved?
Resilience is achieved by using partial mesh (core) and redundant links
(distribution, access).
• Based on which information do typical service provider networks apply QoS?
Service providers typically apply QoS to customer traffic. Customer
traffic is identified based on source or destination IP addresses.