
Cisco IP Contact Center Enterprise Edition

Solution Reference Network Design


(SRND)
Cisco IP Contact Center Enterprise Edition Releases 5.0 and 6.0
May 2006

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100

Text Part Number: OL-7279-04


THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL
STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT
WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public
domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

CCSP, CCVP, the Cisco Square Bridge logo, Follow Me Browsing, and StackWise are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, and
iQuick Study are service marks of Cisco Systems, Inc.; and Access Registrar, Aironet, BPX, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, Cisco, the Cisco Certified
Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Enterprise/Solver, EtherChannel, EtherFast,
EtherSwitch, Fast Step, FormShare, GigaDrive, GigaStack, HomeLink, Internet Quotient, IOS, IP/TV, iQ Expertise, the iQ logo, iQ Net Readiness Scorecard, LightStream,
Linksys, MeetingPlace, MGX, the Networkers logo, Networking Academy, Network Registrar, Packet, PIX, Post-Routing, Pre-Routing, ProConnect, RateMUX, ScriptShare,
SlideCast, SMARTnet, The Fastest Way to Increase Your Internet Quotient, and TransPath are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States
and certain other countries.

All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (0601R)

Cisco IP Contact Center Enterprise Edition SRND


Copyright © 2004 Cisco Systems, Inc. All rights reserved.
CONTENTS

Preface xi

Revision History xi

Obtaining Documentation xii


Cisco.com xii
Ordering Documentation xii
Documentation Feedback xii

Obtaining Technical Assistance xiii


Cisco Technical Support Website xiii
Submitting a Service Request xiii
Definitions of Service Request Severity xiv

Obtaining Additional Publications and Information xiv

CHAPTER 1 Architecture Overview 1-1

Cisco CallManager 1-1

Cisco Internet Service Node (ISN) 1-2

Cisco IP Interactive Voice Response (IP IVR) 1-3

Cisco Intelligent Contact Management (ICM) Software 1-3


Basic IPCC Call and Message Flow 1-4
ICM Software Modules 1-5
IPCC Components, Terminology, and Concepts 1-6
IPCC Agent Desktop 1-6
Administrative Workstation 1-7
JTAPI Communications 1-8
ICM Routing Clients 1-10
Device Targets 1-10
Labels 1-11
Agent Desk Settings 1-11
Agents 1-12
Skill Groups 1-12
Directory (Dialed) Numbers and Routing Scripts 1-12
Agent Login and State Control 1-12
IPCC Routing 1-12
Translation Routing and Queuing 1-13
Reroute On No Answer (RONA) 1-14

Cisco IP Contact Center Enterprise Edition SRND


OL-7279-04 iii
Contents

Combining IP Telephony and IPCC in the Same Cisco CallManager Cluster 1-15

Queuing in an IPCC Environment 1-15

Transfers in an IPCC Environment 1-16


Dialed Number Plan 1-16
Dial Plan Type 1-17
Post Route 1-17
Route Request 1-17
Single-Step (Blind) Transfer 1-17
Consultative Transfer 1-18
Reconnect 1-19
Alternate 1-19
Non-ICM Transfers 1-19
Agent-to-Agent Transfers 1-20
Transferring from an IVR to a Specific Agent 1-20
Transfer Reporting 1-20
Combination or Multiple Transfers 1-20
Transfers of Conferenced Calls 1-21
PSTN Transfers (Takeback N Transfer, or Transfer Connect) 1-21

Call Admission Control 1-21


Gatekeeper Controlled 1-22
Locations Controlled 1-23

CHAPTER 2 Deployment Models 2-1

Single Site 2-2


Treatment and Queuing with IP IVR 2-3
Treatment and Queuing with ISN 2-3
Transfers 2-3
Multi-Site with Centralized Call Processing 2-4
Centralized Voice Gateways 2-4
Treatment and Queuing with IP IVR 2-6
Treatment and Queuing with ISN 2-6
Transfers 2-6
Distributed Voice Gateways 2-7
Treatment and Queuing with IP IVR 2-9
Treatment and Queuing with ISN 2-9
Transfers 2-9


Multi-Site with Distributed Call Processing 2-9


Distributed Voice Gateways with Treatment and Queuing Using IP IVR 2-10
Treatment and Queuing 2-11
Transfers 2-12
Distributed Voice Gateways with Treatment and Queuing Using ISN 2-12
Treatment and Queuing 2-13
Transfers 2-14
Distributed ICM Option with Distributed Call Processing Model 2-14
Clustering Over the WAN 2-15
Centralized Voice Gateways with Centralized Call Treatment and Queuing Using IP IVR 2-17
Centralized Voice Gateways with Centralized Call Treatment and Queuing Using ISN 2-18
Distributed Voice Gateways with Distributed Call Treatment and Queuing Using ISN 2-19
Site-to-Site ICM Private Communications Options 2-20
ICM Central Controller Private and Cisco CallManager PG Private Across Dual Links 2-20
ICM Central Controller Private and Cisco CallManager PG Private Across Single Link 2-21
Bandwidth and QoS Requirements for IPCC Clustering Over the WAN 2-22
Highly Available WAN 2-22
ICM Private WAN 2-23
Failure Analysis of IPCC Clustering Over the WAN 2-26
Entire Central Site Loss 2-26
Private Connection Between Site 1 and Site 2 2-26
Connectivity to Central Site from Remote Agent Site 2-26
Highly Available WAN Failure 2-26
Remote Agent Over Broadband 2-27
Remote Agent with IP Phone Deployed via the Business Ready Teleworker Solution 2-30

IPCC Outbound (Blended Agent) Option 2-31

Traditional ACD Integration 2-34

Traditional IVR Integration 2-35


Using PBX Transfer 2-35
Using PSTN Transfer 2-36
Using IVR Double Trunking 2-37
Using Cisco CallManager Transfer and IVR Double Trunking 2-38

CHAPTER 3 Design Considerations for High Availability 3-1

Designing for High Availability 3-1

Data Network Design Considerations 3-5

Cisco CallManager and CTI Manager Design Considerations 3-7


Configuring ICM for CTI Manager Redundancy 3-10


IP IVR (CRS) Design Considerations 3-11


IP IVR (CRS) High Availability Using Cisco CallManager 3-13
IP IVR (CRS) High Availability Using ICM 3-13
Internet Service Node (ISN) Design Considerations 3-13

Multi-Channel Design Considerations (Cisco Email Manager Option and Cisco Collaboration Server Option) 3-15
Cisco Email Manager Option 3-17

Cisco Collaboration Server Option 3-18

Cisco IPCC Outbound Option Design Considerations 3-19

Peripheral Gateway Design Considerations 3-20


Cisco CallManager Failure Scenarios 3-22
ICM Failover Scenarios 3-23
Scenario 1 - Cisco CallManager and CTI Manager Fail 3-23
Scenario 2 - Cisco CallManager PG Side A Fails 3-24
Scenario 3 - Only Cisco CallManager Fails 3-25
Scenario 4 - Only CTI Manager Fails 3-26
IPCC Scenarios for Clustering over the WAN 3-28
Scenario 1 - ICM Central Controller or Peripheral Gateway Private Network Fails 3-28
Scenario 2 - Visible Network Fails 3-29
Scenario 3 - Visible and Private Networks Both Fail (Dual Failure) 3-30
Scenario 4 - Remote Agent Location WAN Fails 3-31
Understanding Failure Recovery 3-31
Cisco CallManager Service 3-31
IP IVR (CRS) 3-32
ICM 3-32
Cisco CallManager PG and CTI Manager Service 3-32
ICM Voice Response Unit PG 3-33
ICM Call Router and Logger 3-34
Admin Workstation Real-Time Distributor (RTD) 3-35
CTI Server 3-37
CTI OS Considerations 3-38

Cisco Agent Desktop Considerations 3-39

Other Considerations 3-39

CHAPTER 4 Sizing Call Center Resources 4-1

Call Center Basic Traffic Terminology 4-1

Call Center Resources and the Call Timeline 4-4


Erlang Calculators as Design Tools 4-4


Erlang-C 4-5
Erlang-B 4-5
Cisco IPC Resource Calculator 4-6
IPC Resource Calculator Input Fields (What You Must Provide) 4-7
IPC Resource Calculator Output Fields (What You Want to Calculate) 4-8

Sizing Call Center Agents, IVR Ports, and Trunks 4-11


Basic Call Center Example 4-11
Call Treatment Example 4-13
After-Call Work Time (Wrap-up Time) Example 4-14
Call Center Sizing With Self-Service IVR Applications 4-15
Call Center Example with IVR Self-Service Application 4-16
Sizing ISN Components 4-20
ISN Server Sizing 4-20
Sizing ISN Licenses 4-23
ISN Software Licenses 4-23
ISN Session Licenses 4-23
Alternative Simplified Method for ISN Capacity Sizing 4-23
Cisco IOS Gateway Sizing 4-24
ICM Peripheral Gateway (PG) Sizing 4-24
Prompt Media Server Sizing 4-24
Agent Staffing Considerations 4-25

Call Center Design Considerations 4-25

CHAPTER 5 Sizing IPCC Components and Servers 5-1

Sizing Considerations for IPCC 5-1


Core IPCC Components 5-1
Minimum Hardware Configurations for IPCC Core Components 5-8
Additional Sizing Factors 5-9
Peripheral Gateway and Server Options 5-10

CTI OS 5-11

Cisco Agent Desktop Component Sizing 5-12


Cisco Agent Desktop Base Services 5-13
Cisco Agent Desktop VoIP Monitor Service 5-13
Cisco Agent Desktop Recording and Playback Service 5-13

Summary 5-14


CHAPTER 6 Sizing Cisco CallManager Servers For IPCC 6-1

Call Processing With IPCC Enterprise 6-1

IPCC Clustering Guidelines 6-2

IPCC Enterprise with Cisco CallManager Releases 3.1 and 3.2 6-3

IPCC Enterprise with Cisco CallManager Releases 3.3 and Later 6-4

Cisco CallManager Platform Capacity Planning with IPCC 6-4

Cisco CallManager Capacity Tool 6-5

Supported Cisco CallManager Server Platforms for IPCC Enterprise 6-7

Call Processing Redundancy with IPCC 6-9

Cluster Configurations for Redundancy 6-10

Load Balancing With IPCC 6-13

Impact of IPCC Application on Cisco CallManager Performance and Scalability 6-13

CHAPTER 7 Agent Desktop and Supervisor Desktop 7-1

Types of IPCC Agent Desktops 7-2


Cisco Agent Desktop and Cisco Supervisor Desktop 7-3
Cisco Agent Desktop 7-5
IP Phone Agent (IPPA) 7-6
Cisco Supervisor Desktop 7-6
CTI Object Server (CTI OS) Toolkit 7-6
Additional Information about Cisco Agent Desktop and Supervisor Desktop 7-8

CHAPTER 8 Bandwidth Provisioning and QoS Considerations 8-1

IPCC Network Architecture Overview 8-2


Network Segments 8-3
UDP Heartbeat and TCP Keep-Alive 8-4
IP-Based Prioritization and QoS 8-5
Traffic Flow 8-6
Public Network Traffic Flow 8-6
Private Network Traffic Flow 8-6
Bandwidth and Latency Requirements 8-7

Network Provisioning 8-8


Quality of Service 8-8
QoS Planning 8-8
Public Network Marking Requirements 8-8
Configuring QoS on IP Devices 8-9
Performance Monitoring 8-10


Bandwidth Sizing 8-10


IPCC Private Network Bandwidth 8-10
IPCC Public Network Bandwidth 8-11
Bandwidth Requirements for CTI OS Agent Desktop 8-11
CTI OS Client/Server Traffic Flows and Bandwidth Requirements 8-11
Best Practices and Options for CTI OS Server and CTI OS Agent Desktop 8-12
Bandwidth Requirements for Cisco Agent Desktop Release 6.0 8-13
Call Control Bandwidth Usage 8-13
Silent Monitoring Bandwidth Usage 8-16
Service Placement Recommendations 8-24
Quality of Service (QoS) Considerations 8-24
Cisco Desktop Component Port Usage 8-25
Integrating Cisco Agent Desktop Release 6.0 into a Citrix Thin-Client Environment 8-25

CHAPTER 9 Securing IPCC 9-1

Introduction to Security 9-1

Security Best Practices 9-2


Default (Standard) Windows 2000 Server Operating System Installation 9-2
Cisco-Provided Windows 2000 Server Installation (CIPT OS) 9-3
Patch Management 9-3
Default (Standard) Windows 2000 Server Operating System Installation 9-3
Cisco-Provided Windows 2000 Server Installation (CIPT OS) 9-3
Antivirus 9-4

Cisco Security Agent 9-5

Firewalls and IPSec 9-6


Firewalls 9-6
IPSec and NAT 9-6

Security Features in Cisco CallManager Release 4.0 9-8

GLOSSARY

INDEX

Preface

This document provides design considerations and guidelines for implementing Cisco IP Contact Center
(IPCC) Enterprise Edition solutions based on the Cisco Architecture for Voice, Video, and Integrated
Data (AVVID).
This document builds upon ideas and concepts presented in the Cisco IP Telephony Solution Reference
Network Design (SRND), which is available online at
http://www.cisco.com/go/srnd
This document assumes that you are already familiar with basic contact center terms and concepts and
with the information presented in the Cisco IP Telephony SRND. To review IP Telephony terms and
concepts, refer to the documentation at the preceding URL.

Revision History
Unless stated otherwise, the information in this document applies specifically to Cisco IP Contact Center
Enterprise Edition Releases 5.0 and 6.0.
This document may be updated at any time without notice. You can obtain the latest version of this
document online at
http://www.cisco.com/go/srnd
Visit this Cisco.com website periodically and check for documentation updates by comparing the
revision date (on the front title page) of your copy with the revision date of the online document.
The following table lists the revision history for this document.

Revision Date Comments


May, 2006 Corrected a few errors in the text and updated some of the referenced URLs.
December, 2005 Fixed several broken links in the online document.
June, 2005 Corrected various typographical errors in the document.
March, 2005 Initial version of this document for Cisco IPCC Enterprise Edition Releases 5.0 and 6.0.


Obtaining Documentation
Cisco documentation and additional literature are available on Cisco.com. Cisco also provides several
ways to obtain technical assistance and other technical resources. These sections explain how to obtain
technical information from Cisco Systems.

Cisco.com
You can access the most current Cisco documentation at this URL:
http://www.cisco.com/univercd/home/home.htm
You can access the Cisco website at this URL:
http://www.cisco.com
You can access international Cisco websites at this URL:
http://www.cisco.com/public/countries_languages.shtml

Ordering Documentation
You can find instructions for ordering documentation at this URL:
http://www.cisco.com/univercd/cc/td/doc/es_inpck/pdi.htm
You can order Cisco documentation in these ways:
• Registered Cisco.com users (Cisco direct customers) can order Cisco product documentation from
the Ordering tool:
http://www.cisco.com/en/US/partner/ordering/index.shtml
• Nonregistered Cisco.com users can order documentation through a local account representative by
calling Cisco Systems Corporate Headquarters (California, USA) at 408 526-7208 or, elsewhere in
North America, by calling 1 800 553-NETS (6387).

Documentation Feedback
You can send comments about technical documentation to bug-doc@cisco.com.
You can submit comments by using the response card (if present) behind the front cover of your
document or by writing to the following address:
Cisco Systems
Attn: Customer Document Ordering
170 West Tasman Drive
San Jose, CA 95134-9883
We appreciate your comments.


Obtaining Technical Assistance


For all customers, partners, resellers, and distributors who hold valid Cisco service contracts, Cisco
Technical Support provides 24-hour-a-day, award-winning technical assistance. The Cisco Technical
Support Website on Cisco.com features extensive online support resources. In addition, Cisco Technical
Assistance Center (TAC) engineers provide telephone support. If you do not hold a valid Cisco service
contract, contact your reseller.

Cisco Technical Support Website


The Cisco Technical Support Website provides online documents and tools for troubleshooting and
resolving technical issues with Cisco products and technologies. The website is available 24 hours a day,
365 days a year, at this URL:
http://www.cisco.com/techsupport
Access to all tools on the Cisco Technical Support Website requires a Cisco.com user ID and password.
If you have a valid service contract but do not have a user ID or password, you can register at this URL:
http://tools.cisco.com/RPF/register/register.do

Note Use the Cisco Product Identification (CPI) tool to locate your product serial number before submitting
a web or phone request for service. You can access the CPI tool from the Cisco Technical Support
Website by clicking the Tools & Resources link under Documentation & Tools. Choose Cisco Product
Identification Tool from the Alphabetical Index drop-down list, or click the Cisco Product
Identification Tool link under Alerts & RMAs. The CPI tool offers three search options: by product ID
or model name; by tree view; or for certain products, by copying and pasting show command output.
Search results show an illustration of your product with the serial number label location highlighted.
Locate the serial number label on your product and record the information before placing a service call.

Submitting a Service Request


Using the online TAC Service Request Tool is the fastest way to open S3 and S4 service requests. (S3
and S4 service requests are those in which your network is minimally impaired or for which you require
product information.) After you describe your situation, the TAC Service Request Tool provides
recommended solutions. If your issue is not resolved using the recommended resources, your service
request is assigned to a Cisco TAC engineer. The TAC Service Request Tool is located at this URL:
http://www.cisco.com/techsupport/servicerequest
For S1 or S2 service requests or if you do not have Internet access, contact the Cisco TAC by telephone.
(S1 or S2 service requests are those in which your production network is down or severely degraded.)
Cisco TAC engineers are assigned immediately to S1 and S2 service requests to help keep your business
operations running smoothly.
To open a service request by telephone, use one of the following numbers:
Asia-Pacific: +61 2 8446 7411 (Australia: 1 800 805 227)
EMEA: +32 2 704 55 55
USA: 1 800 553-2447
For a complete list of Cisco TAC contacts, go to this URL:
http://www.cisco.com/techsupport/contacts


Definitions of Service Request Severity


To ensure that all service requests are reported in a standard format, Cisco has established severity
definitions.
Severity 1 (S1)—Your network is “down,” or there is a critical impact to your business operations. You
and Cisco will commit all necessary resources around the clock to resolve the situation.
Severity 2 (S2)—Operation of an existing network is severely degraded, or significant aspects of your
business operation are negatively affected by inadequate performance of Cisco products. You and Cisco
will commit full-time resources during normal business hours to resolve the situation.
Severity 3 (S3)—Operational performance of your network is impaired, but most business operations
remain functional. You and Cisco will commit resources during normal business hours to restore service
to satisfactory levels.
Severity 4 (S4)—You require information or assistance with Cisco product capabilities, installation, or
configuration. There is little or no effect on your business operations.

Obtaining Additional Publications and Information


Information about Cisco products, technologies, and network solutions is available from various online
and printed sources.
• Cisco Marketplace provides a variety of Cisco books, reference guides, and logo merchandise. Visit
Cisco Marketplace, the company store, at this URL:
http://www.cisco.com/go/marketplace/
• The Cisco Product Catalog describes the networking products offered by Cisco Systems, as well as
ordering and customer support services. Access the Cisco Product Catalog at this URL:
http://cisco.com/univercd/cc/td/doc/pcat/
• Cisco Press publishes a wide range of general networking, training and certification titles. Both new
and experienced users will benefit from these publications. For current Cisco Press titles and other
information, go to Cisco Press at this URL:
http://www.ciscopress.com
• Packet magazine is the Cisco Systems technical user magazine for maximizing Internet and
networking investments. Each quarter, Packet delivers coverage of the latest industry trends,
technology breakthroughs, and Cisco products and solutions, as well as network deployment and
troubleshooting tips, configuration examples, customer case studies, certification and training
information, and links to scores of in-depth online resources. You can access Packet magazine at
this URL:
http://www.cisco.com/packet
• iQ Magazine is the quarterly publication from Cisco Systems designed to help growing companies
learn how they can use technology to increase revenue, streamline their business, and expand
services. The publication identifies the challenges facing these companies and the technologies to
help solve them, using real-world case studies and business strategies to help readers make sound
technology investment decisions. You can access iQ Magazine at this URL:
http://www.cisco.com/go/iqmagazine


• Internet Protocol Journal is a quarterly journal published by Cisco Systems for engineering
professionals involved in designing, developing, and operating public and private internets and
intranets. You can access the Internet Protocol Journal at this URL:
http://www.cisco.com/ipj
• World-class networking training is available from Cisco. You can view current offerings at
this URL:
http://www.cisco.com/en/US/learning/index.html

CHAPTER 1
Architecture Overview

The Cisco IP Contact Center (IPCC) solution consists of four primary Cisco software components:
• IP Communications infrastructure: Cisco CallManager
• Queuing and self-service: Cisco IP Interactive Voice Response (IP IVR) or Cisco Internet Service
Node (ISN)
• Contact center routing and agent management: Cisco Intelligent Contact Management (ICM)
Software
• Agent desktop software: Cisco Agent Desktop or Cisco Toolkit Desktop (CTI Object Server)
In addition to these core components, the following Cisco hardware products are required for a complete
IPCC deployment:
• Cisco IP Phones
• Cisco voice gateways
• Cisco LAN/WAN infrastructure
Once deployed, IPCC provides an integrated Automatic Call Distribution (ACD), IVR, and Computer
Telephony Integration (CTI) solution.
The following sections discuss each of the software products in more detail and describe the data
communications between each of these components. For more information on a particular product, refer
to the specific product documentation available online at
http://www.cisco.com

Cisco CallManager
Cisco CallManager, when combined with the appropriate LAN/WAN infrastructure, voice gateways,
and IP phones, provides the foundation for a Voice over IP (VoIP) solution. Cisco CallManager is a
software application that runs on Intel Pentium-based servers running Microsoft Windows Server
operating system software and Microsoft SQL Server relational database management software. The
Cisco CallManager software running on a server is referred to as a Cisco CallManager server. Multiple
Cisco CallManager servers can be grouped into a cluster to provide for scalability and fault tolerance.
For details on Cisco CallManager call processing capabilities and clustering options, refer to the
Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd


A single Cisco CallManager server is capable of supporting hundreds of agents. In a fault-tolerant
design, a Cisco CallManager cluster of servers is capable of supporting thousands of agents. However,
the number of agents and the number of busy hour call attempts (BHCA) supported within a cluster
varies and must be sized according to guidelines defined in the Cisco IP Telephony Solution Reference
Network Design (SRND) guide.
Typically when designing an IPCC solution, you first define the deployment scenario, including arrival
point(s) for voice traffic and the location(s) of the contact center agents. After defining the deployment
scenario, you can determine the sizing of the individual components within the IPCC design for such
things as how many Cisco CallManager servers are needed within a Cisco CallManager cluster, how
many voice gateways are needed for each site and for the entire enterprise, how many servers and what
types of servers are required for the ICM software, how many IP IVR/ISN servers are needed, and so
forth.

Cisco Internet Service Node (ISN)


The Cisco Internet Service Node (ISN) provides prompting, collecting, queuing, and call control
services using standard web-based technologies. The ISN architecture is distributed, fault tolerant, and
highly scalable. With the Cisco ISN system, voice is terminated on Cisco IOS® gateways that interact
with the ISN application server (Microsoft Windows Server) using VXML (speech) and H.323 (call
control). The ISN software is tightly integrated with the Cisco ICM software for application control. The
ICM scripting environment controls the execution of "building block" functions such as play media, play
data, menu, and collect information. The ICM script can also invoke external VXML applications via
ISN. External VXML applications can return results and control to the ICM script when complete.
Advanced load balancing can be provided by Cisco Content Services Switches (CSS).
The Cisco ISN can support multiple grammars for prerecorded announcements. The ISN can optionally
provide automatic speech recognition and text-to-speech capability. The Cisco ISN can also access
customer databases and applications via the Cisco ICM software.
The Cisco ISN also provides a queuing platform for the IPCC solution. Telephone calls can remain
queued on Cisco ISN until they are routed to a contact center agent (or external system). The Cisco ISN
can transfer calls using any of the following methods:
• IP Switching
The ISN directs the ingress Cisco IOS voice gateway to redirect the call's packet voice path to a new
IP-based destination, but the ISN retains signaling control over the call so that it can bring the call
back for additional treatment or transfer it to additional destinations. With this transfer method, a
PSTN port is used on the ingress gateway for the life of the call. An additional PSTN port is also
required on an egress gateway if the call is transferred to a PSTN-based destination.
• Outpulse Transfer
The ISN sends DTMF digits to the PSTN via the ingress gateway to make the PSTN carrier
disconnect the call from the gateway and route it elsewhere via the PSTN. This method eliminates
consumption of ports on the gateway for the rest of the call, but it might not be available with all
carriers.
• IN Transfer
Cisco ICM software temporarily routes PSTN-originated calls to ISN via a Cisco IOS Voice
Gateway. When ISN treatment is finished, the Cisco ICM removes the call from ISN and routes it
elsewhere over the PSTN. This method eliminates consumption of ports on the gateway for the rest
of the call, but it might not be available with all carriers.
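As a rough way to compare these methods, the gateway-port behavior described above can be modeled in a few lines. This is an illustrative sketch only; the function and method names below are assumptions made for this example and are not ISN configuration or API names:

```python
from enum import Enum

class TransferMethod(Enum):
    IP_SWITCHING = "ip-switching"
    OUTPULSE = "outpulse-transfer"
    IN_TRANSFER = "in-transfer"

def gateway_ports_held(method: TransferMethod, egress_is_pstn: bool = False) -> int:
    """Return how many gateway PSTN ports stay in use for the life of the
    call after the transfer, per the behavior described above.

    IP switching keeps the ingress port for the life of the call (plus an
    egress port when the call goes back out to the PSTN); outpulse and IN
    transfers hand the call back to the carrier, freeing the gateway.
    """
    if method is TransferMethod.IP_SWITCHING:
        return 2 if egress_is_pstn else 1
    return 0  # OUTPULSE and IN_TRANSFER release all gateway ports

# Compare the port cost of each method for a call transferred to an agent:
print(gateway_ports_held(TransferMethod.IP_SWITCHING))                       # 1
print(gateway_ports_held(TransferMethod.IP_SWITCHING, egress_is_pstn=True))  # 2
print(gateway_ports_held(TransferMethod.OUTPULSE))                           # 0
```

This kind of back-of-the-envelope model can help when sizing ingress gateways: with IP switching, every queued-and-transferred call holds its ingress port until the caller hangs up.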


Call reporting uses the IPCC reporting infrastructure, which provides both standard reports and a
development environment for custom reporting via Cisco Consulting Engineering Services or a certified
reseller.

Cisco IP Interactive Voice Response (IP IVR)


The IP IVR provides prompting, collecting, and queuing capability for the IPCC solution. The IP IVR
is under the control of the ICM software via the Service Control Interface (SCI). When an agent becomes
available, the ICM software instructs the IP IVR to transfer the call to the selected agent phone. The
IP IVR then requests Cisco CallManager to transfer the call to the selected agent phone.
Cisco IP IVR is a software application that runs on Intel Pentium-based servers running Microsoft
Windows Server operating system software and Microsoft SQL Server relational database management
software. Each IP IVR server is capable of supporting over 100 logical IP IVR ports, and multiple
IP IVR servers can be deployed in a single Cisco CallManager cluster.
The IP IVR has no physical telephony trunks or interfaces like a traditional IVR. The telephony trunks
are terminated at the voice gateway. Cisco CallManager provides the call processing and switching to
set up a G.711 Real-Time Transport Protocol (RTP) stream from the voice gateway to the IP IVR. The
IP IVR communicates with Cisco CallManager via the Java Telephony Application Programming
Interface (JTAPI), and the IP IVR communicates with the ICM via the Service Control Interface (SCI).
The chapter on Sizing Call Center Resources, page 4-1 discusses how to determine the number of IVR
ports required. For deployments requiring complete fault tolerance, a minimum of two IP IVRs is
required. The chapter on Design Considerations for High Availability, page 3-1, provides details on
IPCC fault tolerance.
A lower-cost licensing option of the IP IVR is called the IP Queue Manager. The IP Queue Manager
provides a subset of the IP IVR capability. The database, Java, and HTTP subsystems are not included in
the IP Queue Manager software license. The IP Queue Manager provides an integrated mechanism for
prompting and collecting input from callers and for playing queuing announcements. The sizing for IP
Queue Manager and IP IVR is the same.

Note Because the IP IVR and IP Queue Manager are so similar, the remainder of this document refers to the
IP IVR only.

Cisco Intelligent Contact Management (ICM) Software


The Cisco ICM software provides contact center features in conjunction with Cisco CallManager.
Features provided by the ICM software include agent state management, agent selection, call routing and
queue control, IVR control, screen pops, and contact center reporting. ICM software runs on Intel
Pentium servers running Microsoft Windows 2000 operating system software and Microsoft SQL Server
database management software. The supported Pentium servers can be single, dual, or quad Pentium
CPU servers with varying amounts of RAM. This variety of supported servers allows the ICM software
to scale and to be sized to meet the needs of the deployment requirements. The chapter on Sizing IPCC
Components and Servers, page 5-1, provides details on server sizing.

Cisco IP Contact Center Enterprise Edition SRND


OL-7279-04 1-3
Chapter 1 Architecture Overview
Cisco Intelligent Contact Management (ICM) Software

Basic IPCC Call and Message Flow


Figure 1-1 shows the flow of a basic IPCC call. In this scenario, all of the agents are assumed to be "not
ready" when the call arrives, so the call is routed by the ICM to the IP IVR. While the call is connected
to the IP IVR, call queuing treatment (announcements, music, and so on) is provided. When an agent
becomes available, the ICM directs the IP IVR to transfer the call to that agent's phone. At the same time
the call is being transferred, the ICM sends the caller data such as Automatic Number Identification
(ANI) and Directory Number (DN) to the agent desktop software.

Figure 1-1 Basic IPCC Call Flow

[Figure 1-1 depicts a call arriving from the public network at a voice gateway and flowing through the CallManager cluster, the IP IVRs, the ICM, and the IP phones and agent desktops. Numbered arrows correspond to the call flow steps listed below, and line styles distinguish TDM voice, IP voice, and call control/CTI data.]

The call flow in Figure 1-1 is as follows:


1. Call delivered from PSTN to voice gateway.
2. MGCP or H.323 Route Request sent to Cisco CallManager.
3. JTAPI Route Request sent to ICM.
4. ICM runs routing script. No available agent found, so IP IVR label returned from routing script.
5. ICM instructs Cisco CallManager to transfer call to IP IVR, and Cisco CallManager does as
instructed.
6. IP IVR notifies ICM that call has arrived.
7. ICM instructs IP IVR to play queue announcements.
8. Agent becomes ready (completed previous call or just went ready).
9. ICM sends call data to selected agent screen and instructs the IP IVR to transfer the call to the agent
phone.
10. IP IVR transfers the VoIP voice path to selected agent phone.
11. Call is answered by agent.
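The routing decision at the heart of steps 4 through 9 can be sketched as a single function: return an agent's extension when one is ready, otherwise return the IP IVR label so the call receives queuing treatment. The names below are illustrative, not actual ICM APIs; the real logic is built graphically in an ICM routing script.

```python
# Illustrative sketch only: the ICM routing decision of steps 4-9.
# If no agent is ready, the call goes to the IP IVR for queuing
# treatment; once an agent is ready, the call goes to that agent.

def route_call(ready_agents, ivr_label="IVR_QUEUE"):
    """Return the label the routing client should use for this call."""
    if ready_agents:
        return ready_agents[0]   # step 9: transfer to the selected agent
    return ivr_label             # step 4: no agent, queue at the IP IVR

# Step 4: the call arrives while every agent is busy.
first = route_call(ready_agents=[])          # -> "IVR_QUEUE"
# Steps 8-9: the agent at extension 1234 becomes ready.
second = route_call(ready_agents=["1234"])   # -> "1234"
```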


ICM Software Modules


The Cisco ICM software is a collection of modules that can run on multiple servers. The amount of
software that can run on one server is primarily based upon busy hour call attempts (BHCA) and the size
of the server being used (single, dual, or quad CPU). Other factors that impact the hardware sizing are
the number of agents, the number of skills per agent, the number of IP IVR ports, the number of VRU
Script nodes in the ICM routing script, and which statistics agents need at their desktops.
The core ICM software modules are:
• Router
• Logger
• Cisco CallManager Peripheral Interface Manager (PIM)
• IP IVR PIM
• CTI Server
• CTI OS Server
• Administrative Workstation (AW)
The Router is the module that makes all routing decisions on how to route a call or customer contact.
The Logger is the database module that stores contact center configuration and reporting data. The
Cisco CallManager PIM is the module that interfaces to a Cisco CallManager cluster via the JTAPI
protocol. The IP IVR PIM is the module that interfaces to the IP IVR/ISN via the Service Control
Interface (SCI) protocol. The CTI Server is the module that provides the CTI message interface between
the PG and desktop applications, and the CTI OS Server is the module that builds on the CTI Server to
interface to the IPCC Agent Desktop application.
Each ICM software module can be deployed in a redundant fashion. When a module is deployed in a
redundant fashion, we refer to the two sides as side A and side B. For example, Router A and Router B
are redundant instances of the Router module (process) running on two different servers. This redundant
mode is referred to as a "duplex" configuration, whereas a non-redundant process is said to be running
in "simplex" mode. When processes are running in a duplex configuration, they are not load balanced.
The A and B sides are both executing the same set of messages and, therefore, producing the same result.
In this configuration, logically, there appears to be only one Router.
Most of the ICM software modules support multiple logical instances for scalability. The only software
modules that do not support multiple instances are the Router and Logger. The Router and Logger
combined are often referred to as the Central Controller. The Central Controller is the hub of the
hub-and-spoke architecture for IPCC. By having only one central controller, the IPCC system has only
one "enterprise view" of all the agents and contacts in queue. This enterprise view enables IPCC to
provide a single logical ACD controlling contacts and agents that can be distributed among many sites.
When the Router and Logger modules run on the same server, the server is referred to as a Rogger.
For each Cisco CallManager cluster in your IPCC environment, you need a Cisco CallManager PIM. For
each Cisco CallManager PIM, you need one CTI Server and one CTI OS Server to communicate with
the desktops associated with the phones for that Cisco CallManager cluster. For each IP IVR, you need
one IP IVR PIM. The server that runs the Cisco CallManager PIM, the CTI Server, the CTI OS Server,
and the IP IVR PIMs, is referred to as a Peripheral Gateway (PG). Often, the Cisco CallManager PIM,
the CTI Server, the CTI OS Server, and multiple IP IVR PIMs will run on the same server. Internal to
the PG is a process called the PG Agent, which communicates from the PG to the Central Controller.
Another internal PG process is the Open Peripheral Controller (OPC), which enables the other processes
to communicate with each other and is also involved in synchronizing PGs in redundant PG
deployments. Figure 1-2 shows the communications among the various PG software processes.


Figure 1-2 Communications Among Peripheral Gateway Software Processes

[Figure 1-2 shows the processes on a PG server (PG 1): the PG Agent (which communicates with the ICM central controller), the CTI OS server (which communicates with the IPCC agent desktops), the CTI server, the CCM PIM (which communicates via JTAPI with the CallManager cluster and its IP phones), and the IVR PIMs (which communicate via SCI with IP IVR 1 and IP IVR 2), all interconnected through OPC. Line styles distinguish IP voice, TDM voice, and CTI/call control data.]

In larger, multi-site (multi-cluster) environments, multiple PGs are usually deployed. Each
Cisco CallManager cluster requires a co-located PG. When multiple Cisco CallManager clusters are
deployed, the ICM software makes them all appear to be part of one logical enterprise-wide contact
center with one enterprise-wide queue.

IPCC Components, Terminology, and Concepts


This section describes the major components and concepts employed in an IPCC solution.

IPCC Agent Desktop


The IPCC agent desktop application is the agent interface that enables the agent to perform agent state
control (login, logout, ready, not ready, wrap up, and so forth) and call control (answer, release, hold,
retrieve, transfer, conference, make call, and so forth). All call control must be done via the agent
desktop application.
The agent desktop has a media-terminated "softphone" option that allows you to eliminate the need for
a hardware IP Phone. (See Figure 1-3. Do not confuse the agent desktop softphone option with other
softphone applications such as the Cisco IP SoftPhone.) When using the agent desktop softphone option,
a headset is connected to the PC, which will encode/decode the VoIP packets and send/receive those
packets to/from the LAN.


Figure 1-3 Cisco Agent Desktop

There are three kinds of agent and supervisor desktop options available:
• Cisco Agent Desktop, an out-of-the-box agent desktop (shown in Figure 1-3).
• Cisco Toolkit Desktop, which is a software development toolkit built on the CTI Object Server
(CTI OS). The Cisco Toolkit Desktop is implemented for custom desktops or desktops integrated
with other applications (for example, a customer database application).
• Embedded CRM desktops such as the Cisco Siebel Desktop. The Cisco Siebel Desktop is an IPCC
agent desktop that is fully embedded within the Siebel desktop application. The Cisco Siebel
Desktop is offered by Cisco, and a number of other embedded CRM desktops are available from
Cisco partners.
In addition to an agent desktop, a supervisor desktop is available with each of these options.
The chapter on Agent Desktop and Supervisor Desktop, page 7-1, covers details on desktop selection
and design considerations.

Administrative Workstation
The Administrative Workstation (AW) provides a collection of administrative tools for managing the
ICM software configuration. The two primary configuration tools on the AW are the Configuration
Manager and the Script Editor. The Configuration Manager tool is used to configure the ICM database
to add agents, add skill groups, assign agents to skill groups, add dialed numbers, add call types, assign
dialed numbers to call types, assign call types to ICM routing scripts, and so forth. The Script Editor tool
is used to build ICM routing scripts. ICM routing scripts specify how to route and queue a contact (that
is, the script identifies which agent should handle a particular contact).
For details on the use of these tools, refer to the IP Contact Center Administration Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/ipccente/index.htm
The AW is the only software module that must run on a separate server from all of the other IPCC
software modules. An ICM installation supports an unlimited number of AWs, which can be deployed
co-located with or remote from the ICM Central Controller. Each AW is independent of other AWs, and
redundancy is provided by deploying multiple AWs.
Some AWs communicate directly with the ICM Central Controller, and they are called distributor AWs.
An ICM deployment must have at least one distributor AW. Additional AWs (distributors or clients) are
also allowed for redundancy (primary and secondary distributors) or for additional access by the AW
clients in a site. At any additional site, at least one distributor and any number of client AWs can be
deployed; however, client AWs should always be local to their AW distributor.
Client AWs communicate with a distributor AW to view and modify the ICM Central Controller
database and to receive real-time reporting data. Distributor AWs relieve the Central Controller (the
real-time call processing engine) from the task of constantly distributing real-time contact center data to
the client AWs.


AWs can be installed with two software options:


• Historical Data Server (HDS)
• WebView Server
The Historical Data Server (HDS) option provides a replicated copy of the historical reporting data. The
HDS must be installed with a distributor AW. The WebView Server option provides browser-based
reporting. This option enables reporting to be done from any computer with a browser. The WebView
Server must be installed on a distributor AW with HDS. The net effect of adding the WebView Server
is to turn the distributor AW into a web server.
The reason for requiring the AW to run on a separate server for production systems is to ensure that
complex reporting queries do not interrupt the real-time call processing of the Router and Logger
processes. For lab or prototype systems, the AW (with the WebView Server option) can be installed on
the same server as the Router and Logger. If the AW is installed on the same server as the Logger, then
HDS is no longer required because a complete copy of the Logger database is already present on the
server.
For more details on the design and configuration of the AWs, refer to the ICM product documentation
available online at Cisco.com.

JTAPI Communications
In order for JTAPI communications to occur between Cisco CallManager and external applications such
as the IPCC and IP IVR, a JTAPI user ID and password must be configured within Cisco CallManager.
Upon startup of the Cisco CallManager PIM or upon startup of the IP IVR, the JTAPI user ID and
password are used to log in to Cisco CallManager. This login process by the application (Cisco
CallManager PIM or IP IVR) establishes the JTAPI communications between the Cisco CallManager
cluster and the application. A single JTAPI user ID is used for all communications between the entire
Cisco CallManager cluster and the ICM. A separate JTAPI user ID is also required for each IP IVR
server. In an IPCC deployment with one Cisco CallManager cluster and two IP IVRs, three JTAPI user
IDs are required: one JTAPI user ID for the ICM application and two JTAPI user IDs for the two IP
IVRs.
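The tally in this example (one ID for the ICM per cluster, plus one per IP IVR server) can be sketched as follows. The identifiers are hypothetical; the actual user IDs are whatever names you configure in Cisco CallManager administration.

```python
# Illustrative tally of JTAPI user IDs for an IPCC deployment:
# one ID for the ICM (CallManager PIM), plus one per IP IVR server.
# The names are hypothetical placeholders, not required values.

def jtapi_user_ids(ivr_names):
    ids = ["icm_jtapi_user"]                            # one for the ICM
    ids += [f"{ivr}_jtapi_user" for ivr in ivr_names]   # one per IP IVR
    return ids

# One CallManager cluster plus two IP IVRs -> three JTAPI user IDs.
ids = jtapi_user_ids(["ipivr1", "ipivr2"])
```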
The Cisco CallManager software includes a module called the CTI Manager, which is the layer of
software that communicates via JTAPI to applications such as the ICM and IP IVR. Every node within
a cluster can execute an instance of the CTI Manager process, but the Cisco CallManager PIM on the
PG communicates with only one CTI Manager (and thus one node) in the Cisco CallManager cluster.
The CTI Manager process communicates CTI messages to/from other nodes within the cluster. For
example, suppose a deployment has a voice gateway homed to node 1 in a cluster, and node 2 executes
the CTI Manager process that communicates to the ICM. When a new call arrives at this voice gateway
and needs to be routed by the ICM, node 1 sends an intra-cluster message to node 2, which will send a
route request to the ICM to determine how the call should be routed.
Each IP IVR also communicates with only one CTI Manager (or node) within the cluster. The Cisco
CallManager PIM and the two IP IVRs from the previous example could each communicate with
different CTI Managers (nodes) or they could all communicate with the same CTI Manager (node).
However, each communication uses a different user ID. The user ID is how the CTI Manager keeps track
of the different applications.
When the Cisco CallManager PIM is redundant, only one side is active and in communication with the
Cisco CallManager cluster. Side A of the Cisco CallManager PIM communicates with the CTI Manager
on one Cisco CallManager node, and side B of the Cisco CallManager PIM communicates with the CTI
Manager on another Cisco CallManager node. The IP IVR does not have a redundant side, but the IP IVR


does have the ability to fail over to another CTI Manager (node) within the cluster if its primary CTI
Manager is out of service. For more information on failover, refer to the chapter on Design
Considerations for High Availability, page 3-1.
The JTAPI communications between the Cisco CallManager and IPCC include three distinct types of
messaging:
• Routing control
Routing control messages provide a way for Cisco CallManager to request routing instructions from
IPCC.
• Device and call monitoring
Device monitoring messages provide a way for Cisco CallManager to notify IPCC about state
changes of a device (IP phone) or a call.
• Device and call control
Device control messages provide a way for Cisco CallManager to receive instructions from IPCC
on how to control a device (IP phone) or a call.
A typical IPCC call includes all three types of JTAPI communication within a few seconds. For example,
when a new call arrives, Cisco CallManager requests routing instructions from the ICM. When Cisco
CallManager receives the routing response from the ICM, Cisco CallManager attempts delivery of the
call to the agent phone by instructing the phone to begin ringing. At that point, Cisco CallManager
notifies the ICM that the device (IP phone) has started ringing, and that notification enables the agent’s
answer button on the desktop application. When the agent clicks the answer button, the ICM instructs
Cisco CallManager to make the device (IP phone) go off-hook and answer the call.
In order for the routing control communication to occur, Cisco CallManager requires the configuration
of a CTI Route Point. A CTI Route Point is associated with a specific JTAPI user ID, and this association
enables Cisco CallManager to know which application provides routing control for that CTI Route Point.
Directory (Dialed) Numbers (DNs) are then associated with the CTI Route Point. When a DN is associated
with a CTI Route Point that is associated with the ICM JTAPI user ID, Cisco CallManager is able
to generate a route request to the ICM when a new call to that DN arrives.
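The association chain just described (DN to CTI Route Point to JTAPI user) can be sketched as a simple lookup. The data here is hypothetical; in a real deployment these associations live in the Cisco CallManager configuration database, not in code.

```python
# Hypothetical configuration data illustrating how CallManager decides
# which application to ask for routing instructions: the dialed DN maps
# to a CTI Route Point, and the Route Point maps to a JTAPI user ID.

CTI_ROUTE_POINTS = {"RP_Sales": "icm_jtapi_user"}    # route point -> JTAPI user
DN_TO_ROUTE_POINT = {"8005551234": "RP_Sales"}       # DN -> route point

def routing_application(dn):
    """Return the JTAPI user (application) that routes calls to this DN."""
    rp = DN_TO_ROUTE_POINT.get(dn)
    if rp is None:
        return None   # DN not ICM-controlled; CallManager routes it itself
    return CTI_ROUTE_POINTS[rp]

app = routing_application("8005551234")   # -> "icm_jtapi_user"
```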
In order for the IP phones (devices) to be monitored and controlled, they also must be associated in Cisco
CallManager with a JTAPI user ID. In an IPCC environment, the IP phones are associated with the ICM
JTAPI user ID. When an agent logs in from the desktop, the Cisco CallManager PIM requests Cisco
CallManager to allow the PIM to begin monitoring and controlling that device (IP phone). Until the
login has occurred, Cisco CallManager does not allow the ICM to monitor or control that IP phone. If
the device has not been associated with the ICM JTAPI user ID, then the agent login request will fail.
Because the IP IVR also communicates with Cisco CallManager using the same JTAPI protocol, these
same three types of communication also occur with the IP IVR. Unlike the ICM, the IP IVR provides
both the application itself and the devices to be monitored and controlled.
The devices that the ICM monitors and controls are the physical IP phones. The IP IVR does not have
real physical ports like a traditional IVR. Its ports are logical ports (independent software tasks or
threads running on the IP IVR application server) called CTI Ports. For each CTI Port on the IP IVR,
there needs to be a CTI Port device defined in Cisco CallManager.
Unlike a traditional PBX or telephony switch, Cisco CallManager does not select the IP IVR port to
which it will send the call. Instead, when a call needs to be made to a DN that is associated with a CTI
Route Point that is associated with an IP IVR JTAPI user, Cisco CallManager asks the IP IVR (via JTAPI
routing control) which CTI Port (device) should handle the call. Assuming the IP IVR has an available
CTI Port, the IP IVR will respond to the Cisco CallManager routing control request with the Cisco
CallManager device identifier of the CTI Port that is going to handle that call.


When an available CTI Port is allocated to the call, an IP IVR workflow is started within the IP IVR.
When the IP IVR workflow executes the accept step, a JTAPI message is sent to Cisco CallManager to
answer the call on behalf of that CTI Port (device). When the IP IVR workflow wants the call transferred
or released, it again instructs Cisco CallManager what it would like done with that call. These
scenarios are examples of device and call control performed by the IP IVR.
When a caller releases the call while interacting with the IP IVR, the voice gateway detects the caller
release and notifies Cisco CallManager via H.323 or Media Gateway Control Protocol (MGCP), which
then notifies the IP IVR via JTAPI. When DTMF tones are detected by the voice gateway, it notifies
Cisco CallManager via H.245 or MGCP, which then notifies the IP IVR via JTAPI. These scenarios are
examples of device and call monitoring performed by the IP IVR.
In order for the CTI Port device control and monitoring to occur, the CTI Port devices on Cisco
CallManager must be associated with the appropriate IP IVR JTAPI user ID. If you have two 150-port
IP IVRs, you would have 300 CTI ports. Half of the CTI ports (150) would be associated with JTAPI
user IP IVR #1, and the other 150 CTI ports would be associated with JTAPI user IP IVR #2.
While Cisco CallManager can be configured to route calls to IP IVRs on its own, routing of calls to the
IP IVRs in an IPCC environment should be done by the ICM (even if you have only one IP IVR and all
calls require an initial IVR treatment). Doing so will ensure proper IPCC reporting. For deployments
with multiple IP IVRs, this routing practice also allows the ICM to load-balance calls across the multiple
IP IVRs.

ICM Routing Clients


An ICM routing client is anything that can generate a route request to the ICM Central Controller. The
Cisco CallManager PIM (representing the entire Cisco CallManager cluster) and each IP IVR/ISN PIM
are routing clients. Routing clients generate route requests to the ICM Central Controller. The ICM
Central Controller then executes a routing script and returns a routing label to the routing client. A
redundant PIM is viewed as a single logical routing client, and only one side of a PIM is active at any
point in time. In an IPCC deployment with one Cisco CallManager cluster (with any number of nodes)
and two IP IVRs, three routing clients are required: the Cisco CallManager PIM and the two IP IVR/ISN
PIMs.
The public switched telephone network (PSTN) can also function as a routing client. The ICM supports
a software module called a Network Interface Controller (NIC), which enables the ICM to control how
the PSTN routes a call. Intelligently routing a call before the call is delivered to any customer premise
equipment is referred to as pre-routing. Only certain PSTNs have NICs supported by the ICM. A detailed
list of PSTN NICs and details on ICM pre-routing can be found in the standard ICM product
documentation available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/
Other applications such as the Cisco Media Blender, the Cisco Collaboration Server, and the Cisco
E-Mail Manager can also function as routing clients to allow the ICM to become a multi-channel contact
routing engine. Details of currently available multi-channel routing are available on Cisco.com.

Device Targets
Each IP phone must be configured in the ICM Central Controller database as a device target. Only one
extension on the IP phone may be configured as an ICM device target. Additional extensions may be
configured on the IP phone, but those extensions will not be known to the ICM software and, thus, no
monitoring or control of those additional extensions is possible. The ICM provides call treatment for
Reroute On No Answer (RONA); therefore, it is not necessary to configure call forwarding on


ring-no-answer in the Cisco CallManager configuration for the IP phones. Unless call center policy
permits warm (agent-to-agent) transfers, the IPCC extension also should not be published or dialed by
anyone directly, and only the ICM software should route calls to this IPCC phone extension.
At agent login, the agent ID and IP phone extension are associated, and this association is released when
the agent logs out. This feature allows the agent to log in at another phone and allows another agent to
log in at that same phone. At agent login, the Cisco CallManager PIM requests Cisco CallManager to
allow it to begin monitoring that IP phone and to provide device and call control for that IP phone. As
mentioned previously, each IP phone must be mapped to the ICM JTAPI user ID in order for the agent
login to be successful.

Labels
Labels are the response to a route request from a routing client. The label is a pointer to the destination
where the call is to be routed (basically, the number to be dialed by the routing client). Many labels in
an IPCC environment correspond to the IPCC phone extensions so that Cisco CallManager and IP IVR
can route or transfer calls to the phone of an agent who has just been selected for a call.
Often, the way a call is routed to a destination depends upon where the call originated and where it is
being terminated. This is why IPCC uses labels. For example, suppose we have an environment with two
regionally separated Cisco CallManager clusters, Site 1 and Site 2. An IP phone user at Site 1 will
typically just dial a four-digit extension to reach another IP phone user at Site 1. In order to reach an IP
phone user at Site 2 from Site 1, users might have to dial a seven-digit number. To reach an IP phone
user at either site from a PSTN phone, users might have to dial a 10-digit number. From this example,
we can see how a different label would be needed, depending upon where the call is originating and
terminating.
Each combination of device target and routing client must have a label. For example, a device target in
an IPCC deployment with a two-node Cisco CallManager cluster and two IP IVRs will require three
labels. If you have 100 device targets (IP phones), you would need 300 labels. If there are two regionally
separated Cisco CallManager clusters, each with two IP IVRs and 100 device targets per site, then we
would need 1200 labels for the six routing clients and 200 device targets (assuming we wanted to be able
to route a call from any routing client to any device target). If calls are to be routed to device targets only
at the same site as the routing client, then we would need only 600 labels (three routing clients to 100
device targets, and then doubled for Site 2).
Labels are also used to route calls to IP IVR CTI Ports. Details on configuring labels are provided in the
IPCC Installation Guide, available on Cisco.com. A bulk configuration tool is also available to simplify
the configuration of the labels.
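The label arithmetic in the examples above reduces to multiplying routing clients by device targets per routing scope. A minimal sketch reproducing those counts:

```python
# Sketch of the label arithmetic: every combination of device target
# (IP phone) and routing client needs a label, so the count is simply
# (routing clients) x (device targets) within each routing scope.

def labels_needed(routing_clients, device_targets):
    return routing_clients * device_targets

# One cluster + two IP IVRs = 3 routing clients; 100 phones:
single_site = labels_needed(3, 100)      # 300 labels
# Two such sites, any client can reach any target (6 clients, 200 targets):
full_mesh = labels_needed(6, 200)        # 1200 labels
# Two sites, calls stay local (3 clients x 100 targets at each site):
local_only = 2 * labels_needed(3, 100)   # 600 labels
```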

Agent Desk Settings


Agent Desk Settings provide a profile that specifies parameters such as whether auto-answer is enabled,
how long to wait before rerouting a call for Ring No Answer, what DN to use in the rerouting, and
whether reason codes are needed for logging out and going not-ready. Each agent must be associated
with an agent desk setting profile in the ICM configuration. A single agent desk setting profile can be
shared by many agents. Changes made to an agent’s desk setting profile while the agent is logged in are
not activated until the agent logs out and logs in again.


Agents
Agents are configured within the ICM and are associated with one specific Cisco CallManager PIM (that
is, one Cisco CallManager cluster). Within the ICM configuration, you also configure the password for
the agent to use at login.

Skill Groups
Skill groups are configured within the ICM so that agents with similar skills can be grouped together.
Agents can be associated with one or more skill groups. Changes made to an agent’s skill group
association while the agent is logged in are not activated until the agent logs out and logs in again.
Skill groups are associated with a specific Cisco CallManager PIM. Skill groups from multiple PIMs
can be grouped into Enterprise Skill Groups. Creating and using Enterprise Skill Groups can simplify
routing and reporting in some scenarios.

Directory (Dialed) Numbers and Routing Scripts


In order for Cisco CallManager to generate a route request to the ICM, Cisco CallManager must
associate the DN with a CTI Route Point that is associated with the ICM JTAPI User. The DN must also
be configured in the ICM. Once the ICM receives the route request with the DN, that DN is mapped to
an ICM Call type, which is then mapped to an ICM routing script.

Agent Login and State Control


Agents log in to IPCC from their IPCC agent desktop application. When logging in, the agent is
presented with a dialog box that prompts for agent ID, password, and the IPCC phone extension to be
used for this login session. It is at login time that the agent ID, phone extension (device target), agent
desk setting profile, skills, and desktop IP address are all dynamically associated. The association is
released upon agent logout.

IPCC Routing
The example routing script in Figure 1-4 illustrates how IPCC routes calls. In this routing script, the
Cisco CallManager PIM (or cluster) is the routing client. Upon receipt of the route request, the ICM
maps the DN to a call type and then maps the call type to this routing script. In this routing script, the
ICM router first uses a Select node to look for the Longest Available Agent (LAA) in the BoatSales skill
group on the CCM_PG_1 peripheral (or cluster). The ICM router determines that agent 111 is the LAA.
Agent 111 is currently logged in from device target 1234 (Cisco CallManager phone extension 1234 in
this scenario). The ICM router then determines the label to be returned, based upon the device target and
routing client combination. The appropriate label is then returned to the routing client (Cisco
CallManager cluster) so that the call can be routed properly to that IP Phone (device target).
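The Select node's Longest Available Agent rule can be sketched as picking the ready agent with the longest idle time. This is only an illustration of the selection criterion, not the ICM Router implementation.

```python
# Sketch of the Longest Available Agent (LAA) rule used by the Select
# node: among ready agents in the skill group, pick the one that has
# been idle the longest. Illustrative only.

def longest_available_agent(ready_agents):
    """ready_agents: dict of agent_id -> seconds the agent has been idle."""
    if not ready_agents:
        return None   # no agent available: the script queues the call instead
    return max(ready_agents, key=ready_agents.get)

# Agent 111 has been idle longer than agent 222, so 111 is selected.
laa = longest_available_agent({"111": 42.0, "222": 7.5})   # -> "111"
```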


Figure 1-4 Routing Script Example

[Figure 1-4 shows a route request (DN, ANI, CED) arriving at the ICM from the CallManager cluster. The ICM maps agent ID 111 to device target 1234, then looks up the label by device target and routing client: device target 1234 has label 1234 for each of the routing clients (CM cluster, IPIVR 1, and IPIVR 2). The route response is returned to the CallManager cluster.]
Translation Routing and Queuing
If no agents are available, then the router exits the Select node and transfers the call to an IP IVR to begin
queuing treatment. The transfer is completed using the Translation Route to VRU node. The Translation
Route to VRU node returns a unique translation route label to the original routing client, the Cisco
CallManager cluster. The translation route label will equal a DN configured in Cisco CallManager. In
Cisco CallManager, that DN is mapped to a CTI Route Point that is associated with the JTAPI user for
the IP IVR to which the call is being transferred.
Cisco CallManager and IP IVR will execute the JTAPI routing control messaging to select an available
CTI Port.
When the call is successfully transferred to the IP IVR, the IP IVR translation routing application first
sends a request instruction message to the ICM via the SCI between the IP IVR and the ICM. The ICM
identifies the DN as being the same as the translation route label and is then able to re-associate this call
with the call that was previously being routed. The ICM then re-enters the routing script that was
previously being run for this call. The re-entry point is the successful exit path of the Translation Route
to VRU node. (See Figure 1-5.) At this point, the routing client has changed from the Cisco CallManager
cluster to IPIVR1.
While the call was being transferred, the routing script was temporarily paused. After the transfer to the
IP IVR is successfully completed, the IP IVR becomes the routing client for this routing script. Next the
routing script queues the call to the BoatSales skill group and then instructs the IP IVR to run a specific
queue treatment via the Run VRU Script node. Eventually agent 111 becomes available, and as in the
previous example, the label to be returned to the routing client is identified based upon the combination


of device target and routing client. Note that the routing client is now the IP IVR. The label returned
(1234) when agent 111 becomes available causes the IP IVR to transfer the call to agent 111 (at
extension 1234).
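The two lookups just described (agent to device target, then device target plus routing client to label) can be pictured as simple tables. A minimal Python sketch follows, using the values from the example above; the function name and data structures are illustrative, not ICM internals:

```python
# Sketch of the ICM label selection described above. When agent 111 becomes
# available, the label returned depends on both the device target and the
# current routing client (values mirror the example in the text).

# Agent ID -> device target (the extension/desktop the agent is logged into)
agent_device_target = {"111": "1234"}

# (device target, routing client) -> label returned to that routing client
labels = {
    ("1234", "CM Cluster"): "1234",
    ("1234", "IPIVR 1"): "1234",
    ("1234", "IPIVR 2"): "1234",
}

def label_for(agent_id, routing_client):
    """Return the label the ICM would send to the given routing client."""
    device_target = agent_device_target[agent_id]
    return labels[(device_target, routing_client)]

# After translation routing, IPIVR 1 is the routing client for this call:
print(label_for("111", "IPIVR 1"))  # -> 1234
```

Because the lookup is keyed by routing client as well as device target, the same agent is reached correctly whether the route response goes to the Cisco CallManager cluster or to either IP IVR.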

Figure 1-5 Translation Routing and Queuing

[Figure: the original route request comes from the Cisco CallManager cluster, the original routing client; after translation routing, IPIVR 1 becomes the new routing client and the route response is returned to IPIVR 1. Agent ID 111 maps to device target 1234, and device target 1234 maps to label 1234 for each routing client (CM Cluster, IPIVR 1, and IPIVR 2).]
For each combination of Cisco CallManager cluster and IP IVR, a translation route and a set of labels are
required. For example, if a deployment has one Cisco CallManager cluster and four IP IVRs, then four
translation routes and four sets of labels are required.
For deployments with multiple IP IVRs, the ICM routing script should select the IP IVR with the greatest
number of idle IP IVR ports and then translation-route the call to that specific IP IVR. If no IP IVR ports
are available, the script should execute a Busy node. If a large number of calls execute the Busy node,
you should increase your IP IVR port capacity.
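The selection logic described above can be sketched as follows; the port counts are illustrative, and returning None stands in for executing a Busy node:

```python
def select_ivr(idle_ports):
    """Pick the IP IVR with the greatest number of idle ports, as the ICM
    routing script should; return None when no ports are available (the
    script would then execute a Busy node)."""
    available = {ivr: n for ivr, n in idle_ports.items() if n > 0}
    if not available:
        return None  # no ports anywhere: Busy node
    return max(available, key=available.get)

print(select_ivr({"IPIVR 1": 3, "IPIVR 2": 7, "IPIVR 3": 0}))  # -> IPIVR 2
print(select_ivr({"IPIVR 1": 0, "IPIVR 2": 0}))                # -> None
```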

Reroute On No Answer (RONA)


When a call is routed to an agent but the agent fails to answer the call within a configurable amount of
time, the Cisco CallManager PIM for the agent who did not answer will change that agent’s state to "not
ready" (so that the agent does not get more calls) and launch a route request to find another agent. Any
call data is preserved and popped onto the next agent's desktop. If no agent is available, the call can be
sent back to the IP IVR for queuing treatment again. Again, all call data is preserved. The routing script
for this RONA treatment should set the call priority to “high” so that the next available agent is selected
for this caller. In the agent desk settings, you can set the RONA timer and the DN used to specify a
unique call type and routing script for RONA treatment.
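The RONA flow above can be summarized in a short sketch. The data structures and return values are illustrative assumptions, not actual PIM interfaces:

```python
def handle_ring_no_answer(agents, call):
    """Sketch of RONA: mark the non-answering agent not-ready, raise the
    call priority, and re-route to another agent or back to the IVR.
    Call data travels with the call in every case."""
    agents[call["ringing_agent"]] = "not_ready"  # no more calls to this agent
    call["priority"] = "high"                    # next available agent gets this caller
    ready = [a for a, state in agents.items() if state == "ready"]
    if ready:
        return ("route_to_agent", ready[0], call["call_data"])
    return ("queue_in_ivr", call["call_data"])   # back to the IP IVR for queuing

agents = {"agent_111": "ready", "agent_222": "ringing"}
call = {"ringing_agent": "agent_222", "priority": "normal",
        "call_data": {"account": "A-42"}}
print(handle_ring_no_answer(agents, call))
# -> ('route_to_agent', 'agent_111', {'account': 'A-42'})
```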

Combining IP Telephony and IPCC in the Same Cisco CallManager Cluster
It is possible for a Cisco CallManager cluster to support IP phones with both normal IP Telephony
(office) extensions and IPCC (call center) extensions. When running dual-use Cisco CallManager
clusters with both IP Telephony and IPCC extensions, be aware that the most recent Cisco CallManager
software release might not be supported in IPCC deployments until IPCC compatibility testing is
completed. Also note that many contact center environments have very stringent maintenance windows.
Because of these software and environmental constraints, it can be advantageous to keep the
Cisco CallManager clusters for IP Telephony extensions separate from the Cisco CallManager clusters
for IPCC extensions. Consider the environment where IPCC is being deployed to determine whether a
separate Cisco CallManager cluster is advantageous.

Combining IP Telephony and IPCC Extensions on the Same IP Phone


It is possible to have multiple extensions on an IP phone. In an IPCC environment, at least one of those
extensions must be dedicated to IPCC and be configured with only a single line appearance, no voice
mail, and no call forwarding. Cisco recommends that you use the bottom-most extension on the IP phone
as the IPCC extension so that, when the user lifts the handset, the IPCC extension is not selected by
default. Other extensions on the IP phone may have multiple lines or appearances and voice mail.
However, it is important to note that there is no control over, or visibility of, those IP Telephony
extensions from the IPCC Agent Desktop. Cisco recommends that any IP Telephony extensions be
forwarded to voice mail prior to logging into the IPCC, so that agents are not interrupted by IP
Telephony calls while they are working on IPCC calls. Also, prior to making any outbound calls on the
IP Telephony extension, Cisco recommends that the agents change to a "not ready" state so that IPCC
calls are not routed to them while they are on the phone.

Queuing in an IPCC Environment


Call queuing can occur in three distinct scenarios in a contact center:
• New call waiting for handling by initial agent
• Transferred call waiting for handling by a second (or subsequent) agent
• Rerouted call due to ring-no-answer, waiting for handling by an initial or subsequent agent
When planning your IPCC deployment, it is important to consider how queuing and requeuing are going
to be handled.
Call queuing in an IPCC deployment requires use of an IVR platform that supports the SCI interface to
the ICM. The Cisco IP IVR is one such platform. Cisco also offers another IVR platform, called Internet
Service Node (ISN), that can be used as a queuing point for IPCC deployments. The chapter on
Deployment Models provides considerations for deployments with ISN. Traditional IVRs can also be
used in IPCC deployments, and the chapter on Deployment Models, page 2-1, also provides
considerations for deployments with traditional IVRs.
In an IPCC environment, an IVR is used to provide voice announcements and queuing treatment while
waiting for an agent. The control over the type of queuing treatment for a call is provided by the ICM
via the SCI interface. The Run VRU Script node in an ICM routing script is the component that causes
the ICM to instruct the IVR to play a particular queuing treatment.

While the IVR is playing the queuing treatment (announcements) to the caller, the ICM waits for an
available agent of a particular skill (as defined within the routing script for that call). When an agent
with the appropriate skill becomes available, the ICM reserves that agent and then instructs the IVR to
transfer the voice path to that agent's phone.

Transfers in an IPCC Environment


Transfers are a commonly used feature in contact centers; therefore, it is very important to consider all
of the possible transfer scenarios desired for your IPCC installation. This section explains basic transfer
concepts; the transfer scenarios themselves are discussed in the chapter on Deployment Models,
page 2-1.
Transfers involve three parties: the original caller, the transferring agent, and the target agent. The
original caller is the caller that made the original call that was routed to the transferring agent. The
transferring agent is the agent requesting the transfer to the target agent. The target agent is the agent
receiving the transfer from the transferring agent. This terminology is used throughout this document
when referring to the different parties.

Note Cisco recommends that all call control (answer, release, transfer, conference, and so on) be done from
the agent desktop application.

When a transferring agent wants to transfer a call to another skill group or agent, the transferring agent
clicks on the transfer button on the IPCC Agent Desktop. A dialog box allows the transferring agent to
enter the dialed number of a skill group or agent. An alphanumeric dialed number string (such as "sales"
or "service") is also valid. The transferring agent also selects whether this transfer is to be a single-step
(blind) transfer or a consultative transfer. (Single-step transfer is the default.) The transferring agent then
clicks OK to complete (single-step) or initiate (consultative) the transfer. The transfer request message
flows from the transferring agent desktop to the CTI Server and then to the Cisco CallManager PIM.
Any call data that was delivered to the transferring agent or added by the transferring agent is sent along
with the transfer request to the Cisco CallManager PIM.

Dialed Number Plan


The Cisco CallManager PIM then attempts to match the dialed number with an entry in the Dialed
Number Plan. The ICM Dialed Number Plan (DNP) is currently administered via the Bulk Configuration
tool on the ICM Administrative Workstation (AW). Entries in the DNP are entered per peripheral (PIM),
and all DNP entries for a particular PIM are downloaded to the PIM upon PIM startup. Updates and
additions to the DNP are also sent to the PIM dynamically, and they take effect immediately and are used
for the next call being transferred. In order for the ICM to route the transfer and have all call data move
with the transfer and be saved for cradle-to-grave reporting, a match for the dialed number must be found
in the DNP for the PIM where the agent is currently logged in.
Within the DNP, fuzzy (wildcard) matching of dialed number strings is allowed. The DNP is not the
same as the Dialed Number table used by the ICM router and managed via the AW Configuration
Manager tool. The ICM router maps dialed numbers to call types, and call types are mapped to ICM
routing scripts. This is how a specific dialed number is mapped to a routing script in the ICM router. For
administration details on editing dialed numbers, call types, and routing scripts, refer to the IP Contact
Center Administration Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/ipccente/index.htm
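The matching step described above can be sketched as a first-match scan of DNP entries. Here fnmatch-style "*" wildcards stand in for the DNP's actual wildcard syntax, and all entries and field names are illustrative:

```python
from fnmatch import fnmatch

# (wildcard pattern, DNP type, post-route?, dialed number for the route request)
dnp = [
    ("1*",    "PBX",      True,  "agent_to_agent"),
    ("sales", "internal", True,  "sales"),
    ("9*",    "outbound", False, None),
]

def lookup(dialed):
    """Return the first matching DNP entry's (type, post-route, route DN),
    or None when there is no match (the transfer then proceeds without
    ICM routing, losing call data and cradle-to-grave reporting)."""
    for pattern, dnp_type, post_route, route_dn in dnp:
        if fnmatch(dialed, pattern):
            return dnp_type, post_route, route_dn
    return None

print(lookup("sales"))  # -> ('internal', True, 'sales')
print(lookup("1234"))   # -> ('PBX', True, 'agent_to_agent')
print(lookup("777"))    # -> None
```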

For help with designing a dial plan for your IPCC deployment, consult your Cisco Systems Engineer
(SE).

Dial Plan Type


Entries in the Dialed Number Plan must be configured with a dial plan type. There are six predefined
(via a list box) DNP types that correspond to the types specified in the agent desk settings profile. In
order for a call or transfer to proceed any further, the DNP type for that call must be allowed in the agent
desk setting profile used by the transferring agent. Because the Cisco CallManager calling search spaces
override any desk settings, it is best to allow all dial plan types in the agent desk settings.

Note Changes to the agent desk settings profile do not take effect until the agent logs out and logs in again.

Post Route
Entries in the Dialed Number Plan must also be configured to indicate whether a post-route is required.
For dialed numbers to be used in transfer scenarios, Cisco recommends that the post-route option be set
to Yes for transfers. When this field is set to Yes, the dialed number to be used for the route request must
be supplied in the Dialed Number column of the Dialed Number Plan Editor.

Route Request
Assuming a match is found in the DNP for the transfer, the DNP type is allowed for the transferring
agent, and the post-route option is set to Yes, the PIM logic will then generate a route request to the ICM
central controller using the dialed number specified in this same DNP entry.
Upon receipt of the route request, the ICM router matches the dialed number to a call type and executes
the appropriate routing script to find an appropriate target agent for the call. Within the routing script,
any of the call data collected so far could be used in the intelligent routing of the call. The ICM router
will determine which device target (phone extension and desktop) the agent is logged into and will then
return the label that points to that device target to the Cisco CallManager PIM.
At this point there are numerous scenarios that can occur, depending upon the type of transfer being
performed, as described in the following sections:
• Single-Step (Blind) Transfer, page 1-17
• Consultative Transfer, page 1-18

Single-Step (Blind) Transfer


A blind transfer is used when the transferring agent does not need to speak with the target agent. After
specifying a blind transfer in the transfer dialog box on the agent desktop, the transferring agent enters
a DN and clicks the Initiate Transfer button. The desktop then sends the transfer request to the Cisco
CallManager PIM. Assuming a match is found in the DNP, the DNP type is valid, and post-route is
selected, the Cisco CallManager PIM generates the route request to get a routing label and then instructs
the Cisco CallManager to perform a single-step transfer (without any further action from the transferring
agent). The transferring agent will see the call disappear from their desktop and they will transition to
the next agent state (wrap-up, ready, or not ready), depending on the agent desk settings for the
transferring agent. While the call is being placed to the target agent, the original caller is temporarily
placed on hold. When the target agent's phone begins ringing, the original caller hears the ringing
(assuming auto-answer is not enabled). The target agent receives a screen pop with all call data, and the
Answer button on their agent desktop is enabled when the phone begins ringing. Upon answering the
call, the target agent is speaking with the original caller and the transfer is then complete. If the target
agent does not answer, then RONA (reroute on no answer) call rerouting logic will take over.
If auto-answer is enabled, the original caller and the target agent do not hear any ringing; the call is just
connected between the original caller and the target agent.
If the agent is transferring the call to a generic (skill-group) DN to find an available agent with a
particular skill, but no such agent is currently available, then the ICM routing script should be configured
to translation-route the call to an IP IVR for queuing treatment. The call would still be released from the
transferring agent desktop almost immediately. Any call data collected by the transferring agent would
automatically be passed to the IVR. The caller will not hear any ringback tones because the IP IVR CTI
Port will answer immediately. When the target agent becomes ready, the ICM will instruct the IVR to
transfer the call, and the ICM will pop the agent desktop with all call data.
If the agent has transferred the call to a number that is not within the ICM Dialed Number Plan, then the
caller will be transferred anyway. The destination for the transferred call depends upon the number that
was dialed and what is configured in the Cisco CallManager dial plan. Transfers not using the dialed
number plan are not recommended because of agent roaming restrictions, call data not following the call,
and reporting limitations.

Consultative Transfer
Some parts of the message flow for a consultative transfer are similar to the message flow for a blind
transfer. When the Cisco CallManager PIM receives the label from the ICM router indicating where to
transfer the call, the Cisco CallManager PIM tells Cisco CallManager to initiate a consultative transfer
to the number specified in the label. Cisco CallManager places the original caller on hold and makes a
consultative call to the number specified in the label. The caller generally hears tone on hold while the
transfer is being completed.
When the target agent phone begins ringing, Cisco CallManager generates a Consult Call Confirmation
message and a Device Ringing message.
The consult call confirmation message causes the Cisco CallManager PIM to notify the transferring
agent's desktop that the call is proceeding, and it enables the Transfer Complete button. The transferring
agent can hear the target agent's phone ringing (assuming auto-answer is not enabled for the target
agent). At any time after this, the agent can click the Transfer Complete button to complete the transfer
(before or after the target answers their phone).
The Device Ringing message causes the Cisco CallManager PIM to pop the target agent's desktop with
call data and to enable their Answer button (assuming auto-answer is not enabled). When the target agent
clicks the Answer button (or auto-answer is invoked), a voice path between the transferring agent and
target agent is established (assuming the transferring agent has not clicked the Transfer Complete
button).
Generally, the transferring agent will not click the Transfer Complete button before the target agent
answers, because the usual reason for choosing a consultative transfer is to talk with the target agent
before completing the transfer. However, the transferring agent can click the Transfer Complete button
at any time after it is enabled.
If the agent is transferring the call to a generic DN to find an available agent with a particular skill, but
no such agent is currently available, then the ICM routing script should be configured to route the call
to an IVR for queuing. In this scenario, the transferring agent would hear the IP IVR queue
announcements. The transferring agent could press the Transfer Complete button at any time to complete
the transfer. The caller would then begin hearing the IP IVR queuing announcements. Upon availability
of an appropriately skilled agent, the IP IVR transfers the call to this target agent and pops any call data
onto their screen.
If the agent is transferring the call to a number that is not in the ICM Dialed Number Plan and a number
that is not valid on the Cisco CallManager, the transferring agent will hear the failed consultation call
and will be able to reconnect with the original caller, as explained in the section on Reconnect, page
1-19.

Reconnect
During the consultation leg of a consultative transfer, the transferring agent can reconnect with the caller
and release the consult call leg. To do so, the agent simply clicks the Reconnect button. This action
causes the agent desktop to instruct the Cisco CallManager PIM to instruct Cisco CallManager to release
the consultation call leg and to reconnect the agent with the original caller.
This is basically the process an agent should use when they want to make a consultation call but do not
plan to complete the transfer. After a call is successfully reconnected, the transferring agent’s desktop
functionality will be exactly the same as before they requested the transfer. Therefore, the transferring
agent can later request another transfer, and there is no limit to the number of consultation calls an agent
can make.
Consultative transfers and reconnects are all done from the agent desktop and use the single Cisco
CallManager extension that is associated with the IPCC. The IPCC system does not support allowing the
transferring agent to place the original caller on hold and then use a second extension on their hardware
phone to make a consultation call. The hardware phone offers a button to allow this kind of transfer, but
it is not supported in an IPCC environment. If an agent transfers a call in this way, any call data will
be lost because the ICM did not route the call.

Alternate
Alternate is the ability for the agent to place the consultation call leg on hold and then retrieve the
original call leg while in the midst of a consultative transfer. The agent can then alternate again to place
the original caller back on hold and retrieve the consultation call leg. An agent can alternate a call as
many times as they would like.
When the transferring agent has alternated back to the original caller, the only call controls (buttons) that
are enabled are Release and Alternate. The Transfer (Complete) and Reconnect controls will be disabled.
The Alternate control will alternate the transferring agent back to talking with the consulted party. When
the agent has alternated back to the consultation leg, the Release, Alternate, Transfer, and Reconnect call
controls will be enabled. The Alternate control will alternate the transferring agent back to talking with
the original caller. The Transfer control will complete the transfer, and the Reconnect button will drop
the consulted party and reconnect the agent with the original caller.

Non-ICM Transfers
Transfers to numbers not in the DNP or to numbers configured in the DNP with post-route set to No are
allowed but do not result in an ICM-routed call. In these scenarios, the PIM simply sends a call transfer
request directly to Cisco CallManager and uses the dialed number from the transfer dialog on the agent
desktop. Call data is lost if the ICM does not route the call. Cisco recommends that any dialed number
for a transfer should have a match in the DNP, that it be marked for post-route, and that it have a DNP
type that is allowed for the transferring agent (based on their agent desk settings).

Agent-to-Agent Transfers
If the transfer is to a specific agent, then the agent requesting the transfer must enter the agent ID into
the transfer dialog box. The DNP entry matching the dialed number (agent ID) must have DNP type
equal to PBX. This causes the PIM to place the dialed number (agent ID) into the CED field before it
sends the route request to the ICM router. In the script editor, use the agent-to-agent routing node and
specify the CED field as the location of the agent ID so that the ICM router will route this call properly.
Agent IDs should not match any of the extensions on the Cisco CallManager cluster. If you begin all
agent IDs with the same number and they all have the same length, you could set up a generic wildcard
string that matches all agent IDs so that you need only one entry in the DNP for agent-to-agent routing.
If your environment has multiple PIMs, then you must use an agent ID number plan to determine which
PIM contains this agent. Agent IDs by themselves are not unique. Agent IDs are associated with a
specific PIM and can be reused on other PIMs. By not repeating agent IDs across the enterprise and by
setting up a consistent agent ID assignment plan (such as all PIM 1 agent IDs begin with a 1, all PIM 2
agent IDs begin with a 2, and so on), you can parse the CED field in the script editor to determine which
PIM contains the agent. The parsing may be done via a series of "if" nodes in the script editor or via a
"route select" node. The agent-to-agent node requires the PIM to be specified.
In the event that the target agent is not in a ready state, the agent-to-agent script editor node allows
alternative routing for the call.
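The agent ID assignment plan just described can be parsed with a one-line rule, mimicking the series of "if" nodes or the "route select" node in the script editor (the numbering convention is the one assumed in the text above):

```python
def pim_for_agent(ced):
    """Return the PIM that owns the target agent, based on the convention
    that all PIM 1 agent IDs begin with 1, PIM 2 with 2, and so on."""
    if not ced or not ced[0].isdigit():
        return None  # not an agent ID; fall through to other routing
    return int(ced[0])

print(pim_for_agent("1507"))  # agent 1507 -> PIM 1
print(pim_for_agent("2031"))  # agent 2031 -> PIM 2
```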

Transferring from an IVR to a Specific Agent


Many contact centers often wish to prompt the caller for information such as an order number and then
route the call to the agent who is working on that order. This can be done on IPCC by having the router
do a database lookup, placing the agent ID in any of the peripheral variable fields, and then using the
agent-to-agent script editor node to route the call to that particular agent. The agent-to-agent node must
be configured to look for the agent ID value in the peripheral variable field where the IVR placed the
agent ID. If multiple PIMs exist, then the same configuration as discussed in the previous section would
be required.
This type of scenario could also be used to prompt a caller for a specific agent ID and then route the
caller to that agent ID.

Transfer Reporting
After a call transfer is completed, a call detail record for the original call leg will exist and a new call
detail record will be opened for the new call leg. The two call records are associated with one another
via a common call ID assigned by the ICM. The time during the consultation call leg, before the transfer
is completed, is considered as talk time for the transferring agent.
For more details, refer to the IPCC Reporting Guide, available online at Cisco.com.

Combination or Multiple Transfers


During a consultative transfer, the consulted target agent may transfer the consultation call to another
target agent. Then, when the transferring agent presses the Transfer Complete button, the original caller
will be connected to the second consulted target agent.
After a call has been successfully transferred, it can be transferred again. Each call leg generates a call
detail record in the ICM, and the talk time during that call leg is associated with the agent who received
that call. All call detail records are associated with one another via a common call ID assigned by the
ICM. This allows complete cradle-to-grave reporting for the call.
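The reporting model above can be sketched by grouping call detail records on the ICM-assigned call ID; the record fields and values here are illustrative:

```python
from collections import defaultdict

# One detail record per call leg; transferred legs share the ICM call ID.
records = [
    {"icm_call_id": 900, "leg": 1, "agent": "111", "talk_secs": 120},
    {"icm_call_id": 900, "leg": 2, "agent": "222", "talk_secs": 45},  # after transfer
    {"icm_call_id": 901, "leg": 1, "agent": "111", "talk_secs": 30},
]

by_call = defaultdict(list)
for rec in records:
    by_call[rec["icm_call_id"]].append(rec)

# Cradle-to-grave view of call 900: both legs, total talk time 165 seconds.
print([r["agent"] for r in by_call[900]])         # -> ['111', '222']
print(sum(r["talk_secs"] for r in by_call[900]))  # -> 165
```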

Transfers of Conferenced Calls


After a conference has been set up by an agent, transfer is no longer a valid operation, even if the
conferenced party has released.

PSTN Transfers (Takeback N Transfer, or Transfer Connect)


Many PSTN service providers offer a network-based transfer service. These services are generally
invoked by the customer premises equipment (CPE) outpulsing a series of DTMF tones. The PSTN is
provisioned to detect these tones and perform some specific logic based upon the tones detected. A
typical outpulse sequence might be something like *827500. This DTMF string could mean, "transfer
this call to site 2 and use 7500 as the DNIS value when delivering the call to site 2." IPCC has the ability
to invoke these types of transfers.
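Using the example string from the text, here is a sketch of how the CPE side might compose and decode such a sequence. The *8 prefix, one-digit site field, and trailing DNIS are assumptions for illustration, not a carrier standard:

```python
def build_outpulse(site, dnis):
    """Compose a transfer-connect DTMF string like the *827500 example."""
    return "*8" + str(site) + dnis

def parse_outpulse(dtmf):
    """Decode a transfer-connect DTMF string back into (site, DNIS)."""
    assert dtmf.startswith("*8"), "not a transfer-connect sequence"
    return int(dtmf[2]), dtmf[3:]

print(build_outpulse(2, "7500"))  # -> *827500
print(parse_outpulse("*827500"))  # -> (2, '7500')
```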

Call Admission Control


Quality of Service (QoS) is a necessary part of a Voice over IP (VoIP) environment. QoS has various
mechanisms to give voice traffic priority over data traffic, but QoS alone is not enough to guarantee good
voice quality. What is needed is a way to make sure that the bandwidth allocated on the WAN link is not
exceeded. Call admission control is a methodology for ensuring voice quality by limiting the number of
active calls allowed on the network at one time.
When voice is enabled as an application on a data network, a certain amount of bandwidth should be
allocated for voice traffic. This total voice bandwidth must be able to support the voice call itself plus
any call control traffic. For information on how to calculate the required bandwidth for voice traffic,
refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
For IPCC, the call center should be able to determine its busy hour call completions (BHCC) within the
WAN and use the information to determine the bandwidth that is needed for its calls. This bandwidth
should be added to the data traffic and any other voice traffic that is on the network. The sum of all these
applications should not exceed 75% of the available WAN bandwidth. The capacity of the WAN depends
on the network infrastructure. For details, refer to the Cisco AVVID Network Infrastructure Quality of
Service design guide, available at
http://www.cisco.com/go/srnd
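A worked example of the sizing rule above, assuming roughly 24 kbps of IP bandwidth per G.729 call (actual per-call bandwidth depends on codec, sampling rate, and link overhead; see the design guides referenced above):

```python
def wan_fits(link_kbps, simultaneous_calls, per_call_kbps, data_kbps):
    """Check the rule of thumb that voice plus data should not exceed
    75% of the available WAN bandwidth."""
    voice_kbps = simultaneous_calls * per_call_kbps
    return (voice_kbps + data_kbps) <= 0.75 * link_kbps

# A 1536 kbps (T1) link carrying 20 G.729 calls plus 512 kbps of data:
print(wan_fits(1536, 20, 24, 512))  # -> True  (992 kbps <= 1152 kbps)
print(wan_fits(1536, 40, 24, 512))  # -> False (1472 kbps > 1152 kbps)
```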
Call admission control makes sure that the active calls do not exceed the voice bandwidth allocation.
When a voice call is made, the bandwidth needed for that call is subtracted from the available voice
bandwidth pool. When a call disconnects, the bandwidth that was used for that call is returned to the
voice bandwidth pool. If the voice bandwidth pool is exhausted, then the next call request will be
rejected due to insufficient bandwidth. The entity that controls and manages this bandwidth pool is
called a gatekeeper. It is the gatekeeper's job to make sure that voice calls stay within the bandwidth
allotment.
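The admit/release cycle just described can be sketched as a simple counting pool. This is only the accounting idea; a real gatekeeper also handles endpoint registration, zones, and call signaling:

```python
class BandwidthPool:
    """Gatekeeper-style accounting: subtract bandwidth on call setup,
    return it on disconnect, and reject when the pool is exhausted."""
    def __init__(self, total_kbps):
        self.available = total_kbps

    def admit(self, call_kbps):
        if call_kbps > self.available:
            return False           # reject: insufficient bandwidth
        self.available -= call_kbps
        return True

    def release(self, call_kbps):
        self.available += call_kbps

pool = BandwidthPool(48)           # room for two 24 kbps calls
print(pool.admit(24), pool.admit(24), pool.admit(24))  # -> True True False
pool.release(24)
print(pool.admit(24))              # -> True
```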
In a Cisco CallManager environment, there are two types of call admission control:
• Gatekeeper Controlled, page 1-22
• Locations Controlled, page 1-23

Gatekeeper Controlled
Gatekeeper control means that there is an independent entity acting as a gatekeeper. The
gatekeeper-controlled model is used for distributed call processing deployments. Before sending the call
out the gateway or intercluster trunk, Cisco CallManager will ask the gatekeeper if there is enough
bandwidth for the call to go through the WAN to another site. (See Figure 1-6.)

Figure 1-6 Area of Concern for Distributed Call Processing Models

[Figure: multiple sites, each with its own Cisco CallManager cluster, voice mail server, router/gateway, and IP phones, connected through the IP WAN; a gatekeeper controls admission, and bandwidth must be provisioned on the WAN links between the sites.]

If the gatekeeper rejects the call, Cisco CallManager can perform digit manipulation on the dialed
digits and transparently send the call out to the PSTN.
For IPCC, it is important to define this alternate route and digit manipulation within the dialing plan in
case the gatekeeper does not allow the call onto the WAN. This is important because calls are sent to
agents and IVRs via routing clients (CTI Desktop, IVR, or CTI Route Point), which are not able to hang
up and redial the call. Without the alternate route, the caller would receive busy tone and never reach the
peripheral target (agent or IVR).
The ramification of sending calls out to the PSTN is that two gateway ports are consumed: the call must
come into and go back out of the main or branch site via voice gateway ports, and those ports stay in use
if the call is later transferred to another agent or IVR port at another site within the network.
The gatekeeper should be configured to allow enough bandwidth for call center traffic to go through.
The total amount of bandwidth needed would depend on whether incoming traffic from the PSTN is
routed through the WAN or if the WAN is used for inter-site transfers and conferences between agents.

Locations Controlled
For centralized call processing deployments, the locations-controlled model is used. In this model, Cisco
CallManager (not the gatekeeper) decides if there is enough bandwidth available on the WAN to send
the call. If there is not enough bandwidth, the call will fail. Transparent failover to the PSTN is not
available with locations-based call admission control. (See Figure 1-7.)

Figure 1-7 Area of Concern for Centralized Call Processing Models

[Figure: a central site with the Cisco CallManager cluster and voice mail server, connected through routers/gateways across the IP WAN to a remote site with IP phones; bandwidth must be provisioned on the WAN link between the sites.]

For IPCC, if the call fails due to insufficient bandwidth, the caller receives busy tone because the call is
routed by IVR or the CTI Desktop application, and there is no mechanism for the routing client to
disconnect the call and then dial again.
Therefore, it is important to calculate the bandwidth allocation for each branch office properly. The
number of simultaneous calls to each branch should be calculated. Inter-site transfer and conference
situations as well as normal office traffic should also be taken into account. Ideally, agent phones should
be allocated as one "location" within the location configuration of Cisco CallManager to make sure that
traffic generated to and from office workers' phones does not interfere with the bandwidth allocated to
call center traffic.

Chapter 2
Deployment Models

There are numerous ways that IPCC can be deployed, but the deployments can generally be categorized
into the following major types or models:
• Single Site
• Multi-Site Centralized Call Processing
• Multi-Site Distributed Call Processing
• Clustering over the WAN
Many variations or combinations of these deployment models are possible. The primary factors that
cause variations within these models are as follows:
• Locations of IPCC servers
• Locations of voice gateways
• Choice of inter-exchange carrier (IXC) or local exchange carrier (LEC) trunks
• Pre-routing availability
• IVR queuing platform and location
• Transfers
• Traditional ACD, PBX, and IVR integration
• Sizing
• Redundancy
This chapter discusses the impact of these factors (except for sizing) on the selection of a design. With
each deployment model, this chapter also lists considerations and risks that must be evaluated using a
cost/benefit analysis. Scenarios that best fit a particular deployment model are also noted.
A combination of these deployment models is also possible. For example, a multi-site deployment may
have some sites that use centralized call processing (probably small sites) and some sites that use
distributed call processing (probably larger sites). Examples of scenarios where combinations are likely
are identified within each section.
Also in this chapter is a section on integration of traditional ACD and IVR systems into an IPCC
deployment, with considerations on hybrid PBX/ACD deployments. Sizing and redundancy are
discussed in later chapters of this IPCC design guide. For more information on the network infrastructure
required to support an IPCC solution, refer to the Cisco Network Infrastructure Quality of Service
Design guide, available at
http://www.cisco.com/go/srnd


For more information on deployment models for IPCC and IP Telephony, refer to the Cisco IP Telephony
Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd

Single Site
A single-site deployment refers to any scenario where all voice gateways, agents, desktops, IP Phones,
and call processing servers (Cisco CallManager, Intelligent Contact Management (ICM), and IP IVR or
Internet Service Node (ISN)) are located at the same site and have no WAN connectivity between any
IPCC software modules. Figure 2-1 illustrates this type of deployment.

Figure 2-1 Single-Site Deployment

(Figure content not reproduced: a single site with voice gateways connected to the PSTN, a CallManager cluster, redundant ICM servers, IP IVR/ISN, an AW/HDS, and agent IP Phones. Legend: signaling/CTI, IP voice, TDM voice.)

Figure 2-1 shows two IP IVRs, a Cisco CallManager cluster, redundant ICM Proggers, an
Administrative Workstation (AW) and Historical Data Server (HDS), and a direct connection to the
PSTN from the voice gateways. Each ICM Progger (a single server combining the Router, Logger, and
PG) in this scenario runs the following major software processes:
• Router
• Logger
• Cisco CallManager Peripheral Interface Manager (PIM)
• Two IVR or ISN PIMs
• CTI Server
• CTI Object Server (CTI OS) or Cisco Agent Desktop Servers
Within this model, many variations are possible. For example, the ICM Central Controller and
Peripheral Gateways (PGs) could be split onto separate servers. For information on when to install the
ICM Central Controller and PG on separate servers, refer to the chapter on Sizing IPCC Components
and Servers, page 5-1.


The ICM could also be deployed in a simplex fashion instead of redundantly. For information on the
benefits and design for IPCC redundancy, refer to the chapter on Design Considerations for High
Availability, page 3-1.
This model does not specify the number of Cisco CallManager nodes, the hardware models used, or the
number of IP IVR or ISN servers. For information on determining the number and type of servers
required, refer to the chapter on Sizing IPCC Components and Servers, page 5-1.
Also not specified in this model is the specific data switching infrastructure required for the LAN, the
type of voice gateways, or the number of voice gateways and trunks. Cisco campus design guides and
IP Telephony design guides are available to assist in the design of these components. The chapter on
Sizing Call Center Resources, page 4-1, discusses how to determine the number of gateway ports.
Another variation in this model is to have the voice gateways connected to the line side of a PBX instead
of the PSTN. Connection to multiple PSTNs and a PBX all from the same single-site deployment is also
possible. For example, a deployment can have trunks from a local PSTN, a toll-free PSTN, and a
traditional PBX/ACD. For more information, see Traditional ACD Integration, page 2-34, and
Traditional IVR Integration, page 2-35.
This deployment model also does not specify the type of signaling (ISDN, MF, R1, and so on) to be used
between the PSTN and voice gateway or the specific signaling (H.323 or MGCP) to be used between the
voice gateway and Cisco CallManager.
The amount of digital signal processor (DSP) resources required for placing calls on hold, consultative
transfers, and conferencing is also not specified in this model. For information on sizing of these
resources, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available
at
http://www.cisco.com/go/srnd
The main advantage of the single-site deployment model is that there is no WAN connectivity required.
Given that there is no WAN in this deployment model, there is generally no need to use G.729 or any
other compressed Real-Time Transport Protocol (RTP) stream, so transcoding would not be required.

Treatment and Queuing with IP IVR


In this deployment model, all initial and subsequent queuing is done on the IP IVR. If multiple IP IVRs
are deployed, the ICM should be used to load-balance calls across those IP IVRs.

Treatment and Queuing with ISN


In this deployment model, all initial and subsequent queuing is done using ISN. A single server may be
used, with all ISN processes co-located on that server. Multiple servers, on the other hand, allow scaling
and redundancy. More information can be found in the sections on Sizing ISN Components, page 4-20,
and Design Considerations for High Availability, page 3-1.

Transfers
In this deployment model (as well as in the multi-site centralized call processing model), both the
transferring agent and target agent are on the same PIM. This also implies that both the routing client
and the peripheral target are the same peripheral (or PIM). The transferring agent generates a transfer to
a particular dialed number (for example, looking for any specialist in the specialist skill group).


Assuming that a match is found in the Dialed Number Plan (DNP) for the transfer request, that the DNP
type is allowed for the transferring agent, and that the post-route option is set to yes, the Cisco
CallManager PIM logic will generate a route request to the ICM router. The ICM router will match the
dialed number to a call type and activate the appropriate routing script. The routing script looks for an
available specialist.
If a target agent (specialist) is available to receive the transferred call, then the ICM router will return
the appropriate label to the routing client (the Cisco CallManager PIM). In this scenario, the label is
typically just the extension of the phone where the target agent is currently logged in. Upon receiving
the route response (label), the Cisco CallManager PIM will then initiate the transfer by sending a JTAPI
transfer request to the Cisco CallManager.
At the same time that the label is returned to the routing client, pre-call data (which includes any call
data that has been collected for this call) is delivered to the peripheral target. In this scenario, the routing
client and peripheral target are the same Cisco CallManager PIM. This is because the transferring agent
and the target agent are both associated with the same PIM. In some of the more complex scenarios
discussed in later sections, the routing client and peripheral target are not the same.
If a target agent is not available to receive the transferred call, then the ICM routing script is typically
configured to transfer the call to an IVR so that queue treatment can be provided. In this scenario, the
label is a dialed number that will instruct the Cisco CallManager to transfer the call to an IVR. Also in
this scenario, the routing client and peripheral target are different. The routing client is the Cisco
CallManager PIM, while the peripheral target is the specific IVR PIM to which the call is being
transferred.
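The two transfer outcomes described above can be sketched as follows. The function, extensions, and label values are purely illustrative of the routing script's decision, not actual ICM APIs or configuration.

```python
# Hypothetical sketch of the transfer routing decision described above:
# if a specialist is available, the label is that agent's extension and
# the peripheral target is the same CallManager PIM; otherwise the label
# sends the call to an IVR for queue treatment, and the peripheral target
# is the IVR PIM. All names and numbers are illustrative only.
def route_transfer(dialed_number, available_specialists, ivr_label="8001"):
    """Return (label, peripheral_target) for a post-routed transfer."""
    if available_specialists:
        agent_extension = available_specialists[0]
        # Pre-call data is delivered to the same peripheral that made
        # the route request.
        return agent_extension, "callmanager_pim"
    # No agent available: queue treatment on the IVR.
    return ivr_label, "ivr_pim"

print(route_transfer("5000", ["3021", "3044"]))  # ('3021', 'callmanager_pim')
print(route_transfer("5000", []))                # ('8001', 'ivr_pim')
```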

Multi-Site with Centralized Call Processing


A multi-site deployment with centralized call processing refers to any scenario where call processing
servers (Cisco CallManager, ICM, and IP IVR or ISN) are located at the same site, while any
combination of voice gateways, agents, desktops, and IP Phones are located remotely across a WAN link
or centrally. Figure 2-2 illustrates this type of deployment.
There are two variations of this model:
• Centralized Voice Gateways, page 2-4
• Distributed Voice Gateways, page 2-7

Centralized Voice Gateways


If an enterprise has small remote sites or offices in a metropolitan area where it is not efficient to place
call processing servers or voice gateways, then this model is most appropriate. As sites become larger
or more geographically dispersed, use of distributed voice gateways might be a better option.
Figure 2-2 illustrates this model.


Figure 2-2 Multi-Site Deployment with Centralized Call Processing and Centralized Voice Gateways

(Figure content not reproduced: a central site with voice gateways to the PSTN, a CallManager cluster, ICM with PG/CTI servers, an AW/HDS, and IVR/ISN connects over the VoIP WAN to remote sites containing agent IP Phones. Legend: signaling/CTI, IP voice, TDM voice.)

Advantages
• Only a small data switch and router, IP Phones, and agent desktops are needed at remote sites where
only a few agents exist, and only limited system and network management skills are required at
remote sites.
• No PSTN trunks are required directly into these small remote sites and offices, except for local
POTS lines for emergency services (911) in the event of a loss of the WAN link.
• PSTN trunks are used more efficiently because the trunks for small remote sites are aggregated.
• IPCC Queue Points (IP IVR or ISN) are used more efficiently because all Queue Points are
aggregated.
• No VoIP WAN bandwidth is used while calls are queuing (initial or subsequent).
As with the single-site deployment model, all the same variations exist. For example, multi-site
deployments can run the ICM software all on the same server or on multiple servers. The ICM software
can be installed as redundant or simplex. The number of Cisco CallManager and IP IVR or ISN servers
is not specified by the deployment model, nor are the LAN/WAN infrastructure, voice gateways, or
PSTN connectivity. For other variations, see Single Site, page 2-2.


Best Practices
• VoIP WAN connectivity is required for RTP traffic to agent phones at remote sites.
• RTP traffic to agent phones at remote sites may require compression to reduce VoIP WAN
bandwidth usage. It may be desirable for calls within a site to be uncompressed, so transcoding
might also be required depending upon how the IP Telephony deployment is designed.
• Skinny Client Control Protocol (SCCP) call control traffic from IP Phones to the Cisco CallManager
cluster flows over the WAN.
• CTI data to and from the IPCC Agent Desktop flows over the WAN. Adequate bandwidth and QoS
provisioning are critical for these links.
• Because there are no voice gateways at the remote sites, customers might have to dial a
long-distance number where a local PSTN call would suffice if voice gateways with trunks were
present at the remote site. This situation is not an issue if the business already expects customers to
dial toll-free (1-800) numbers terminating at the central site. An alternative is to offer customers a
toll-free number and route all of those calls to the centralized voice gateway location; however, this
requires the call center to incur toll-free charges that could be avoided if customers had a local
PSTN number to dial.
• The lack of local voice gateways with local PSTN trunks can also impact access to 911 emergency
services, and this must be managed via the Cisco CallManager dial plan. In most cases, local trunks
are configured to dial out locally and for 911 emergency calls.
• Cisco CallManager locations-based call admission control failure will result in a routed call being
disconnected. Therefore, it is important to provision adequate bandwidth to the remote sites. Also,
an appropriately designed QoS WAN is critical.

Treatment and Queuing with IP IVR


As in the single-site deployment, all call queuing is done on the IP IVR at a single central site. While
calls are queuing, no RTP traffic flows over the WAN. If requeuing is required during a transfer or
reroute on ring-no-answer, the RTP traffic flow during the queue treatment also does not flow over the
WAN. This reduces the amount of WAN bandwidth required to the remote sites.

Treatment and Queuing with ISN


In this model, ISN is used in the same way as IP IVR.

Transfers
In this scenario, the transferring agent and target agent are on the same Cisco CallManager cluster and
Cisco CallManager PIM. Therefore, the same call and message flows will occur as in the single-site
model, whether the transferring agent is on the same LAN as the target or on a different LAN. The only
differences are that QoS must be enabled and that appropriate LAN/WAN routing must be established.
For details on provisioning your WAN with QoS, refer to the Cisco Network Infrastructure Quality of
Service Design guide, available at
http://www.cisco.com/go/srnd
During consultative transfers where the agent (not the caller) is routed to an IP IVR port for queuing
treatment, transcoding is required because the IP IVR can generate only G.711 media streams.


Distributed Voice Gateways


A variation of the centralized call processing model can include multiple voice gateway locations. This
distributed voice gateway model may be appropriate for a company with many small sites, each
requiring local PSTN trunks for incoming calls. This model provides local PSTN connectivity for local
calling and access to local emergency services. Figure 2-3 illustrates this model.

Figure 2-3 Multi-Site Deployment with Centralized Call Processing and Distributed Voice Gateways
with IP IVR

(Figure content not reproduced: the central site hosts the CallManager cluster, ICM with PG/CTI servers, AW/HDS, and IVR/ISN, while voice gateways with local PSTN trunks and agent IP Phones sit at the remote sites across the VoIP WAN. Legend: signaling/CTI, IP voice, TDM voice.)

In this deployment model, shown with IP IVR for queuing and treatment, it might be desirable to restrict
calls arriving at a site so that they are handled by agents within that site, but this is not required. By
restricting calls to the site where they arrive:
• VoIP WAN bandwidth is reduced for calls going to agents.
• Customer service levels for calls arriving into that site might suffer due to longer queue times and
handle times.
• Longer queue times can occur because, even though an agent at another site is available, the IPCC
configuration may continue to queue for an agent at the local site only.
• Longer handle times can occur because, even though a more qualified agent exists at another site,
the call may be routed to a local agent to reduce WAN bandwidth usage.


It is important for deployment teams to carefully assess the trade-offs between operational costs and
customer satisfaction levels to establish the right balance on a customer-by-customer basis. For example,
it may be desirable to route a specific high-profile customer to an agent at another site to reduce their
queue time and allow the call to be handled by a more experienced representative, while another
customer may be restricted to an agent within the site where the call arrived.
An IPCC deployment may actually use a combination of centralized and distributed voice gateways. The
centralized voice gateways can be connected to one PSTN carrier providing toll-free services, while the
distributed voice gateways can be connected to another PSTN carrier providing local phone services.
Inbound calls from the local PSTN could be both direct inward dial (DID) and contact center calls. It is
important to understand the requirements for all inbound and outbound calling to determine the most
efficient location for voice gateways. Identify who is calling, why they are calling, where they are calling
from, and how they are calling.
In multi-site deployments with distributed voice gateways, the ICM's pre-routing capability can also be
used to load-balance calls dynamically across the multiple sites. A list of PSTN carriers that offer ICM
pre-routing services can be found in the ICM product documentation available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/
In multi-site environments where the voice gateways have both local PSTN trunks and separate toll-free
trunks delivering contact center calls, the ICM pre-routing software can load-balance the toll-free
contact center calls around the local contact center calls. For example, suppose you have a two-site
deployment where Site 1 currently has all agents busy and many calls in queue from locally originated
calls, and Site 2 has only a few calls in queue or maybe even a few agents currently available. In that
scenario, you could have the ICM instruct the toll-free provider to route most or all of the toll-free calls
to Site 2. This type of multi-site load balancing provided by the ICM is dynamic and automatically
adjusts as call volumes change at all sites.
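The load-balancing behavior described above can be approximated with a simple sketch. The site names, metrics, and selection policy here are hypothetical illustrations of what a pre-routing script might weigh, not the actual ICM algorithm.

```python
# Illustrative-only sketch of the pre-routing decision described above:
# send the next toll-free call to the site with the most available
# agents, or, if every site is busy, to the site with the shortest
# queue. All site data is hypothetical.
def pick_site(sites):
    """sites: {name: {"available_agents": int, "calls_in_queue": int}}"""
    with_agents = {n: s for n, s in sites.items() if s["available_agents"] > 0}
    if with_agents:
        return max(with_agents, key=lambda n: with_agents[n]["available_agents"])
    return min(sites, key=lambda n: sites[n]["calls_in_queue"])

sites = {
    "site1": {"available_agents": 0, "calls_in_queue": 25},  # all agents busy
    "site2": {"available_agents": 3, "calls_in_queue": 0},
}
print(pick_site(sites))  # site2
```

In the two-site scenario above, the ICM would instruct the toll-free provider to deliver most or all incoming calls to Site 2 until conditions change.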
Just as in the two previous deployment models, much variation exists in the number and type of ICM,
Cisco CallManager, and IP IVR or ISN servers; LAN/WAN infrastructure; voice gateways; PSTN
connectivity; and so forth.

Advantages
• Only limited systems management skills are needed for the remote sites because most servers,
equipment, and system configurations are managed from a centralized location.
• The ICM pre-routing option can be used to load-balance calls across sites, including sites with local
PSTN trunks in addition to toll-free PSTN trunks.
• No WAN RTP traffic is required for calls arriving at each remote site that are handled by agents at
that remote site.

Best Practices
• The IP IVR or ISN, Cisco CallManager, and PGs (for both Cisco CallManager and IVR/ISN) are
co-located. In this model, the only IPCC communications that can be separated across a WAN are
the following:
– ICM Central Controller to ICM PG
– ICM PG to IPCC Agent Desktops
– Cisco CallManager to voice gateways
– Cisco CallManager to IP Phones
• If calls are not going to be restricted to the site where calls arrive, or if calls will be made between
sites, more RTP traffic will flow across the WAN. It is important to determine the maximum number
of calls that will flow between sites or locations. Cisco CallManager locations-based call admission
control failure will result in a routed call being disconnected (rerouting within Cisco CallManager
is not currently supported). Therefore, it is important to provision adequate bandwidth to the remote
sites, and appropriately designed QoS for the WAN is critical.
• H.323 or MGCP signaling traffic between the voice gateways and the centralized Cisco
CallManager servers will flow over the WAN. Proper QoS implementation on the WAN is critical,
and signaling delays must be within tolerances listed in the Cisco IP Telephony Solution Reference
Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd

Treatment and Queuing with IP IVR


WAN bandwidth must be provisioned to support all calls that will be treated and queued at the central
site.
Centralized IP IVRs provide efficiency of IP IVR ports when compared with smaller deployments of IP
IVRs at each remote site.

Treatment and Queuing with ISN


Using ISN for treatment and queuing allows you to reduce the amount of voice bearer traffic traveling
across the WAN. ISN queues and treats calls on the remote gateways, eliminating the need to terminate
the voice bearer traffic at the central site. WAN bandwidth must still be provisioned for transfers and
conferences that involve agents at other locations.

Transfers
Intra-site or inter-site transfers using the VoIP WAN to send the RTP stream from one site to another
will occur basically the same way as a single-site transfer or a transfer in a deployment with centralized
voice gateways.
An alternative to using the VoIP WAN for routing calls between sites is to use a carrier-based PSTN
transfer service. These services allow the IPCC voice gateways to outpulse DTMF tones to instruct the
PSTN to reroute (transfer) the call to another voice gateway location. Each site can be configured within
the ICM as a separate peripheral. The label then indicates whether a transfer is intra-site or inter-site,
using Takeback N Transfer (TNT).

Multi-Site with Distributed Call Processing


Enterprises with multiple medium to large sites separated by large distances tend to prefer a distributed
call processing model. In this model, each site has its own Cisco CallManager cluster, treatment and
queue points, PGs, and CTI Server. However, as with the centralized call processing model, sites could
be deployed with or without local voice gateways. Some deployments may also contain a combination
of distributed voice gateways (possibly for locally dialed calls) and centralized voice gateways (possibly
for toll-free calls) as well as centralized or distributed treatment and queue points.
Regardless of how many sites are being deployed, there will still be only one logical ICM Central
Controller. If the ICM Central Controller is deployed with redundancy, side A and B can be deployed
side-by-side or geographically separated (remote redundancy). For details on remote redundancy, refer
to the ICM product documentation available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/


Distributed Voice Gateways with Treatment and Queuing Using IP IVR


This deployment model is a good choice if the company has multiple medium to large sites. In this
model, voice gateways with PSTN trunks terminate into each site. Just as in the centralized call
processing model with distributed voice gateways, it may be desirable to limit the routing of calls to
agents within the site where the call arrived (to reduce WAN bandwidth). An analysis of benefits from
customer service levels versus WAN costs is required to determine whether limiting calls within a site
is recommended. Figure 2-4 illustrates this model.

Figure 2-4 Multi-Site Deployment with Distributed Call Processing and Distributed Voice Gateways with IP IVR

(Figure content not reproduced: the ICM Central Controller and AW/HDS connect to two sites, each with its own CallManager cluster, PG/CTI servers, IP IVRs, local PSTN voice gateways, and agent IP Phones, joined by the VoIP WAN. Legend: signaling/CTI, IP voice, TDM voice.)

As with the previous models, many variations are possible. The number and type of ICM Servers, Cisco
CallManager servers, and IP IVR servers can vary. LAN/WAN infrastructure, voice gateways, PSTN
trunks, redundancy, and so forth are also variable within this deployment model. Central processing and
gateways may be added for self-service, toll-free calls, and support for smaller sites. In addition, the use
of a pre-routing PSTN Network Interface Controller (NIC) is also an option.

Advantages
• Each independent site can scale to support up to 2000 agents per Cisco CallManager cluster, and
there is no software limit to the number of sites that can be combined by the ICM Central Controller
(which supports up to 80 PGs) to produce a single enterprise-wide contact center.
• All or most VoIP traffic can be contained within the LAN of each site, if desired. The QoS WAN
shown in Figure 2-4 would be required for voice calls to be transferred across sites. Use of a PSTN
transfer service (for example, Takeback N Transfer) could eliminate that need. If desired, a small
portion of calls arriving at a particular site can be queued for agent resources at other sites to
improve customer service levels.
• ICM pre-routing can be used to load-balance calls to the best site to reduce WAN usage for VoIP
traffic.


• Failure at any one site has no impact on operations at another site.
• Each site can be sized according to the requirements for that site.
• The ICM Central Controller provides centralized management for configuration of routing for all
calls within the enterprise.
• The ICM Central Controller provides the capability to create a single enterprise-wide queue.
• The ICM Central Controller provides consolidated reporting for all sites.

Best Practices
• The PG, Cisco CallManager cluster, and IP IVR must be co-located.
• The communication link from the ICM Central Controller to the PG must be sized properly and
provisioned for bandwidth and QoS. (For details, refer to the chapter on Bandwidth Provisioning
and QoS Considerations, page 8-1.)
• Gatekeeper-based call admission control could be used to reroute calls between sites over the PSTN
when WAN bandwidth is not available. It is best to ensure that adequate WAN bandwidth exists
between sites for the maximum amount of calling that can occur.
• If the communication link between the PG and the ICM Central Controller is lost, then all contact
center routing for calls at that site is also lost. Therefore, it is important to implement a fault-tolerant
WAN. Even when a fault-tolerant WAN is implemented, it is important to identify contingency
plans for call treatment and routing when communication is lost between the ICM Central Controller
and PG. For example, in the event of a lost ICM Central Controller connection, the Cisco
CallManager CTI route points could send the calls to IP IVR ports to provide basic announcement
treatment or to invoke a PSTN transfer to another site. Another alternative is for the Cisco
CallManager cluster to route the call to another Cisco CallManager cluster that may have a PG with
an active connection to the ICM Central Controller.
• While two inter-cluster call legs for the same call will not cause unnecessary RTP streams, two
separate call signaling control paths will remain intact between the two clusters, producing a logical
hairpinning and reducing the number of available inter-cluster trunks by two.
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).
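The latency budget in the last bullet can be checked with a trivial helper; the function name and measured values below are illustrative only.

```python
# Simple check of the latency budget stated above: at most 200 ms one
# way (400 ms round-trip) between the ICM Central Controller and a
# remote PG. The helper name is illustrative, not a Cisco tool.
MAX_ONE_WAY_MS = 200

def pg_link_ok(measured_rtt_ms: float) -> bool:
    """True if a measured round-trip time fits the PG latency budget."""
    return measured_rtt_ms / 2 <= MAX_ONE_WAY_MS

print(pg_link_ok(180))  # True: well within budget
print(pg_link_ok(450))  # False: 225 ms one way exceeds 200 ms
```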

Treatment and Queuing


Initial call queuing is done on an IP IVR co-located with the voice gateways, so no transcoding is
required. When a call is transferred and subsequent queuing is required, the queuing should be done on
an IP IVR at the site where the call is currently being processed. For example, if a call comes into Site 1
and gets routed to an agent at Site 2, but that agent needs to transfer the call to another agent whose
location is unknown, the call should be queued to an IP IVR at Site 2 to avoid generating another
inter-cluster call. A second inter-cluster call would be made only if an agent at Site 1 was selected for
the transfer. The RTP flow at this point would be directly from the voice gateway at Site 1 to the agent's
IP Phone at Site 1. However, the two Cisco CallManager clusters would still logically see two calls in
progress between the two clusters.


Transfers
Transfers within a site function just like a single-site transfer. Transfers between Cisco CallManager
clusters use either the VoIP WAN or a PSTN service.
If the VoIP WAN is used, sufficient inter-cluster trunks must be configured. An alternative to using the
VoIP WAN for routing calls between sites is to use a PSTN transfer service. These services allow the
IPCC voice gateways to outpulse DTMF tones to instruct the PSTN to reroute (transfer) the call to
another voice gateway location. Another alternative is to have the Cisco CallManager cluster at Site 1
make an outbound call back to the PSTN. The PSTN would then route the call to Site 2, but the call
would use two voice gateway ports at Site 1 for the remainder of the call.

Distributed Voice Gateways with Treatment and Queuing Using ISN


This deployment model is the same as the previous model, except that ISN is used instead of IP IVR for
call treatment and queuing. In this model, voice gateways with PSTN trunks terminate into each site.
Just as in the centralized call processing model with distributed voice gateways, it may be desirable to
limit the routing of calls to agents within the site where the call arrived (to reduce WAN bandwidth).
Call treatment and queuing may also be achieved at the site where the call arrived, further reducing the
WAN bandwidth needs. Figure 2-5 illustrates this model.

Figure 2-5 Multi-Site Deployment with Distributed Call Processing and Distributed Voice Gateways with ISN

(Figure content not reproduced: the same layout as Figure 2-4, with a centrally located ISN PG and ISN servers alongside the ICM Central Controller in place of the per-site IP IVRs. Legend: signaling/CTI, IP voice, TDM voice.)

As with the previous models, many variations are possible. The number and type of ICM Servers, Cisco
CallManager servers, and ISN servers can vary. LAN/WAN infrastructure, voice gateways, PSTN
trunks, redundancy, and so forth are also variable within this deployment model. Central processing and
gateways may be added for self-service, toll-free calls, and support for smaller sites. In addition, the use
of a pre-routing PSTN Network Interface Controller (NIC) is also an option.


Advantages
• ISN Servers can be located either centrally or remotely. Call treatment and queuing will still be
distributed, executing on the local gateway, regardless of ISN server location. ISN is shown
centrally located in Figure 2-5.
• Each independent site can scale to support up to 2000 agents per Cisco CallManager cluster, and
there is no software limit to the number of sites that can be combined by the ICM Central Controller
to produce a single enterprise-wide contact center.
• All or most VoIP traffic can be contained within the LAN of each site, if desired. The QoS WAN
would be required for voice calls to be transferred across sites. Usage of a PSTN transfer service
(for example, Takeback N Transfer) could eliminate that need. If desired, a small portion of calls
arriving at a particular site can be queued for agent resources at other sites to improve customer
service levels.
• ICM pre-routing can be used to load-balance calls to the best site to reduce WAN usage for VoIP
traffic.
• Failure at any one site has no impact on operations at another site.
• Each site can be sized according to the requirements for that site.
• The ICM Central Controller provides centralized management for configuration of routing for all
calls within the enterprise.
• The ICM Central Controller provides the capability to create a single enterprise-wide queue.
• The ICM Central Controller provides consolidated reporting for all sites.

Best Practices
• The Cisco CallManager PG and Cisco CallManager cluster must be co-located. The ISN PG and ISN
servers must be co-located.
• The communication link from the ICM Central Controller to PG must be properly sized and
provisioned for bandwidth and QoS. Cisco provides a partner tool called the VRU Peripheral
Gateway to ICM Central Controller Bandwidth Calculator to assist in calculating the VRU
PG-to-ICM bandwidth requirement. This tool is available online at
http://www.cisco.com/partner/WWChannels/technologies/resources/IPCC_resources.html
• If the communication link between the PG and the ICM Central Controller is lost, then all contact
center routing for calls at that site is lost. Therefore, it is important that a fault-tolerant WAN is
implemented. Even when a fault-tolerant WAN is implemented, it is important to identify
contingency plans for call treatment and routing when communication is lost between the ICM
Central Controller and PG.
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).

Treatment and Queuing


ISN queues and treats calls on the remote gateways, eliminating the need to terminate the voice bearer
traffic at the central site. ISN servers may be located at the central site or distributed to remote sites.
WAN bandwidth must still be provisioned for transfers and conferences that involve agents at other
locations.
Unlike IP IVR, with ISN the call legs are torn down and reconnected, avoiding signaling hairpins. With
IP IVR, two separate call signaling control paths remain intact between the two clusters, producing
a logical hairpin and reducing the number of available intercluster trunks by two.


Transfers
Transfers within a site function just like a single-site transfer. Transfers between Cisco CallManager
clusters use either the VoIP WAN or a PSTN service.
If the VoIP WAN is used, sufficient intercluster trunks must be configured. An alternative to using the
VoIP WAN for routing calls between sites is to use a PSTN transfer service. These services allow the
IPCC voice gateways to outpulse DTMF tones to instruct the PSTN to reroute (transfer) the call to
another voice gateway location. Another alternative is to have the Cisco CallManager cluster at Site 1
make an outbound call back to the PSTN. The PSTN would then route the call to Site 2, but the call
would use two voice gateway ports at Site 1 for the remainder of the call.

Distributed ICM Option with Distributed Call Processing Model


Figure 2-6 illustrates this deployment model.

Figure 2-6 Distributed ICM Option Shown with IP IVR

[Figure 2-6: ICM Central Controller sides A and B, each with an AW/HDS, are connected by a dedicated
signaling/CTI private link. Each side has redundant PG/CTI servers and IP IVRs attached to its own
Cisco CallManager cluster (Cluster 1 and Cluster 2). The clusters are joined by a VoIP WAN, and agents
at each site are reached through local voice gateways connected to the PSTN.]

Advantages
The primary advantage of the distributed ICM option is the redundancy gained from splitting the ICM
Central Controller between two redundant sites.


Best Practices
• ICM Central Controllers (Routers and Loggers) must have a dedicated link to carry the private
communication between the two redundant sites. In a non-distributed ICM model, the private traffic
usually traverses an Ethernet crossover cable or LAN connected directly between the side A and
side B ICM Central Controller components. In the distributed ICM model, the private
communication between the A and B ICM components must travel across a dedicated link such as
a T1.
• Latency across the private dedicated link cannot exceed 100 ms one way (200 ms round-trip).
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).
• The private link cannot traverse the same path as public traffic. The private link must have path
diversity and must reside on a link that is completely path-independent from ICM public traffic.
• The redundant centralized model is explored in the next section, Clustering Over the WAN.
Clustering Over the WAN


Clustering over the WAN for IPCC allows full agent redundancy in the case of a central-site outage.
Implementation of clustering over the WAN for IPCC does have several strict requirements that differ
from other models. Bandwidth between central sites for ICM public and private traffic, Cisco
CallManager intra-cluster communications (ICC), and all other voice-related media and signaling must
be properly provisioned with QoS enabled. The WAN between central sites must be highly available
(HA) with separate ICM (PG and Central Controller) private links.

Advantages
• No single point of failure, including loss of an entire central site
• Remote agents require no reconfiguration to remain fully operational in case of site or link outage.
When outages occur, agents and agent devices dynamically switch to the redundant site.
• Central administration for both ICM and Cisco CallManager
• Reduction of servers for distributed deployment

Best Practices
• The highly available (HA) WAN between the central sites must be fully redundant with no single
point of failure. (For information regarding site-to-site redundancy options, refer to the WAN
infrastructure and QoS design guides available at http://www.cisco.com/go/srnd.) In case of partial
failure of the highly available WAN, the redundant link must be capable of handling the full
central-site load with all QoS parameters. For more information, see the section on Bandwidth and
QoS Requirements for IPCC Clustering Over the WAN, page 2-22.
• A highly available (HA) WAN using point-to-point technology is best implemented across two
separate carriers, but this is not necessary when using a ring technology.
• Latency requirements across the highly available (HA) WAN must meet the current Cisco IP
Telephony requirements for clustering over the WAN. Currently a maximum latency of 20 ms one
way (40 ms round-trip) is allowed. This equates to a transmission distance of approximately
1860 miles (3000 km) under ideal conditions. The transmission distance will be lessened by
network conditions that cause additional latency. For full specifications, refer to the Cisco IP
Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd


• IPCC latency requirements can be met by conforming to IP Telephony requirements. However, the
bandwidth requirements for Cisco CallManager intra-cluster communications differ between IPCC
and IP Telephony. For more information, see the section on Bandwidth and QoS Requirements for
IPCC Clustering Over the WAN, page 2-22.
• Bandwidth requirements across the highly available (HA) WAN include bandwidth and QoS
provisioning for (see Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page
2-22):
– Cisco CallManager intra-cluster communications (ICC)
– Communications between Central Controllers
– Communications between Central Controller and PG
– Communications between CTI Object Server (CTI OS) and CTI Server, if using CTI OS
• Separate dedicated link(s) for ICM private communications are required between ICM Central
Controllers Side A and Side B and between PGs Side A and Side B to ensure path diversity. Path
diversity is required due to the architecture of ICM. Without path diversity, the possibility of a dual
(public communication and private communication) failure exists. If a dual failure occurs even for
a moment, ICM instability and data loss may occur, including the corruption of one logger database.
• Dedicated private link(s) may be two separate dedicated links, one for Central Controller private and
one for Cisco CallManager PG private, or one converged dedicated link containing Central
Controller and PG private. See Site-to-Site ICM Private Communications Options, page 2-20, for
more information.
• Separate paths must exist from agent sites to each central site. Both paths must be capable of
handling the full load of signaling, media, and other traffic if one path fails. These paths may reside
on the same physical link from the agent site, with a WAN technology such as Frame Relay using
multiple permanent virtual circuits (PVCs).
• The minimum cluster size using IP IVR as the treatment and queuing platform is 5 nodes (publisher
plus 4 subscribers). This minimum is required to allow IP IVR at each site to have redundant
connections locally to the cluster without traversing the WAN. JTAPI connectivity between Cisco
CallManager and IP IVR is not supported across the WAN. Local gateways also will need local
redundant connections to Cisco CallManager.
• The minimum cluster size using ISN as the treatment and queuing platform is 3 nodes (publisher
plus 2 subscribers). However, Cisco recommends 5 nodes, especially if there are IP Phones (either
contact center or non-contact center) local to the central sites, central gateways, or central media
resources that would require local failover capabilities.


Centralized Voice Gateways with Centralized Call Treatment and Queuing


Using IP IVR
In this model, the voice gateways are located in the central sites. IP IVR is centrally located and used
for treatment and queuing on each side. Figure 2-7 illustrates this model.

Figure 2-7 Centralized Voice Gateways with Centralized Call Treatment and Queuing Using IP IVR

[Figure 2-7: ICM A at Site 1 and ICM B at Site 2, joined by ICM private and ICM public links. The
CallManager cluster is split across the highly available WAN (subscribers 1-3 at Site 1, 4-5 at Site 2,
joined by ICC traffic), with IVR 1 and IVR 2, PG pairs (PG 1A/1B and PG 2A/2B), CTI OS 1A/1B, and
local PSTN voice gateways at each central site. A remote agent site connects to both central sites
across the WAN.]

Advantages
• Component location and administration are centralized.
• Calls are treated and queued locally, eliminating the need for queuing across a WAN connection.

Best Practices
• WAN connections to agent sites must be provisioned with bandwidth for voice as well as control
and CTI. See Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page 2-22, for
more information.
• Local voice gateway may be needed at remote sites for local out-calling and 911. For more
information, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide,
available at
http://www.cisco.com/go/srnd
• Central site outages would include loss of half of the ingress gateways, assuming a balanced
deployment. Gateways and IVRs must be scaled to handle the full load in both sites if one site fails.
• Carrier call routing must be able to route calls to the alternate site in the case of a site or gateway
loss. Pre-routing may be used to balance the load, but it will not be able to prevent calls from being
routed to a failed central site. Pre-routing is not recommended.


Centralized Voice Gateways with Centralized Call Treatment and Queuing


Using ISN
In this model, the voice gateways are Voice XML (VXML) gateways located in the central sites. ISN is
centrally located and used for treatment and queuing. Figure 2-8 illustrates this model.

Figure 2-8 Centralized Voice Gateways with Centralized Call Treatment and Queuing Using ISN

[Figure 2-8: same layout as Figure 2-7, with ISN 1 and ISN 2 in place of the IP IVRs and a gatekeeper
(Gatekeeper 1 and Gatekeeper 2) added at each central site.]

Advantages
• Component location and administration are centralized.
• Calls are treated and queued locally, eliminating the need for queuing across a WAN connection.
• There is less load on Cisco CallManager because ISN is the primary routing point. This allows
higher scalability per cluster compared to IP IVR implementations. See Sizing IPCC Components
and Servers, page 5-1, for more information.

Best Practices
• WAN connections to agent sites must be provisioned with bandwidth for voice as well as control
and CTI. See Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page 2-22, for
more information.
• A local voice gateway might be needed at remote sites for local out-calling and 911.


Distributed Voice Gateways with Distributed Call Treatment and Queuing


Using ISN
In this model, the voice gateways are VXML gateways distributed to agent locations. ISN is centrally
located and used for treatment and queuing on the remote gateways. Figure 2-9 illustrates this model.

Figure 2-9 Distributed Voice Gateways with Distributed Call Treatment and Queuing Using ISN

[Figure 2-9: same layout as Figure 2-8, except that the VXML voice gateways are located at the remote
agent sites rather than at the central sites, so PSTN calls ingress at the agent locations.]

Advantages
• No or minimal voice RTP traffic across WAN links if ingress calls and gateways are provisioned to
support primarily their local agents. Transfers and conferences to other sites would traverse the
WAN.
• Calls are treated and queued at the agent site, eliminating the need for queuing across a WAN
connection.
• Local calls incoming and outgoing, including 911, can share the local VXML gateway.
• There is less load on Cisco CallManager because ISN is the primary routing point. This allows
higher scalability per cluster compared to IP IVR implementations. See Sizing IPCC Components
and Servers, page 5-1, for more information.


Best Practices
• Distributed gateways require minimal additional remote maintenance and administration over
centralized gateways.
• The media server for ISN may be centrally located or located at the agent site. Media may also be
run from gateway flash. Locating the media server at the agent site reduces bandwidth requirements
but adds to the decentralized model.

Site-to-Site ICM Private Communications Options


ICM private communications must travel on a separate path from the public communications between
ICM components. There are two options for achieving this path separation: dual and single links.

ICM Central Controller Private and Cisco CallManager PG Private Across Dual Links
Dual links, shown in Figure 2-10, separate ICM Central Controller Private traffic from VRU/CM PG
Private traffic.

Figure 2-10 ICM Central Controller Private and Cisco CallManager PG Private Across Dual Links

[Figure 2-10: Site 1 and Site 2, with one dedicated link carrying private traffic between ICM A and
ICM B and a second, separate dedicated link carrying private traffic between PG A and PG B.]

Advantages
• Failure of one link does not cause both the ICM Central Controller and PG to enter simplex mode,
thus reducing the possibility of an outage due to a double failure.
• The QoS configuration is limited to two classifications across each link, therefore links are simpler
to configure and maintain.
• Resizing or alterations of the deployment model and call flow may affect only one link, thus
reducing the QoS and sizing changes needed to ensure proper functionality.
• Unanticipated changes to the call flow or configuration (including misconfiguration) are less likely
to cause issues across separate private links.

Best Practices
• The links must be across separate dedicated circuits. The links themselves do not have to be
redundant, and one link must not serve as a backup path for the other.
• Link sizing and configuration must be examined before any major change to call load, call flow, or
deployment configuration.


• The link must be a dedicated circuit and not be tunneled across the highly available (HA) WAN. See
Best Practices, page 2-15, at the beginning of the section on Clustering Over the WAN, page 2-15,
for more information on path diversity.

ICM Central Controller Private and Cisco CallManager PG Private Across Single Link
A single link, shown in Figure 2-11, carries both ICM Central Controller Private traffic and VRU/CM
PG Private traffic. Single-link implementations are more common and less costly than dual-link
implementations.

Figure 2-11 ICM Central Controller Private and Cisco CallManager PG Private Across Single Link

[Figure 2-11: Site 1 and Site 2, with a single dedicated link carrying both the ICM A-to-ICM B private
traffic and the PG A-to-PG B private traffic.]

Advantages
• Less costly than separate-link model
• Fewer links to maintain, although the QoS configuration on the single link is more complex

Best Practices
• The link does not have to be redundant. If a redundant link is used, however, latency on failover
must not exceed 500 ms.
• Separate QoS classifications and reserved bandwidth are required for Central Controller
high-priority and PG high-priority communications. For details, see Bandwidth Provisioning and
QoS Considerations, page 8-1.
• Link sizing and configuration must be examined before any major change to call load, call flow, or
deployment configuration. This is especially important in the single link model.
• Link must be a dedicated circuit fully isolated from, and not tunneled across, the highly available
(HA) WAN. See Best Practices, page 2-15, at the beginning of the section on Clustering Over the
WAN, page 2-15, for more information on path diversity.


Bandwidth and QoS Requirements for IPCC Clustering Over the WAN
Bandwidth must be provisioned to properly size links and set reservations within those links. The
following sections detail bandwidth requirements for ICM Private, ICM Public, Cisco CallManager, and
CTI traffic.

Highly Available WAN


Bandwidth must be guaranteed across the highly available (HA) WAN for all ICM public, CTI, and
Cisco CallManager intra-cluster communications (ICC). Additionally, bandwidth must be guaranteed
for any calls going across the highly available WAN. Minimum total bandwidth required across the
highly available WAN for all IPCC signaling is 2 Mbps.
Additional information regarding QoS design and provisioning can be found in the QoS design guides
available at
http://www.cisco.com/go/srnd
Signaling communication must be guaranteed for the following connections:
• ICM Central Controller to Cisco CallManager PG, page 2-22
• ICM Central Controller to IP IVR or ISN PG, page 2-22
• IP IVR or ISN PG to IP IVR or ISN, page 2-22
• PG to PG Test Other Side, page 2-23
• CTI Server to CTI OS, page 2-23
• Cisco CallManager Intra-Cluster Communications (ICC), page 2-23

ICM Central Controller to Cisco CallManager PG


Cisco provides a partner tool called the ACD/CallManager Peripheral Gateway to ICM Central
Controller Bandwidth Calculator to assist in calculating the Cisco CallManager PG-to-ICM bandwidth
requirement. This tool is available online at
http://www.cisco.com/partner/WWChannels/technologies/resources/IPCC_resources.html

ICM Central Controller to IP IVR or ISN PG


Cisco also provides a partner tool to compute bandwidth needed between ICM Central Controller and
the IP IVR PG. This tool is available to Cisco partners and Cisco employees at the following link:
http://www.cisco.com/partner/WWChannels/technologies/resources/IPCC_resources.html
At this time, no tool exists that specifically addresses communication between the ICM Central
Controller and the ISN PG. Testing has shown, however, that using the tool for the link between the ICM
Central Controller and IP IVR will produce accurate measurements. To achieve accurate measurements,
you have to make a substitution in one field: the field "Average number of RUN VRU script nodes"
should be treated as "Number of ICM script nodes that interact with ISN."

IP IVR or ISN PG to IP IVR or ISN


At this time, no tool exists that specifically addresses communication between the IP IVR or ISN PG
and the IP IVR or ISN. However, the tool mentioned in the two previous sections produces a fairly
accurate measurement of bandwidth needed for this communication. Bandwidth consumed between the
ICM Central Controller and IP IVR or ISN PG is very similar to the bandwidth consumed between the
IP IVR or ISN PG and the IP IVR or ISN.


The tool is available to Cisco partners and Cisco employees at


http://www.cisco.com/partner/WWChannels/technologies/resources/IPCC_resources.html
If the IP IVR or ISN PGs are split across the WAN, the total bandwidth required would be double what
the tool reports: once for the ICM Central Controller to the IP IVR or ISN PG, and once for the IP IVR
or ISN PG to the IP IVR or ISN.

PG to PG Test Other Side


PG-to-PG public communication consists of minimal "Test other Side" messages and consumes
approximately 10 kbps of bandwidth per PG pair.

CTI Server to CTI OS


The worst case for bandwidth utilization across the WAN link between the CTI OS and CTI Server
occurs when the CTI OS is remote from the CTI Server. A bandwidth queue should be used to guarantee
availability for this worst case.
For this model, the following simple formula can be used to compute worst-case bandwidth
requirements:
• With no Extended Call Context (ECC) or Call Variables:
BHCA ∗ 20 = bps
• With ECC and/or Call Variables:
BHCA ∗ (20 + ((Number of Variables ∗ Average Variable Length) / 40)) = bps
Example: With 10,000 BHCA and 20 ECC variables of average length 40:
10,000 ∗ (20 + ((20 ∗ 40) / 40)) = 10,000 ∗ 40 = 400,000 bps = 400 kbps
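The two formulas and the worked example can be sketched as a small calculation. This is an illustrative helper only; the function name and defaults are assumptions, not part of any Cisco tool.

```python
def ctios_wan_bandwidth_bps(bhca, num_variables=0, avg_variable_length=0):
    """Worst-case CTI OS <-> CTI Server WAN bandwidth, in bps.

    bhca: busy-hour call attempts
    num_variables / avg_variable_length: ECC and call variable usage;
    with no variables this reduces to BHCA * 20.
    """
    per_call_bps = 20 + (num_variables * avg_variable_length) / 40
    return bhca * per_call_bps

# Example from the text: 10,000 BHCA, 20 ECC variables of average length 40
print(ctios_wan_bandwidth_bps(10_000, 20, 40))  # 400000.0 bps = 400 kbps
```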

Cisco CallManager Intra-Cluster Communications (ICC)


The Cisco IP Telephony Solution Reference Network Design (SRND) guide states that 900 kbps must be
reserved for every 10,000 BHCA. This amount is significantly higher with IPCC due to the number of
call redirects and additional CTI/JTAPI communications encompassed in the intra-cluster
communications.
The bandwidth that must be reserved is approximately 2,000 kbps (2 Mbps) per 10,000 BHCA. This
requirement assumes proper design and deployment based on the recommendations in this IPCC design
guide. Inefficient design (such as "ingress calls to Site 1 are treated in Site 2") will cause additional
intra-cluster communications, possibly exceeding the defined requirements.
More specifically, you can use the following formula to calculate the required intra-cluster bandwidth:
BHCA ∗ 200 = bps
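The 2,000 kbps per 10,000 BHCA guideline reduces to 200 bps per busy-hour call attempt, which can be sketched as follows; the function name is illustrative.

```python
def icc_wan_bandwidth_bps(bhca):
    """IPCC intra-cluster communication (ICC) bandwidth across the HA WAN.

    Approximately 2,000 kbps per 10,000 BHCA, i.e. 200 bps per BHCA.
    """
    return bhca * 200

print(icc_wan_bandwidth_bps(10_000))  # 2000000 bps = 2 Mbps
```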

ICM Private WAN


Table 2-1 is a worksheet to assist with computing the link and queue sizes. Definitions and examples
follow the table.

Note Minimum link size in all cases is 1.5 Mbps (T1).


Table 2-1 Worksheet for Calculating Private Bandwidth

Component                 Effective BHCA   Link Factor                        Queue Factor
Router + Logger           ______           ∗ 30                               ∗ 0.8
Cisco CallManager PG      ______           ∗ 100                              ∗ 0.9
IP IVR PG                 ______           ∗ 60                               ∗ 0.9
ISN PG                    ______           ∗ 120                              ∗ 0.9
IP IVR or ISN Variables   ______           ∗ ((Number of Variables ∗          ∗ 0.9
                                           Average Variable Length) / 40)

For each row, the Recommended Link size (in bps) is the Effective BHCA multiplied by the Link Factor,
and the Recommended Queue size is the Recommended Link size multiplied by the Queue Factor. The
Router + Logger queue value is the Router/Logger High-Priority Queue size. Add the queue values for
the remaining rows together to get the PG High-Priority Queue size, and add all Recommended Link
values together to get the Total Link Size.

If one dedicated link is used between sites for private communication, add all link sizes together and use
the Total Link Size at the bottom of Table 2-1. If separate links are used, one for Router/Logger Private
and one for PG Private, use the first row for the Router/Logger requirements, and add the remaining
applicable rows (Cisco CallManager PG, either IP IVR PG or ISN PG, and IP IVR or ISN Variables)
together for the PG Private requirements.
Effective BHCA (effective load) on all similar components that are split across the WAN is defined as
follows:
• Router + Logger
This value is the total BHCA on the call center, including conferences and transfers. For example,
10,000 BHCA ingress with 10% conferences or transfers would be 11,000 Effective BHCA.
• Cisco CallManager PG
This value includes all calls that come through ICM Route Points controlled by Cisco CallManager
and/or that are ultimately transferred to agents. This assumes that each call comes into a route point
and is eventually sent to an agent. For example, 10,000 BHCA ingress calls coming into a route
point and being transferred to agents, with 10% conferences or transfers, would be 11,000 effective
BHCA.
• IP IVR PG
This value is the total BHCA for call treatment and queuing. For example, 10,000 BHCA ingress
calls, with all of them receiving treatment and 40% being queued, would be 14,000 effective BHCA.
• ISN PG
This value is the total BHCA for call treatment and queuing coming through an ISN. 100% treatment
is assumed in the calculation. For example, 10,000 BHCA ingress calls, with all of them receiving
treatment and 40% being queued, would be 14,000 effective BHCA.
• IP IVR or ISN Variables
This value represents the number of Call and ECC variables and the variable lengths associated with
all calls routed through the IP IVR or ISN, whichever technology is used in the implementation.
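The effective-BHCA definitions above are simple arithmetic and can be sketched as follows. The helper names are illustrative, and the inputs mirror the examples in the bullets (10,000 ingress BHCA, 10% transfers or conferences, 100% treatment, 40% queuing).

```python
def effective_bhca_router_logger(ingress_bhca, transfer_fraction):
    """Total BHCA on the call center, including conferences and transfers."""
    return ingress_bhca + ingress_bhca * transfer_fraction

def effective_bhca_vru(ingress_bhca, treated_fraction, queued_fraction):
    """Treatment plus queuing legs handled by IP IVR or ISN."""
    return ingress_bhca * treated_fraction + ingress_bhca * queued_fraction

print(round(effective_bhca_router_logger(10_000, 0.10)))  # 11000
print(round(effective_bhca_vru(10_000, 1.00, 0.40)))      # 14000
```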


Example of a Private Bandwidth Calculation


Table 2-2 shows an example calculation for a combined dedicated private link with the following
characteristics:
• BHCA coming into the contact center is 10,000.
• 100% of calls are treated by IP IVR and 40% are queued.
• All calls are sent to agents unless abandoned. 10% of calls to agents are transfers or conferences.
• There are four IP IVRs used to treat and queue the calls, with one PG pair supporting them.
• There is one Cisco CallManager PG pair for a total of 900 agents.
• Calls have ten 40-byte Call Variables and ten 40-byte ECC variables.

Table 2-2 Example Calculation for a Combined Dedicated Private Link

Component                 Effective BHCA   Link Factor   Recommended Link   Queue Factor   Recommended Queue
Router + Logger           11,000           ∗ 30          330,000            ∗ 0.8          297,000
Cisco CallManager PG      11,000           ∗ 100         1,100,000          ∗ 0.9          880,000
IP IVR PG                 14,000           ∗ 60          840,000            ∗ 0.9          756,000
ISN PG                    0                ∗ 120         0                  ∗ 0.9          0
IP IVR or ISN Variables   14,000           ∗ 20          280,000            ∗ 0.9          252,000
Total Link Size                                          2,550,000                         1,888,000

The Router + Logger queue value (297,000 bps) is the Router/Logger High-Priority Queue size. The
Cisco CallManager PG, IP IVR PG, and Variables queue values add up to the PG High-Priority Queue
size of 1,888,000 bps. The Variables Link Factor of ∗ 20 comes from ((20 variables ∗ 40 bytes) / 40).

For the combined dedicated link in this example, the results are as follows:
• Total Link = 2,550,000 bps
• Router/Logger high-priority bandwidth queue of 297,000 bps
• PG high-priority bandwidth queue of 1,888,000 bps
If this example were implemented with two separate links, Router/Logger private and PG private, the
link sizes and queues would be as follows:
• Router/Logger link of 330,000 bps (actual minimum link is 1.5 Mb, as defined earlier), with
high-priority bandwidth queue of 297,000 bps
• PG link of 2,220,000 bps, with high-priority bandwidth queue of 1,888,000 bps
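The worksheet's link-sizing arithmetic might be sketched as follows, with the T1 minimum applied. The dictionary keys and function name are illustrative assumptions; queue sizing is omitted.

```python
# Link factors (bps per effective BHCA) from the Table 2-1 worksheet
LINK_FACTORS = {
    "router_logger": 30,
    "callmanager_pg": 100,
    "ipivr_pg": 60,
    "isn_pg": 120,
}

def private_link_size_bps(effective_bhca, variables_factor=0):
    """Combined private-link size per the Table 2-1 worksheet.

    effective_bhca: dict keyed like LINK_FACTORS, plus an optional
    'variables' entry for the IP IVR or ISN Variables row.
    variables_factor: (number of variables * average length) / 40
    """
    total = sum(factor * effective_bhca.get(component, 0)
                for component, factor in LINK_FACTORS.items())
    total += effective_bhca.get("variables", 0) * variables_factor
    return max(total, 1_544_000)  # minimum link size is a T1 (~1.5 Mbps)

# Worked example from Table 2-2
loads = {"router_logger": 11_000, "callmanager_pg": 11_000,
         "ipivr_pg": 14_000, "isn_pg": 0, "variables": 14_000}
print(private_link_size_bps(loads, variables_factor=(20 * 40) / 40))  # 2550000.0
```

For a separate Router/Logger-only link, the same function returns the T1 floor, since 11,000 ∗ 30 = 330,000 bps falls below the 1.5 Mbps minimum.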


Failure Analysis of IPCC Clustering Over the WAN


This section describes the behavior of clustering over the WAN for IPCC during certain failure
situations. The stability of the highly available (HA) WAN is extremely critical in this deployment
model, and failure of the highly available WAN is considered outside the bounds of what would
normally happen.
For illustrations of the deployment models described in this section, refer to the figures shown
previously for Clustering Over the WAN, page 2-15.

Entire Central Site Loss


Loss of the entire central site is defined as the loss of all communications with a central site, as if the
site were switched off. This can result from natural disasters, power issues, major connectivity issues,
and human error, among other things. If a central site retains some but not all connectivity, it is not
considered a site loss but rather a partial connectivity loss, and this scenario is covered in subsequent
sections.
When an entire central site is lost in an IPCC clustering-over-the-WAN deployment, remote agents fail
over properly to the redundant site. Failover times can range from 1 to 60 seconds for agents; variations
are due to agent count, phone registration location, and the agent desktop server used.
When using distributed VXML gateways and ISN, the gateways must fail over from one site to another
if their primary site is lost. This failover takes approximately 30 seconds, and calls coming in to the
remote gateways during those 30 seconds will be lost.

Private Connection Between Site 1 and Site 2


If the private connection between ICM Central Controller sides A and B should fail, one ICM Router
will go out-of-service and the other ICM Router will then be running in simplex mode until the link is
reinstated. This situation will not cause any call loss or failure.
If the private connection between PG side A and PG side B should fail, the non-active PG will go
out-of-service, causing the active PG to run in simplex mode until the link is reinstated. This situation
will not cause any call loss or failure.
When using a combined private link, ICM Central Controller and PG private connections will be lost if
the link is lost. This will cause both components to switch to simplex mode as described above. This
situation will not cause any call loss or failure.

Connectivity to Central Site from Remote Agent Site


If connectivity to one of the central sites is lost from a remote agent site, all phones and agent desktops
will immediately switch to the second central site and begin processing calls. Failover typically takes
between 1 and 60 seconds.

Highly Available WAN Failure


By definition, a highly available (HA) WAN should not fail under normal circumstances. If the HA
WAN is dual-path and fully redundant, as it should be, a failure of this type would be highly unusual.
This section discusses what happens in this unlikely scenario.


If the HA WAN is lost for any reason, the Cisco CallManager cluster becomes split. The primary result
from this occurrence is that ICM loses contact with half of the agent phones. ICM is in communication
with only half of the cluster and cannot communicate with or see any phones registered on the other half.
This causes ICM to immediately log out all agents with phones that are no longer visible. These agents
cannot log back in until the highly available WAN is restored or their phones are forced to switch cluster
sides.

Remote Agent Over Broadband


An IPCC enterprise might want to deploy its network with support for remote at-home agents using a
Cisco IP Phone. This section outlines the IPCC Remote Agent solution, which can be deployed using a
desktop broadband asymmetric digital subscriber line (ADSL) or cable connection as the remote
network.
The Cisco Voice and Video Enabled IPSec VPN (V3PN) ADSL or Cable connection uses a Cisco 830
Series router as an edge router to the broadband network. The Cisco 830 Series router provides the
remote agent with V3PN, Encryption, Network Address Translation (NAT), Firewall, IOS Intrusion
Detection System (IDS), and QoS on the broadband network link to the IPCC campus. Remote agent
V3PN aggregation on the campus is provided via LAN to LAN VPN routers.

Advantages
• Remote agent deployment results in money saved for a contact center enterprise, thereby increasing
return on investment (ROI).
• A remote agent can be deployed with standard IPCC agent desktop applications such as Cisco
CTI OS, Cisco Agent Desktop, or customer relationship management (CRM) desktops.
• This model works with ADSL or Cable broadband networks.
• The Broadband Agent Desktop "Always on" connection is a secure extension of the corporate LAN
in the home office.
• At-home agents have access to the same IPCC applications and most IPCC features in their home
office as when they are working at the IPCC Enterprise contact center, and they can access those
features in exactly the same way.
• This model provides high-quality voice using IP phones, with simultaneous data to the agent
desktop via existing broadband service.
• IPCC home agents and home family users can securely share broadband Cable and DSL
connections, with authentication of IPCC corporate users providing access to the VPN tunnel.
• The home agent solution utilizes the low-cost Cisco 831 Series router.
• This model supports dynamic IP addressing via Dynamic Host Configuration Protocol (DHCP) or
Point-to-Point Protocol over Ethernet (PPPoE).
• The Cisco 831 Series router provides VPN tunnel origination, Quality of Service (QoS) to the edge,
and Firewall (and other security functions), thus reducing the number of devices to be managed.
• The Remote Agent router can be centrally managed by the enterprise using a highly scalable and
flexible management product such as CiscoWorks.
• The remote agent solution is based on Cisco IOS VPN Routers for resiliency, high availability, and
a building-block approach to high scalability that can support thousands of home agents.
• All traffic, including data and voice, is encrypted with the Triple Data Encryption Standard (3DES).

• This model can be deployed as part of an existing Cisco CallManager installation.
• Home agents can have the same extension type as campus agents.

Best Practices
• Follow all applicable V3PN and Business Ready Teleworker design guidelines outlined in the
documentation available at:
http://www.cisco.com/go/teleworker
http://www.cisco.com/go/v3pn
http://www.cisco.com/go/srnd
• Configure remote agent IP phones to use G.729 with minimum bandwidth limits. Higher quality
voice can be achieved with the G.711 codec. The minimum bandwidth to support G.711 is 512 kbps
upload speed.
• Implement fault and performance management tools such as NetFlow, Service Assurance Agent
(SAA), and Internetwork Performance Monitor (IPM).
• Wireless access points are supported; however, their use is determined by the enterprise security
policies for remote agents.
• Only one remote agent per household is supported.
• Cisco recommends that you configure the conference bridge on a DSP hardware device. There is no
loss of conference voice quality using a DSP conference bridge. This is the recommended solution
even for pure IP Telephony deployments.
• The Remote Agent over Broadband solution is supported only with centralized IPCC and Cisco
CallManager clusters.
• There might be times when the ADSL or Cable link goes down. When the link is back up, you might
have to reset your ADSL or Cable modem, Cisco 831 Series router, and IP phone. This task will
require remote agent training.
• Only unicast Music on Hold (MoH) streams are supported.
• There must be a Domain Name System (DNS) entry for the remote agent desktop; otherwise, the
agent will not be able to connect to a CTI OS server. DNS entries can be updated dynamically or
entered statically.
• The remote agent's workstation and IP phone must be set up to use Dynamic Host Configuration
Protocol (DHCP).
• The remote agent’s PC requires Windows XP Pro for the operating system. In addition, XP Remote
Desktop Control must be installed.
• The Cisco 7960 IP Phone requires a power supply. The Cisco 831 Series router does not supply
power to the IP Phone.
• Home agent broadband bandwidth requires a minimum of 256 kbps upload speed and 1.4 Mbps
download speed for ADSL, and 1 Mbps download for Cable. Before actual deployment, make sure
that the bandwidth is correct. If you are deploying Cable, then take into account peak usage times.
If link speeds fall below the specified bandwidth, the home agent can encounter voice quality
problems such as clipping.
• Remote agent round-trip delay to the IPCC campus is not to exceed 180 ms for ADSL or 60 ms for
Cable. Longer delay times can result in voice jitter, conference bridge problems, and delayed agent
desktop screen pops.
• If the Music on Hold server is not set up to stream using a G.729 codec, then a transcoder must be
set up to enable outside callers to receive MoH.

• For Cisco Supervisor Desktop, supervisor functions such as silent monitoring, barge-in, intercept,
and voice recording are limited with regard to home agent IP phones. Cisco Agent Desktop
(Enterprise and Express) home and campus supervisors cannot voice-monitor home agents; they
can send and receive only text messages, see which home agents are online, and log them out.
• Desktop-based monitoring is not supported for IPCC Express with Cisco Agent Desktop.
Desktop-based monitoring is applicable only with IPCC Enterprise edition.
• CTI OS Supervisor home and campus supervisors can silently monitor, barge in, and intercept, but
not record home agents. CTI OS home and campus supervisors can send and receive text messages,
make an agent “ready,” and also log out home agents.
• Connect the agent desktop to the RJ45 port on the back of the IP phone. Otherwise, CTI OS
Supervisor will not be able to voice-monitor the agent phone.
• Only IP phones that are compatible with Cisco IPCC are supported. For compatibility information,
refer to the following documentation:
– Bill of Materials at
http://www.cisco.com/univercd/cc/td/doc/product/icm/ccbubom/index.htm
– Compatibility Matrix at
http://www.cisco.com/application/pdf/en/us/guest/products/ps1844/c1609/ccmigration_09186
a008031a0a7.pdf
– Release Notes for IPCC Express at
http://www.cisco.com/univercd/cc/td/doc/product/voice/sw_ap_to/apps_3_5/english/admn_ap
p/rn35_2.pdf
• You can test the broadband line speed at http://www.Broadbandreports.com, which benchmarks
the home agent's line speed (both upload and download) from a test server.
• The email alias for V3PN questions is: ask-ese-vpn@cisco.com.
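The numeric guidelines above (minimum upload and download speeds, maximum round-trip delay, and the 512 kbps upload floor for G.711) lend themselves to a simple pre-deployment check. A minimal sketch in Python, assuming the measured values come from a line-speed test; the function and dictionary names are illustrative, not part of any Cisco tool:

```python
# Pre-deployment check of a home agent's broadband link against the
# guidelines in this section. The thresholds below are taken from the
# text; the measured values would come from a line-speed test.

LIMITS = {
    # link type: (min upload kbps, min download kbps, max round-trip ms)
    "adsl": (256, 1400, 180),
    "cable": (256, 1000, 60),
}

def check_remote_agent_link(link_type, up_kbps, down_kbps, rtt_ms):
    """Return a list of problems; an empty list means the link qualifies."""
    min_up, min_down, max_rtt = LIMITS[link_type]
    problems = []
    if up_kbps < min_up:
        problems.append("upload below %d kbps minimum" % min_up)
    if down_kbps < min_down:
        problems.append("download below %d kbps minimum" % min_down)
    if rtt_ms > max_rtt:
        problems.append("round-trip delay above %d ms maximum" % max_rtt)
    return problems

def recommended_codec(up_kbps):
    """G.711 needs at least 512 kbps upload; otherwise use G.729."""
    return "G.711" if up_kbps >= 512 else "G.729"
```

For example, a Cable agent measuring 300 kbps up, 1.2 Mbps down, and 45 ms round-trip passes the link check but should be configured for G.729.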

Remote Agent with IP Phone Deployed via the Business Ready Teleworker
Solution
In this model, the remote agent’s IP phone and workstation are connected via the VPN tunnel to the main
IPCC campus. Customer calls routed to the remote agent are handled in the same manner as campus
agents. (See Figure 2-12.)

Figure 2-12 Remote Agent with IP Phone Deployed via the Business Ready Teleworker Solution

[Diagram: a Cisco IP phone and agent PC (CTI data) behind a Cisco 831 Series remote router and
broadband modem, connected through an encrypted VPN tunnel across the broadband Internet to a VPN
head-end router on the IPCC corporate network.]

Advantages
• High-speed broadband enables cost-effective office applications
• Site-to-site "always on" VPN connection
• Advanced security functions allow extension of the corporate LAN to the home office
• Supports full range of converged desktop applications, including CTI data and high-quality voice

Best Practices
• Minimum broadband speed supported is 256 kbps upload and 1.0 Mbps download for cable.
• Minimum broadband speed supported is 256 kbps upload and 1.4 Mbps download for ADSL.
• Agent workstation must have a 500 MHz processor and 512 MB of RAM or greater.
• IP phone must be configured to use G.729 at minimum broadband speeds; G.711 requires at least
512 kbps upload.
• QoS is enabled only at the Cisco 831 Router edge. Currently, service providers are not providing
QoS.
• Enable security features on the Cisco 831 Series router.
• The Cisco 7200 VXR and Catalyst 6500 IPSec VPN Services Module (VPNSM) offer the best
LAN-to-LAN performance for agents.
• The remote agent's home phone must be used for 911 calls.
• Redirect on no answer (RONA) should be used when a remote agent is logged in and ready but is
unavailable to pick up a call.
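The codec guidance above follows from per-call bandwidth arithmetic. A rough sketch using standard VoIP figures (20 ms packets, so 50 packets per second, and 40 bytes of IP/UDP/RTP headers); link-layer and IPSec tunnel overhead, which a V3PN connection adds on top, is deliberately omitted:

```python
# Approximate per-call bandwidth for the two codecs discussed in this
# section, before link-layer and IPSec/VPN overhead.

PAYLOAD_KBPS = {"G.711": 64, "G.729": 8}  # codec payload bit rates

def per_call_kbps(codec, packets_per_sec=50, header_bytes=40):
    """Payload rate plus IP/UDP/RTP header overhead, in kbps."""
    header_kbps = header_bytes * 8 * packets_per_sec / 1000
    return PAYLOAD_KBPS[codec] + header_kbps
```

This yields roughly 80 kbps for G.711 and 24 kbps for G.729 per direction, which is why G.711 is viable only on the faster (512 kbps upload) links.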

IPCC Outbound (Blended Agent) Option


The ability for agents to handle both inbound and outbound contacts offers a way to optimize contact
center resources. The Cisco Outbound Option extends ICM enterprise management to the
multi-functional contact center, giving managers who need outbound campaign solutions the
enterprise-wide view that Cisco ICM Enterprise and Cisco IPCC Enterprise maintain over agent
resources.

Description and Characteristics


• The IP Dialer uses virtual IP phones to place outbound calls through a voice gateway via the Cisco
CallManager. The Dialer is a pure software solution that does not require telephony cards.
• The Outbound Option solution consists of three main processes:
– The Campaign Manager process resides on the Side-A Logger and is responsible for sending
configuration and customer records to all the Dialers in the enterprise.
– The Import process is responsible for importing customer records and Do Not Call records into
the system.
– The Dialer processes, which place the outbound calls. Multiple Dialer processes may be
connected to the Campaign Manager server at multiple sites.
• A Media Routing PIM is required for each Dialer to reserve agents for outbound use.
• Outbound Option dialers maintain connections to the CTI Server (for agent CTI control), the
Campaign Manager (for configuration and customer records), Cisco CallManager (Skinny Client
Control Protocol connection for placing and transferring calls), and the Media Routing PG (to
reserve agents).

Sizing Information
• Maximum of 200 agents (any mixture of inbound and outbound agents) on a fully coresident
configuration (Outbound Option Dialer, IPCC PG, CTI Server, and CTI OS on a single server).
• Maximum of 300 agents (any mixture of inbound and outbound agents) on a PG when the CTI OS
on the PG server is configured for no more than 200 agents.
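The two sizing maximums above can be captured in a small validation helper. A sketch, with a hypothetical function name and the caveat that the 300-agent figure assumes the CTI OS on the PG server is itself configured for no more than 200 agents:

```python
def agents_within_limits(agent_count, fully_coresident):
    """Check an agent count (any mix of inbound and outbound) against the
    stated maximums: 200 on a fully coresident server (Outbound Option
    Dialer, IPCC PG, CTI Server, and CTI OS together), or 300 on a PG
    whose CTI OS is configured for no more than 200 agents."""
    limit = 200 if fully_coresident else 300
    return agent_count <= limit
```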
Figure 2-13 illustrates the model for the IPCC Outbound Option with more than 200 agents.

Figure 2-13 IPCC Outbound Option with More Than 200 Agents

[Diagram: duplexed ICM Peripheral Gateways 5A/5B (each with a Media Routing Blended Agent PIM)
and 2A/2B (each with a CallManager PIM and CTI Server) on private LANs with crossover cables,
CTI OS Servers 2A/2B, ICM Dialers 1 and 2, a Cisco CallManager cluster (publisher/TFTP, subscriber,
and backup subscriber), CRS 1 and 2 with 24 ports each, a Cisco router/voice gateway to the PSTN voice
network, an IP data WAN, and IPCC agent PCs with Cisco 7960 and 7940 phones on the converged
visible LAN.]

Advantages
The Cisco Outbound Option Dialer solution allows an agent to participate in outbound campaigns as well
as inbound calls by utilizing a pure software IP-based dialer.
In summary, the main benefits of the IPCC Outbound Option are:
• IPCC Outbound Option has enterprise-wide dialing capability, with IP Dialers placed at multiple
call center sites. The Campaign Manager server is located at the central site.
• This option provides centralized management and configuration via the ICM Admin workstation.
• IPCC Release 6.0 and later provide the Enhanced Call Progress Analysis feature, including
answering machine detection.
• This option provides true call-by-call blending of inbound and outbound calls.
• This option incorporates flexible outbound mode control by using the ICM script editor to control
type of outbound mode and percentage of agents within a skill to use for outbound activity.
• Transfer to IVR mode (agent-less campaigns) and Direct Preview mode are available in IPCC
Release 6.0 and later.
• This option provides integrated WebView reporting with outbound-specific reporting templates.

Best Practices
Follow these guidelines and best practices when implementing the IPCC Outbound Option:
• A media routing PG is required, and a media routing PIM is required for each Dialer.
• An IP Dialer may be installed on an IPCC PG server for a total blended agent count of 200 (either
inbound, outbound, or blended). Multiple Dialers located at a single peripheral do provide some
fault tolerance but are not a true hot-standby model.
• IP Dialers support only the G.711 audio codec for customer calls. Although outbound agents may
be placed within a region that uses the G.729 codec, the codec switchover can add up to 1.5 seconds
to the transfer time between customer and agent and is therefore not recommended.
• IP Dialers should be located in close proximity to the Cisco CallManager cluster where the Dialers
are registered.
• Using the Cisco Media Termination phones with the outbound option might introduce an additional
0.5 second delay in transferring customer calls to the agent.
• The following gateways have been tested with IPCC Outbound Option Dialers:
– Cisco AS5300, AS5350, and AS5400 Series
– Cisco 6608
• All Outbound Option dialers at a particular peripheral should have the same number of configured
ports.
• Outbound Option dialers perform a large number of call transfers, which increases the performance
load on the Cisco CallManager server. Ensure proper Cisco CallManager server sizing when
installing Outbound Option dialers. Also, proper Dialer call throttling should be enabled to prevent
overloading the Cisco CallManager server. For proper throttling values for your particular Cisco
CallManager server, refer to the Outbound Option Setup and Configuration Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6out/
For complete information on installing and configuring the Outbound software, see:
• Cisco ICM/IP Contact Center Enterprise Edition Outbound Option Setup and Configuration Guide
• Cisco ICM/IP Contact Center Enterprise Edition Outbound Option User Guide
Both of these documents are available online at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6out/

Traditional ACD Integration


For enterprises that want to integrate traditional ACDs with their IPCC deployment, several options
exist. For enterprises that want to load-balance calls between a traditional ACD site and an IPCC site, a
pre-routing Network Interface Controller (NIC) could be added. (See Figure 2-14.) This requires that the
ICM have a NIC that supports the PSTN service provider. In this scenario, the PSTN will query the ICM
Central Controller (via the NIC) to determine which site is best, and the ICM response back to the PSTN
will instruct the PSTN where (which site) to deliver the call. Any call data provided by the PSTN to the
ICM will be passed to the agent desktop (traditional ACD or IPCC).
In order to transfer calls between the two sites (ACD site and IPCC site), a PSTN transfer service could
be used. Use of a PSTN transfer service avoids any double trunking of calls at either site. An alternative
to using a PSTN transfer service is to deploy TDM voice circuits between the traditional ACD and IPCC
voice gateways. In that environment, any transfer of a call back to the original site will result in double
trunking between the two sites. Each additional transfer between sites will result in an additional TDM
voice circuit being utilized.

Figure 2-14 Integrating a Traditional ACD with an IPCC Site

[Diagram: the PSTN connects through a pre-routing NIC to the ICM Central Controller and delivers
calls either to a traditional ACD (with its PG/CTI server) or to the IPCC site (IP IVR, PG/CTI server,
CallManager, and IP phones with IPCC agent desktops); the legend distinguishes TDM voice, IP voice,
and CTI/call control data paths.]

An alternative to pre-routing calls from the PSTN is to have the PSTN deliver calls to just one site or to
split the calls across the two sites according to some set of static rules provisioned in the PSTN. When
the call arrives at either site, either the traditional ACD or the Cisco CallManager will generate a route
request to the ICM to determine which site is best for this call. If the call needs to be delivered to an
agent at the opposite site from where the call was originally routed, then TDM circuits between sites will
be required. Determination of where calls should be routed, and if and when they should be transferred
between sites, will depend upon the enterprise business environment, objectives, and cost components.

Traditional IVR Integration


There are numerous ways that traditional IVRs can be integrated into an IPCC deployment.
Determination of which way is best will depend upon many factors that are discussed in the following
sections. The primary consideration, though, is determining how to eliminate or reduce IVR double
trunking when transferring the call from the IVR.

Using PBX Transfer


Many call centers have existing traditional IVR applications that they are not prepared to rewrite. To
preserve these IVR applications while integrating them into an IPCC environment, the IVR must
have an interface to the ICM. (See Figure 2-15.)
There are two versions of the IVR interface to the ICM. One is simply a post-routing interface, which
just allows the IVR to send a post-route request with call data to the ICM. The ICM will return a route
response instructing the IVR to transfer the call elsewhere. In this scenario, the traditional IVR will
invoke a PBX transfer to release its port and transfer the call into the IPCC environment. Any call data
passed from the IVR will be passed by the ICM to the agent desktop or IP IVR.
The other IVR interface to the ICM is the Service Control Interface (SCI). The SCI allows
the IVR to receive queuing instructions from the ICM. In the PBX model, the SCI is not required.
Even if the IVR has the SCI interface, Cisco still recommends that you deploy IP IVR for all call queuing
because this prevents any additional utilization of the traditional IVR ports. In addition, use of the
IP IVR for queuing provides a way to requeue calls on subsequent transfers or RONA treatment.

Figure 2-15 Traditional IVR Integration Using PBX Transfer

[Diagram: a traditional IVR behind a PBX, with an IVR PG to the ICM Central Controller; the PSTN
also connects to the IPCC site (IP IVR, PG/CTI server, CallManager, and IP phones with IPCC agent
desktops), with TDM voice, IP voice, and CTI/call control data paths shown.]

Using PSTN Transfer


This model is very similar to the previous model, except that the IVR invokes a PSTN transfer (instead
of a PBX transfer) so that the traditional IVR port can be released. (See Figure 2-16.) Again, the IP IVR
would be used for all queuing so that any additional occupancy of the traditional IVR ports is not
required and also so that any double trunking in the IVR is avoided. Any call data collected by the
traditional IVR application will be passed by the ICM to the agent desktop or IP IVR.

Figure 2-16 Traditional IVR Integration Using PSTN Transfer

[Diagram: a traditional IVR with an IVR PG to the ICM Central Controller; the PSTN transfers calls to
the IPCC site (IP IVR, PG/CTI server, CallManager, and IP phones with IPCC agent desktops), with
TDM voice, IP voice, and CTI/call control data paths shown.]

Using IVR Double Trunking


If your traditional IVR application has a very high success rate, where most callers are completely
self-served in the traditional IVR and only a very small percentage of callers ever need to be transferred
to an agent, then it might be acceptable to double-trunk the calls in the traditional IVR for that small
percentage of calls. (See Figure 2-17.) Unlike the previous model, if the traditional IVR has an SCI
interface, then the initial call queuing could be done on the traditional IVR. The reason this is beneficial
is that, in order to queue the call on the IP IVR, a second traditional IVR port would be used. By
performing the initial queuing on the traditional IVR, only one traditional IVR port is used during the
initial queuing of the call. However, any subsequent queuing as a result of transfers or RONA treatment
must be done on the IP IVR to avoid any double trunking. If the traditional IVR does not have an SCI
interface, then the IVR will just generate a post-route request to the ICM to determine where the call
should be transferred. All queuing in that scenario would have to be done on the IP IVR.

Figure 2-17 Traditional IVR Integration Using IVR Double Trunking

[Diagram: a traditional IVR with an IVR PG to the ICM Central Controller, double-trunked through the
PSTN to the IPCC site (IP IVR, PG/CTI server, CallManager, and IP phones with IPCC agent desktops),
with TDM voice, IP voice, and CTI/call control data paths shown.]

Using Cisco CallManager Transfer and IVR Double Trunking


Over time, it might become desirable to migrate the traditional IVR applications to the IP IVR. However,
if a small percentage of traditional IVR applications still exist for very specific scenarios, then the IVR
could be connected to a second voice gateway. (See Figure 2-18.) Calls arriving at the voice gateway
from the PSTN would be routed by Cisco CallManager. Cisco CallManager could route specific DNs to
the traditional IVR or let the ICM or IP IVR determine when to transfer calls to the traditional IVR. If
calls in the traditional IVR need to be transferred to an IPCC agent, then a second IVR port, trunk, and
voice gateway port would be used for the duration of the call. Care should be taken to ensure that transfer
scenarios do not allow multiple loops to be created because voice quality could suffer.

Figure 2-18 Traditional IVR Integration Using Cisco CallManager Transfer and IVR Double Trunking

[Diagram: a traditional IVR with an IVR PG to the ICM Central Controller, connected behind the IPCC
site's voice gateways; the PSTN reaches Cisco CallManager, IP IVR, the PG/CTI server, and IP phones
with IPCC agent desktops, with TDM voice, IP voice, and CTI/call control data paths shown.]

Chapter 3
Design Considerations for High Availability

This chapter covers several possible IPCC failover scenarios and explains design considerations for
providing high availability of system functions and features in each of those scenarios. This chapter
contains the following sections:
• Designing for High Availability, page 3-1
• Data Network Design Considerations, page 3-5
• Cisco CallManager and CTI Manager Design Considerations, page 3-7
• IP IVR (CRS) Design Considerations, page 3-11
• Internet Service Node (ISN) Design Considerations, page 3-13
• Multi-Channel Design Considerations (Cisco Email Manager Option and Cisco Collaboration
Server Option), page 3-15
• Cisco Email Manager Option, page 3-17
• Cisco Collaboration Server Option, page 3-18
• Cisco IPCC Outbound Option Design Considerations, page 3-19
• Peripheral Gateway Design Considerations, page 3-20
• Understanding Failure Recovery, page 3-31
• CTI OS Considerations, page 3-38
• Cisco Agent Desktop Considerations, page 3-39
• Other Considerations, page 3-39

Designing for High Availability


Cisco IPCC is a distributed solution that uses numerous hardware and software components, and it is
important to design each deployment in such a way that a failure will impact the fewest resources in the
call center. The type and number of resources impacted will depend on how stringent your requirements
are and which design characteristics you choose for the various IPCC components, including the
network infrastructure. A good IPCC design will be tolerant of most failures (defined later in this
section), but not all failures can be made transparent.
Cisco IPCC is a sophisticated solution designed for mission-critical call centers. The success of any
IPCC deployment requires a team with experience in data and voice internetworking, system
administration, and IPCC application configuration.


Before implementing IPCC, use careful preparation and design planning to avoid costly upgrades or
maintenance later in the deployment cycle. Always design for the worst possible failure scenario, with
future scalability in mind for all IPCC sites.
In summary, plan ahead and follow all the design guidelines and recommendations presented in this
guide and in the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
For assistance in planning and designing your IPCC solution, consult your Cisco or certified Partner
Systems Engineer (SE).
Figure 3-1 shows a high-level design for a fault-tolerant IPCC single-site deployment.

Figure 3-1 IPCC Single-Site Design for High Availability

[Diagram: a fully duplicated site with two voice gateways on T1 lines from the public network,
redundant IDF and MDF switches, gatekeepers, a firewall to the corporate LAN, a Cisco CallManager
cluster (publisher plus subscribers 1 and 2), an IP IVR group (IP IVR 1 and 2), duplexed CallManager
and VRU PGs, CTI servers and CTI OS servers A and B, Admin Workstations A and B, redundant ICM
central controllers, a WebView reporting client, and IPCC agent PCs and IP phones on the IDF
switches.]

In Figure 3-1, each component in the IPCC site is duplicated for redundancy and connected to all of its
primary and backup servers, with the exception of the intermediate distribution frame (IDF) switch for
the IPCC agents and their phones. The IDF switches do not interconnect with each other, but only with
the main distribution frame (MDF) switches, because it is better to distribute the agents among different
IDF switches for load balancing and for geographic separation (for example, different building floors or
different cities). If an IDF switch fails, all calls should be routed to other available agents in a separate
IDF switch or to an IP IVR (CRS) queue. Follow the design recommendations for a single-site
deployment as documented in the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd
If designed correctly for high availability and load balancing, any IPCC site can lose half of its systems
and still be operational. With this type of design, no matter what happens in the IPCC site, each call
should be handled in one of the following ways:
• Routed and answered by an available IPCC agent
• Sent to an available IP IVR (CRS) or ISN port
• Answered by the Cisco CallManager AutoAttendant
• Prompted by an IP IVR (CRS) or ISN announcement that the call center is currently experiencing
technical difficulties, and to call back later
The components in Figure 3-1 can be rearranged to form two connected IPCC sites, as illustrated in
Figure 3-2.


Figure 3-2 IPCC Single-Site Redundancy

[Diagram: the single-site components split into mirror-image sides A and B; each side has a voice
gateway, IDF switch, MDF switch, CallManager subscriber, IP IVR, CallManager PG, VRU PG, CTI OS
server, CTI server, Admin Workstation, ICM central controller, WebView reporting client, and agent PC
with IP phone, with the CallManager publisher shared between the sides.]

Figure 3-2 emphasizes the redundancy of the single site design in Figure 3-1. Side A and Side B are
basically mirror images of each other. In fact, one of the main IPCC features to enhance high availability
is its simple mechanism for converting a site from non-redundant to redundant. To implement IPCC high
availability, all you have to do is duplicate the first side and cross-connect all the corresponding parts.
The following sections use Figure 3-1 as the model design to discuss issues and features that you should
consider when designing IPCC for high availability. These sections use a bottom-up model (from a
network model perspective, starting with the physical layer first) that divides the design into segments
that can be deployed in separate stages.
Cisco recommends using only duplex (redundant) Cisco CallManager, IP-IVR/ISN, and ICM
configurations for all IPCC deployments that require high availability. This chapter assumes that the
IPCC failover feature is a critical requirement for all deployments, and therefore it presents only
deployments that use a redundant (duplex) Cisco CallManager configuration, with each Cisco
CallManager cluster having at least one publisher and one subscriber. Additionally, where possible,
deployments should follow the best practice of having no devices, call processing, or CTI Manager
services running on the Cisco CallManager publisher.

Data Network Design Considerations


The IPCC design shown in Figure 3-3 starts from a time division multiplexing (TDM) call access point
and ends where the call reaches an IPCC agent. The underlying network infrastructure carries both data
and voice traffic for the IPCC environment. The network, including the PSTN, is the foundation of the
IPCC solution; if it is poorly designed to handle failures, everything in the call center is prone to failure
because all the servers and network devices depend on the network for communication. Therefore, the
data and voice networks must be a primary part of your solution design and the first stage for all IPCC
implementations.
In addition, the choice of voice gateways for a deployment is critical because some protocols offer more
call resiliency than others. This chapter focuses on how the voice gateways should be configured for high
availability with the Cisco CallManager cluster(s).
For more information on voice gateways and voice networks in general, refer to the Cisco IP Telephony
Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd

Figure 3-3 High Availability in a Network with Two Voice Gateways and One Cisco CallManager Cluster

[Diagram: two voice gateways on T1 lines from the public network, each connected to redundant IDF
and MDF switches, with gatekeepers, a firewall to the corporate LAN, and a Cisco CallManager cluster
(publisher, subscribers 1 and 2); the legend distinguishes TDM voice lines, Ethernet lines, and call
control/CTI data/IP messaging paths.]

Using multiple voice gateways avoids the problem of a single gateway failure causing blockage of all
calls. In a configuration with two voice gateways and one Cisco CallManager cluster, each gateway
should register with a different primary Cisco CallManager to spread the workload across the Cisco
CallManagers in the cluster. Each gateway should use the other Cisco CallManager as a backup in case
its primary Cisco CallManager fails. Refer to the Cisco IP Telephony Solution Reference Network
Design (SRND) guide (available at http://www.cisco.com/go/srnd) for details on setting up Cisco
CallManager redundancy groups for backup.


When calculating the number of trunks from the PSTN, ensure that enough trunks remain available to
handle the maximum busy hour call attempts (BHCA) even when one or more voice gateways fail. During the design
phase, first decide how many simultaneous voice gateway failures are acceptable for the site. Based upon
this requirement, the number of voice gateways used, and the distribution of trunks across those voice
gateways, you can determine the number of trunks required. The more you distribute the trunks over
multiple voice gateways, the fewer trunks you will need. However, using more voice gateways will
increase the cost of that component of the solution, so you should compare the annual operating cost of
the trunks (paid to the PSTN provider) against the one-time fixed cost of the voice gateways.
For example, assume the call center has a maximum BHCA that results in the need for four T1 lines, and
the company has a requirement for no call blockage in the event of a single component (voice gateway)
failure. If two voice gateways are deployed in this case, then each voice gateway should be provisioned
with four T1 lines (total of eight). If three voice gateways are deployed, then two T1 lines per voice
gateway (total of six) would be enough to achieve the same level of availability. If five voice gateways
are deployed, then one T1 per voice gateway (total of five) would be enough to achieve the same level
of availability. Thus, you can reduce the number of T1 lines required by adding more voice gateways.
The operational cost savings of fewer T1 lines may be greater than the one-time capital cost of additional
voice gateways. In addition to the recurring operational costs of the T1 lines, you should also factor in
the one-time installation cost of the T1 lines to ensure that your design accounts for the most
cost-effective solution. Every installation has different availability requirements and cost metrics, but
using multiple voice gateways is often more cost-effective. Therefore, it is a worthwhile design practice
to perform this cost comparison.
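The trunk arithmetic in this example can be captured in a small sizing sketch. This is illustrative only: it assumes trunks are distributed evenly across gateways and ignores Erlang-based trunk engineering.

```python
import math

def t1s_needed(required_t1s: int, gateways: int, max_failures: int = 1):
    """Return (T1s per gateway, total T1s) such that the surviving
    gateways still carry the full busy-hour load after max_failures
    simultaneous gateway failures, with trunks spread evenly."""
    surviving = gateways - max_failures
    if surviving < 1:
        raise ValueError("at least one gateway must survive")
    per_gateway = math.ceil(required_t1s / surviving)
    return per_gateway, per_gateway * gateways

# The worked example from the text: 4 T1 lines of busy-hour capacity,
# no call blockage after a single voice gateway failure.
for g in (2, 3, 5):
    per_gw, total = t1s_needed(4, g)
    print(f"{g} gateways: {per_gw} T1(s) each, {total} T1s total")
```

Running this reproduces the figures in the example: 2 gateways need 4 T1s each (8 total), 3 gateways need 2 each (6 total), and 5 gateways need 1 each (5 total).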
After you have determined the number of trunks needed, the PSTN service provider has to configure
them in such a way that calls can be terminated onto trunks connected to all of the voice gateways (or at
least more than one voice gateway). From the PSTN perspective, if the trunks going to the multiple voice
gateways are configured as a single large trunk group, then all calls will automatically be routed to the
surviving voice gateways when one voice gateway fails. If all of the trunks are not grouped into a single
trunk group within the PSTN, then you must ensure that PSTN re-routing or overflow routing to the other
trunk groups is configured for all dialed numbers.
If a voice gateway with a digital interface (T1 or E1) fails, then the PSTN automatically stops sending
calls to that voice gateway because the carrier level signaling on the digital circuit has dropped. Loss of
carrier level signaling causes the PSTN to busy out all trunks on that digital circuit, thus preventing the
PSTN from routing new calls to the failed voice gateway. When the failed voice gateway comes back
on-line and the circuits are back in operation, the PSTN automatically starts delivering calls to that voice
gateway again.
Because the voice gateways register with a primary Cisco CallManager, an increase in the amount of
traffic on a given voice gateway will result in more traffic being handled by its primary Cisco
CallManager. Therefore, when sizing the Cisco CallManager servers, plan for the possible failure of a
voice gateway and calculate the maximum number of trunks that may be in use on the remaining voice
gateways registered with each CallManager server.
With standalone voice gateways, it is possible for the voice gateway itself to be operational while its
communication paths to the Cisco CallManager servers are severed (for example, by a failed Ethernet
connection). For an H.323 gateway, you can guard against this condition with the busyout-monitor
interface voice-port configuration command, which monitors an Ethernet interface on the voice gateway
and places the voice port into a busyout monitor state. To remove the busyout monitor state from the
voice port, use the no form of this command.
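A configuration sketch of this command might look like the following. This is illustrative only: the voice-port and interface names are placeholders, and the exact command syntax varies by IOS release and platform, so verify it against the command reference for your gateway.

```
! Busy out this T1 voice port whenever the monitored Ethernet
! interface goes down, so the PSTN stops routing calls to this gateway.
voice-port 1/0:23
 busyout-monitor interface FastEthernet0/0
 exit
! The "no" form removes the busyout monitor state:
! no busyout-monitor interface FastEthernet0/0
```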
When the voice gateway interface to the switch fails, the voice gateway automatically busies out all its
trunks. This prevents new calls from being routed to this voice gateway from the PSTN. Calls in progress
do not survive because the Real-Time Transport Protocol (RTP) stream connection no longer exists.
Parties at both ends of the line receive silence and, after a configurable timeout, calls are cleared. You
can set the Transmission Control Protocol (TCP) timeout parameter in the voice gateway, and you can
also set a default timeout in Cisco CallManager. The calls are cleared by whichever timeout expires first.
When the voice gateway interface to the switch recovers, the trunks are automatically idled and the
PSTN should begin routing calls to this voice gateway again (assuming the PSTN has not permanently
busied out those trunks).

Cisco CallManager and CTI Manager Design Considerations


Cisco CallManager Release 3.3(x) and later handles all of its CTI resources through CTI Manager, a
service that acts as an application broker and abstracts the physical binding of an application to a
particular Cisco CallManager server. (Refer to the Cisco IP Telephony Solution Reference Network Design
(SRND) guide for further details about the architecture of the CTI Manager.) The CTI Manager and
Cisco CallManager are two separate services running on a Cisco CallManager server. Other
services running on a Cisco CallManager server include the TFTP, Cisco Messaging Interface, and
Real-time Information Server (RIS) data collector services.
The main function of the CTI Manager is to accept messages from external CTI applications and send
them to the appropriate resource in the Cisco CallManager cluster. The CTI Manager uses the Cisco
JTAPI link to communicate with the applications. It acts like a JTAPI messaging router. The JTAPI
client library in Cisco CallManager Release 3.1 and above connects to the CTI Manager instead of
connecting directly to the Cisco CallManager service, as in releases prior to 3.1. In addition, there can
be multiple CTI Manager services running on different Cisco CallManager servers in the cluster that are
aware of each other (via the Cisco CallManager service, which is explained later in this section). The
CTI Manager uses the same Signal Distribution Layer (SDL) signaling mechanism that the Cisco
CallManager services in the cluster use to communicate with each other. However, the CTI Manager
does not directly communicate with the other CTI Managers in its cluster. (This is also explained later
in detail.)
The main function of the Cisco CallManager service is to register and monitor all the IP telephony
devices. It essentially acts as a switch for all the IP telephony resources and devices in the system, while
the CTI Manager service acts as a router for all the CTI application requests for those devices.
Devices that register with the Cisco CallManager service and can be controlled through JTAPI
include IP phones, CTI ports, and CTI route points.
Figure 3-4 illustrates some of the functions of Cisco CallManager and the CTI Manager.


Figure 3-4 Functions of Cisco CallManager and the CTI Manager

(The figure shows the ICM PG and an IVR connected over JTAPI to the CTI Manager services on two subscribers, while the publisher and subscribers communicate among themselves over SDL. Voice gateways register with Cisco CallManager using MGCP or H.323, and IP phones and softphones register using the Skinny Client Control Protocol (SCCP).)

The servers in a Cisco CallManager cluster communicate with each other using the Signal Distribution
Layer (SDL) service. SDL signaling is used only by the Cisco CallManager service to talk to the other
Cisco CallManager services to make sure everything is in sync within the Cisco CallManager cluster.
The CTI Managers in the cluster are completely independent and do not establish a direct connection
with each other. Each CTI Manager routes external CTI application requests to the appropriate
devices serviced by the local Cisco CallManager service on its subscriber. If the device is not resident
on the local Cisco CallManager subscriber, the local Cisco CallManager service forwards the application
request to the appropriate Cisco CallManager in the cluster. Figure 3-5 shows the flow of a device
request to another Cisco CallManager in the cluster.

Figure 3-5 CTI Manager Device Request to a Remote Cisco CallManager

(The figure shows the ICM PG requesting control of IPCC agent extension 101 through the CTI Manager on one subscriber. The device is not on that subscriber's local Cisco CallManager, so the request is forwarded to the other subscriber in the cluster, where the device, IP phone extension 101, is found.)


It is important to load-balance devices and CTI applications evenly across all the nodes in the Cisco
CallManager cluster.
The external CTI applications use a JTAPI user account on the CTI Manager to establish a connection
and assume control of the Cisco CallManager devices registered to this JTAPI user. In addition, given
that the CTI Managers are independent from each other, any CTI application can connect to any CTI
Manager to perform its requests. However, because the CTI Managers are independent, one CTI
Manager cannot pass the CTI application to another CTI Manager upon failure. If the first CTI Manager
fails, the external CTI application must implement the failover mechanism to connect to another CTI
Manager in the cluster. For example, the Voice Response Unit (VRU) Peripheral Gateway (PG) allows
the administrator to specify two CTI Managers, primary and secondary, in its JTAPI subsystem. The Cisco
CallManager PG handles CTI Manager failover by using its two sides, A and B, which both
log in with the same JTAPI user, each to its own CTI Manager, at initialization. However, to conserve
system resources in the Cisco CallManager cluster, only one Cisco CallManager PG side registers and
monitors the user's devices at a time. The other side of the Cisco CallManager or VRU PG stays
in hot-standby mode, waiting to be activated immediately upon failure of the active side.
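The failover responsibility described above, where the application rather than the cluster must move to another CTI Manager, can be sketched as follows. This is a simplified illustration: the host names and port are placeholder assumptions, and in a real deployment the JTAPI client library performs this reconnection internally.

```python
def connect_with_failover(cti_managers, connect):
    """Try each CTI Manager address in order and return the first live
    session, mirroring the primary/secondary failover the VRU PG uses.
    `connect` is any callable that opens a session to (host, port) and
    raises OSError on failure (e.g. socket.create_connection)."""
    last_error = None
    for host, port in cti_managers:
        try:
            return connect((host, port))
        except OSError as err:
            last_error = err  # this CTI Manager is unreachable; try the next
    raise ConnectionError(f"no CTI Manager reachable: {last_error}")

# Placeholder primary and secondary CTI Managers for one PG side.
managers = [("ccm-sub1.example.com", 2748), ("ccm-sub2.example.com", 2748)]
```

Here 2748 is assumed as the CTI connection port; treat both it and the host names as assumptions to verify against your Cisco CallManager release.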
The CTI applications can use the same JTAPI user multiple times to log into separate CTI Managers.
This feature allows you to load-balance the CTI application connections across the cluster, and it adds
an extra layer of failover and redundancy at the CTI application level by allowing multiple connections
to separate CTI Managers while using the same JTAPI user to maintain control. However, keep in mind
that every time a JTAPI connection is established with a CTI Manager (JTAPI user logs into a CTI
Manager), the server CPU and memory usage will increase because the CTI application registers and
monitors events on all the devices associated with the JTAPI user. Therefore, make sure to allocate the
CTI application devices so that they are local to the CTI Manager where the application is connected.
(See Figure 3-6.)

Figure 3-6 CTI Application Device Registration

(The figure shows the ICM PG logging into the CTI Manager on one subscriber as JTAPI User 1, while the IP IVR logs into the CTI Manager on the other subscriber as JTAPI User 2. The agent phones, Agents 1 through 4, and the User 2 CTI ports are distributed across both subscribers.)

Figure 3-6 shows two external CTI applications using the CTI Manager: the Cisco CallManager PG and
the IP IVR (CRS). The Cisco CallManager PG logs into the CTI Manager using the JTAPI account
User 1, while the IP IVR (CRS) uses User 2. Each subscriber has two phones to load-balance the calls, and
each server has one JTAPI connection to load-balance the CTI applications.
To avoid overloading the available resources, it is best to load-balance devices (phones, gateways, ports,
CTI Route Points, CTI Ports, and so forth) and CTI applications evenly across all the nodes in the Cisco
CallManager cluster.


Cisco CallManager and CTI Manager design should be the second design stage, right after the network
design stage, and deployment should occur in this same order. The reason is that the IP telephony
infrastructure must be in place to dial and receive calls using its devices before you can deploy any
telephony applications. Before moving to the next design stage, make sure that a PSTN phone can call
an IP phone and that this same IP phone can dial out to a PSTN phone, with all the call survivability
capabilities considered for treating these calls. Also keep in mind that the Cisco CallManager cluster is
the heart of the IPCC system, and any server failure in a cluster will take down two services (CTI and
Cisco CallManager), thereby adding extra load to the remaining servers in the cluster.
Distribute Cisco CallManager devices (phones, CTI ports, and CTI route points) evenly across all Cisco
CallManagers. Also be sure that all servers can handle the load for the worst-case scenarios, where they
are the only remaining server in their cluster. For more information on how to load-balance the Cisco
CallManager clusters, refer to the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd

Configuring ICM for CTI Manager Redundancy


To enable Cisco CallManager support for CTI Manager failover in a duplex Cisco CallManager model,
perform the following steps:

Step 1 Create a Cisco CallManager redundancy group, and add subscribers to the group. (Publishers and TFTP
servers should not be used for call processing, device registration, or CTI Manager use.)
Step 2 Designate two CTI Managers to be used for each side of the duplex Peripheral Gateway (PG).
Step 3 Assign one of the CTI Managers to be the JTAPI service of the Cisco CallManager PG side A. (See
Figure 3-7.)
Step 4 Assign the remaining CTI Manager to be the JTAPI service of the Cisco CallManager PG side B. (See
Figure 3-7.)


Figure 3-7 Assigning CTI Managers for PG Sides A and B

(The figure shows two JTAPI configuration panels: PG side A, Cisco CallManager PIM 1, and PG side B, Cisco CallManager PIM 1.)

IP IVR (CRS) Design Considerations


The JTAPI subsystem in IP IVR (CRS) can establish connections with two CTI Managers. This feature
enables IPCC designs to add IP IVR redundancy at the CTI Manager level in addition to using the ICM
script to check for the availability of IP IVR before sending a call to it. Load balancing is highly
recommended to ensure that all IP IVRs are used in the most efficient way.
Figure 3-8 shows two IP IVR (CRS) servers configured for redundancy within one Cisco CallManager
cluster. The IP IVR group should be configured so that each server is connected to a different CTI
Manager service on different Cisco CallManager subscribers in the cluster for load balancing and high
availability. To use the redundancy feature of the JTAPI subsystem in the IP IVR server, add the IP
addresses or host names of two Cisco CallManagers from the cluster. Then, if one of the Cisco
CallManagers fails, the IP IVR associated with that particular Cisco CallManager fails over to the
second Cisco CallManager.


Figure 3-8 High Availability with Two IP IVR Servers and One Cisco CallManager Cluster

(The figure shows the same gateway, switch, and cluster infrastructure as Figure 3-3, with an IP IVR group of two servers, IP IVR 1 and IP IVR 2, each connected to a different subscriber in the Cisco CallManager cluster.)

You can increase IP IVR (CRS) availability by using one of the following optional methods:
• Call-forward-busy and call-forward-on-error features in Cisco CallManager. This method is more
complicated, and Cisco recommends it only for special cases where a few critical CTI route points
and CTI ports absolutely must have high availability down to the call processing level in Cisco
CallManager. For more information on this method, see IP IVR (CRS) High Availability Using
Cisco CallManager, page 3-13.
• ICM script features to check the availability of an IP IVR prior to sending a call to it. For more
information on this method, see IP IVR (CRS) High Availability Using ICM, page 3-13.

Note Do not confuse the IP IVR (CRS) subsystems with services. IP IVR uses only one service, the Cisco
Application Engine service. The IP IVR subsystems are connections to external applications such as the
CTI Manager and ICM.


IP IVR (CRS) High Availability Using Cisco CallManager


You can implement IP IVR (CRS) port high availability by using any of the following call forward
features in Cisco CallManager:
• Forward Busy — forwards calls to another port or route point when Cisco CallManager detects that
the port is busy. This feature can be used to forward calls to another CTI port when an IP IVR CTI
port is busy due to an IP IVR application problem, such as running out of available CTI ports.
• Forward No Answer — forwards calls to another port or route point when Cisco CallManager
detects that a port has not picked up a call within the timeout period set in Cisco CallManager. This
feature can be used to forward calls to another CTI port when an IP IVR CTI port is not answering
due to an IP IVR application problem.
• Forward on Failure — forwards calls to another port or route point when Cisco CallManager detects
a port failure caused by an application error. This feature can be used to forward calls to another
CTI port when an IP IVR CTI port has failed due to a Cisco CallManager application error.

Note When using the call forwarding features to implement high availability of IP IVR ports, avoid creating
a loop in the event that all the IP IVR servers are unavailable. Basically, do not establish a path back to
the first CTI port that initiated the call forwarding.

IP IVR (CRS) High Availability Using ICM


You can implement IP IVR (CRS) high availability through ICM scripts. You can prevent calls from
queuing to an inactive IP IVR by using the ICM scripts to check the IP IVR Peripheral Status before
sending the calls to it. For example, you can program an ICM script to check if the IP IVR is active by
using an IF node or by configuring a Translation Route to the Voice Response Unit (VRU) node (by
using the consider if field). This method can be modified to load-balance ports across multiple IP IVRs,
and it is easily scalable to virtually any number of IP IVRs.
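As a rough illustration of the routing logic only (ICM scripts are built graphically from IF and Translation Route to VRU nodes, not written in code), an availability-and-load-balancing check of this kind might behave like this; the IVR names and port counts are hypothetical:

```python
def pick_ivr(ivrs):
    """Return the online IP IVR with the most free CTI ports, or None if
    every IVR is offline (the script would then take a failure branch)."""
    online = [ivr for ivr in ivrs if ivr["online"]]
    if not online:
        return None
    return max(online, key=lambda ivr: ivr["free_ports"])

ivrs = [
    {"name": "IPIVR1", "online": False, "free_ports": 30},  # peripheral offline
    {"name": "IPIVR2", "online": True, "free_ports": 12},
    {"name": "IPIVR3", "online": True, "free_ports": 25},
]
print(pick_ivr(ivrs)["name"])  # IPIVR3 — most free ports among online IVRs
```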

Note All calls at the IP IVR are dropped if the IP IVR server, IVR-to-CallManager JTAPI link, or the IP IVR
PG fails.

Internet Service Node (ISN) Design Considerations


The Internet Service Node (ISN) may be deployed with IPCC as an alternative to IP IVR for call
treatment and queueing. ISN is different from IP IVR in that it does not rely on Cisco CallManager for
JTAPI call control. ISN uses H.323 for call control and is used "in front of" Cisco CallManager or other
PBX systems as part of a hybrid IPCC or migration solution. (See Figure 3-9.)


Figure 3-9 High Availability with Two ISN Servers

(The figure shows the same gateway and switching infrastructure as Figure 3-3, with two combination ISN Voice Browser and ISN Application Servers, ISN1 and ISN2, served by a duplex ISN PG pair, sides A and B.)

ISN uses the following system components:


• Cisco Voice Gateway
The Cisco Voice Gateway is typically used to terminate TDM PSTN trunks and transform incoming
calls into IP-based calls on an IP network. With ISN, these voice gateways also use the IOS built-in
Voice Extensible Markup Language (VXML) Voice Browser to provide caller treatment and call
queueing on the voice gateway itself, without having to move the call to a physical IP IVR. Under
ISN control, the gateway can also use the Media Resource Control Protocol (MRCP) interface to
add Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) functions on the gateway.
• ISN Voice Browser
The ISN Voice Browser is used in conjunction with the VXML Voice Browser on the voice gateway
to provide call control signalling when calls are switched between the ingress gateway and another
endpoint gateway or IPCC agent. The voice browsers register with the application servers as well
as gatekeepers so that, when new calls come into the system, the gatekeeper can associate the dialed
number with a particular set of voice browsers that can handle the call.
• ISN Application Server
The ISN Application Server controls the voice browsers (both VXML Voice Browsers on gateways
and ISN Voice Browsers) and interfaces to the ICM Peripheral Gateway to obtain instructions and
pass data to the ICM routing script. Instructions from the ICM Peripheral Gateway are translated by
the Application Server to VXML code and sent to the voice browsers for processing.


• ISN Media Server
The ISN caller treatment is provided either by using ASR/TTS functions via MRCP or with
predefined .wav files stored on media servers. The Media Servers act as web servers and serve up
the .wav files to the voice browsers as part of their VXML processing. Media Servers can be
clustered using the Cisco Content Services Switch (CSS) products, thus allowing multiple Media
Servers to be pooled behind a single URL for access by all the voice browsers in the network.
• H.323 Gatekeepers
Gatekeepers are used with ISN to register the voice browsers and associate them with specific dialed
numbers. When calls come into the network, the gateway will query the gatekeeper to find out where
to send the call based upon the dialed number. The gatekeeper also is aware of the state of the voice
browsers and will load-balance calls across them and avoid sending calls to out-of-service voice
browsers or ones that have no available sessions.
Availability of the ISN can be increased by:
• Adding additional redundant ISN systems under control of the ICM Peripheral gateways, thus
allowing the ICM to balance the calls across the platforms
• Adding additional ISN components to an ISN system (for example, a single ISN with multiple voice
browsers)
• Adding gatekeeper redundancy with HSRP
• Adding a Cisco Content Services Switch (CSS) to load-balance .wav file requests across multiple ISN Media Servers

Note Calls in ISN are not dropped if the Application Server or ISN PG fails, because they can be redirected
to another ISN Voice Browser or another ISN-controlled gateway. This fault-tolerant behavior is provided
by TCL scripts, shipped with the ISN images, that run in the voice gateway.

For more information on these options, review the ISN product documentation at:
http://www.cisco.com/univercd/cc/td/doc/product/icm/isn/isn21/index.htm

Multi-Channel Design Considerations (Cisco Email Manager Option and Cisco Collaboration Server Option)


The IPCC Enterprise solution can be extended to support multi-channel customer contacts, with email
and web contacts being routed by the IPCC to agents in a blended or "universal queue" mode. The
following optional components are integrated into the IPCC architecture (see Figure 3-10):
• Media Routing Peripheral Gateway
To route multi-channel contacts, the Cisco e-Mail Manager and Cisco Collaboration Server Media
Blender communicate with the Media Routing Peripheral Gateway. The Media Routing Peripheral
Gateway, like any peripheral gateway, can be deployed in a redundant or duplex manner with two
servers interconnected for high availability. Typically, the Media Routing Peripheral Gateway is
co-located at the Central Controller and has an IP-socket connection to the multi-channel systems.
• Admin Workstation ConAPI Interface
The integration of the Cisco multi-channel options allows the ICM and the optional systems to share
configuration information about agents and their related skill groups. The Configuration
Application Programming Interface (ConAPI) runs on an Admin Workstation and can be configured
with a backup service running on another Admin Workstation.


• Agent Reporting and Management (ARM) and Task Event Services (TES) Connections
ARM and TES services provide call (ARM) and non-voice (TES) state and event notification from
the IPCC CTI Server to the multi-channel systems. These connections provide agent information to
the email and web environments and accept and process task requests from them. The
connection is a TCP/IP socket to the agent's associated CTI Server, which can be
deployed as a redundant or duplex pair on the Agent Peripheral Gateway.

Figure 3-10 Multi-Channel System

(The figure shows the Router and Logger connected to the MR-PG and the IPCC Agent-PG. The Cisco Email Manager (CEM) and Cisco Collaboration Server (CCS) connect to MR-PIMs on the MR-PG for media routing, to the CTI Servers on the Agent-PG for TES and ARM events, and to the Admin Workstation over ConAPI. IPCC/EA agents use a desktop workstation and an IP phone registered with Cisco CallManager.)

Recommendations for high availability:


• Deploy the Media Routing Peripheral Gateways in duplex pairs.
• Deploy ConAPI as a redundant pair of Admin Workstations that are not used for configuration and
scripting so that they will be less likely to be shut off or rebooted. Also consider using the HDS
servers at the central sites to host this function.
• Deploy the IPCC Agent Peripheral Gateways and CTI Servers in duplex pairs.


Cisco Email Manager Option


The Cisco Email Manager is integrated with the IPCC Enterprise Edition to provide full email support
in the multi-channel contact center with IPCC. It can be deployed using a single server (see Figure 3-11)
for small deployments or with multiple servers to meet larger system design requirements. The major
components of Cisco Email Manager are:
• Cisco Email Manager Server — The core routing and control server; it is not redundant.
• Cisco Email Manager Database Server — The server that maintains the online database of all emails
and configuration and routing rules in the system. It can be co-resident on the Cisco Email Manager
server for smaller deployments or on a dedicated server for larger systems.
• Cisco Email Manager UI Server — This server allows the agent user interface (UI) components to
be off-loaded from the main Cisco Email Manager server to scale for larger deployments or to
support multiple remote agent sites. Each remote site could have a local UI Server to reduce the data
traffic from the agent browser clients to the Cisco Email Manager server (see Figure 3-12).

Figure 3-11 Single Cisco Email Manager Server

(The figure shows a single CEM Server machine hosting the SpellServer, UIServer, RServer, TServer, and Inbasket processes, serving agent and administrative browsers, with a separate Database Server machine hosting the CEM and CCL databases.)


Figure 3-12 Multiple UI Servers

(The figure shows multiple UIServer machines, each serving its own group of agent browsers, all connecting back to the CEM Server machine and its Database Server machine.)

Cisco Collaboration Server Option


The Cisco Collaboration Server is integrated with the IPCC Enterprise Edition to provide web chat and
co-browsing support in the multi-channel contact center with IPCC. The major components of the Cisco
Collaboration Server are (see Figure 3-13):
• Cisco Collaboration Server — Collaboration servers are deployed outside the corporate firewall in
a demilitarized zone (DMZ) with the corporate web servers they support. You can deploy multiple
collaboration servers for larger systems.
• Cisco Collaboration Server Database Server — This is the server that maintains the online database
of all chat and browsing sessions as well as configuration and routing rules in the system. It can be
co-resident on the Cisco Collaboration Server; however, because the Cisco Collaboration Server is
outside the firewall, most enterprises deploy it on a separate server inside the firewall to protect the
historical data in the database. Multiple Cisco Collaboration Servers can point to the same database
server to reduce the total number of servers required for the solution.
• Cisco Collaboration Server Media Blender — This server polls the collaboration servers to check
for new requests, and it manages the Media Routing and CTI/Task interfaces to connect the agent
and caller. Each IPCC Agent Peripheral Gateway will have its own Media Blender, and each Media
Blender will have a Media Routing peripheral interface manager (PIM) component on the Media
Routing Peripheral Gateway.
• Cisco Collaboration Dynamic Content Adaptor (DCA) — This server is deployed in the DMZ with
the collaboration server, and it allows the system to share content that is generated dynamically by
programs on the web site (as opposed to static HTTP pages).


Figure 3-13 Cisco Collaboration Server

(The figure shows the DCA and Collaboration Server (CCS) in the DMZ between the Internet and the corporate firewall. Callers request a call back, web collaboration with chat (SSC, MSC), or web collaboration with a phone call (BC). Inside the firewall, the Media Blender (CMB) connects to the MR PIM on the Media Routing (MR) PG and, via ARM, to the CTI Server on the Agent PG, while the ICM Distributor AW provides the CMS_jserver (ConAPI) connection. Arrows in the figure indicate the direction in which each connection is initiated.)

Cisco IPCC Outbound Option Design Considerations


The Outbound Option provides the ability for IPCC Enterprise to place calls on behalf of agents to
customers based upon a predefined campaign. The major components of the Outbound Option are (see
Figure 3-14):
• Outbound Option Campaign Manager — A software module that manages the dialing lists and rules
associated with the calls to be placed. This software is loaded on the Logger platform and is not
redundant; it can be loaded and active on only one server of the duplex pair of Loggers in the IPCC
system.
• Outbound Option Dialer — A software module that performs the dialing tasks on behalf of the
Campaign Manager. In IPCC, the Dialer emulates a set of IP phones registered to Cisco CallManager
to make the outbound calls; it detects when the customer answers and manages the interaction tasks
with the CTI OS server to transfer the call to an agent. It also interfaces with the Media Routing
Peripheral Gateway, and
each Dialer has its own peripheral interface manager (PIM) on the Media Routing Peripheral
Gateway.

Figure 3-14 IPCC Outbound Option

[Diagram: the Campaign Manager and its SQL Server 7/2000 database run on the Logger and connect via EMT to the Dialer; the Dialer registers with Cisco CallManager and reaches customers through a voice gateway (IP/T1/Analog/E1) to the CO. The Dialer also connects via TCP/IP to its IPCC PIM on the MR PG and to the CTI/CTI OS interface on the IPCC PG; an Administrative Workstation manages the campaign. Components shown with dashed lines are used only by IPCC Agent Desktop.]

The system can support multiple Dialers across the enterprise, all of which are under control of the
central Campaign Manager software. Although a pair of Dialers does not function as a redundant or
duplex pair the way a Peripheral Gateway does, the Campaign Manager can handle the failure of one
Dialer automatically, and calls will continue to be handled by the surviving Dialer. Any calls that were
already connected to agents remain connected and experience no impact from the failure.
For smaller implementations, the Dialer can be co-resident on the IPCC Peripheral Gateway. For
larger systems, deploy the Dialer on its own server, or use multiple Dialers under control of the
central Campaign Manager.
Recommendations for high availability:
• Deploy the Media Routing Peripheral Gateways in duplex pairs.
• Deploy Dialers on their own servers as standalone devices to eliminate a single point of failure. (If
they were co-resident on a PG, the dialer would go down if the PG server failed.)
• Deploy multiple Dialers and make use of them in the Campaign Manager to allow for automatic fault
recovery to a second Dialer in the event of a failure.
• Include Dialer "phones" (virtual phones in Cisco CallManager) in redundancy groups in Cisco
CallManager to allow them to fail over to a different subscriber, as would any other phone or device
in the Cisco CallManager cluster.

Peripheral Gateway Design Considerations


The ICM CallManager Peripheral Gateway (PG) uses the Cisco CallManager CTI Manager process to
communicate with the Cisco CallManager cluster, with a single Peripheral Interface Manager (PIM)
controlling agent phones anywhere on the cluster. The Peripheral Gateway PIM process registers with
CTI Manager on one of the Cisco CallManager servers in the cluster, and the CTI Manager accepts all
JTAPI requests from the PG for the cluster. If the phone, route point, or other device being controlled
by the PG is not registered to that specific Cisco CallManager server in the cluster, the CTI Manager
forwards that request via Cisco CallManager SDL links to the other Cisco CallManager servers in the
cluster. There is no need for a PG to connect to multiple Cisco CallManager servers in a cluster.
Duplex Cisco CallManager PG implementations are highly recommended because the PG will have only
one connection to the Cisco CallManager cluster using a single CTI Manager. If that CTI Manager were
to fail, the PG would no longer be able to communicate with the Cisco CallManager cluster. Adding a
redundant or duplex PG allows the ICM to have a second pathway or connection to the Cisco
CallManager cluster using a second CTI Manager process on a different Cisco CallManager server in
the cluster.
The minimum requirement for ICM high-availability support for CTI Manager and IP IVR (CRS) is a
duplex (redundant) Cisco CallManager PG environment with one Cisco CallManager cluster containing
at least two servers. Therefore, the minimum configuration for a Cisco CallManager cluster in this case
is one publisher and one subscriber. (See Figure 3-15.)

Figure 3-15 ICM High Availability with One Cisco CallManager Cluster

[Diagram: T1 lines from the public network terminate on voice gateways 1 and 2, which connect through MDF and IDF switches to the Cisco CallManager cluster (publisher, Sub 1, Sub 2), an IP IVR group (IP IVR 1 and 2), gatekeepers, and a firewall to the corporate LAN. CallManager PGs A and B connect the cluster to the redundant ICM central controllers (ICM A and ICM B). Legend: call control/CTI data and IP messaging, TDM voice lines, Ethernet lines.]

Redundant ICM servers can be located at the same physical site or geographically distributed. In both
cases, the ICM Call Router and Logger/Database Server processes are interconnected through a private,
dedicated LAN. If the servers are located at the same site, you can provide the private LAN by inserting
a second NIC card in each server (sides A and B) and connecting them with a crossover cable. If the
servers are geographically distributed, you can provide the private LAN by inserting a second NIC card
in each server (sides A and B) and connecting them with a dedicated T1 line that meets the specific
network requirements for this connection as documented in the Cisco ICM Software Installation Guide,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm50doc/coreicm5/plngupin/instlgd.pdf
Within the ICM PG, two software processes are run to manage the connectivity to the Cisco
CallManager cluster: the JTAPI Gateway and the CallManager PIM. The JTAPI Gateway is started by
the PG automatically and runs as a node-managed process, which means that the PG will monitor this
process and automatically restart it if it should fail for any reason. The JTAPI Gateway handles the
low-level JTAPI socket connection protocol and messaging between the PIM and the Cisco CallManager
CTI Manager, and it is specific to the version of Cisco CallManager.
The ICM PG PIM is also a node-managed process and is monitored for unexpected failures and
automatically restarted. This process manages the higher-level interface between the ICM and the Cisco
CallManager cluster, requesting specific objects to monitor and handling route requests from the Cisco
CallManager cluster.
In a duplex ICM PG environment, the JTAPI services on both Cisco CallManager PG sides log into
a CTI Manager upon initialization. Cisco CallManager PG side A logs into the primary CTI Manager,
while PG side B logs into the secondary CTI Manager. However, only the active side of the Cisco
CallManager PG registers monitors for phones and CTI route points. The duplex ICM PG pair works in
hot standby mode, with only the active PG side PIM communicating with the Cisco CallManager cluster.
The standby side logs into the secondary CTI Manager only to initialize the interface and prime it for a
failover. Registration and initialization of the Cisco CallManager devices take a significant
amount of time, and having the CTI Manager primed significantly decreases the failover time.
In duplex PG operation, the PG side that is able to connect to the ICM Call Router Server and request
configuration information first will be the side that goes active. Activation is not deterministic based
upon the Side A or Side B designation of the PG device; it depends only upon the ability of the PG to
connect to the Call Router, which ensures that the PG side with the better connection to the Call Router
goes active.
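This first-to-connect activation can be illustrated with a small sketch. It is a simplification, not the actual ICM implementation; the class and function names are purely illustrative:

```python
import threading
import time

class CallRouter:
    """Grants active status to whichever PG side connects and
    requests configuration first (first-come, first-served)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.active_side = None

    def request_configuration(self, side):
        with self._lock:
            if self.active_side is None:
                self.active_side = side   # first side to connect goes active
        return self.active_side == side

def pg_side(router, side, connect_delay, results):
    time.sleep(connect_delay)             # models quality of the network path
    results[side] = "active" if router.request_configuration(side) else "standby"

router = CallRouter()
results = {}
# In this run, side B has the faster path to the Call Router,
# so it wins the race regardless of its A/B designation.
a = threading.Thread(target=pg_side, args=(router, "A", 0.05, results))
b = threading.Thread(target=pg_side, args=(router, "B", 0.01, results))
a.start(); b.start(); a.join(); b.join()
print(results["A"], results["B"])  # standby active
```

The point of the sketch is that the winner is determined by connection latency, not by the A/B label.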

Cisco CallManager Failure Scenarios


A duplex ICM model contains no single points of failure. However, there are scenarios where a
combination of multiple failures can prevent IPCC from accepting new incoming calls. Also, if a
component of the IPCC solution does not itself support redundancy and failover, existing calls on that
component will be dropped. The following ICM failure scenarios have the most impact on high
availability, and the Cisco CallManager Peripheral Interface Managers (PIMs) cannot activate if either
of the following failure scenarios occurs (see Figure 3-16):
• PIM side A and the secondary CTI Manager that services the PIM on side B both fail.
• PIM side B and the primary CTI Manager that services the PIM on side A both fail.
In either of these cases, the ICM will have no visibility to the Cisco CallManager cluster.

Figure 3-16 Cisco CallManager PGs Cannot Cross-Connect to Backup CTI Managers

[Diagram: ICM PG A connects only to Subscriber 1 (CTI Manager and Cisco CallManager), and ICM PG B connects only to Subscriber 2 (CTI Manager and Cisco CallManager); neither PG can cross-connect to the other side's CTI Manager. The publisher also runs CTI Manager and Cisco CallManager.]

ICM Failover Scenarios


This section describes how redundancy works in the following failure scenarios:
• Scenario 1 - Cisco CallManager and CTI Manager Fail, page 3-23
• Scenario 2 - Cisco CallManager PG Side A Fails, page 3-24
• Scenario 3 - Only Cisco CallManager Fails, page 3-25
• Scenario 4 - Only CTI Manager Fails, page 3-26

Scenario 1 - Cisco CallManager and CTI Manager Fail


Figure 3-17 shows a complete system failure or loss of network connectivity on Cisco CallManager A.
The CTI Manager and Cisco CallManager services are both active on the same server, and Cisco
CallManager A is the primary CTI Manager in this case. The following conditions apply to this scenario:
• All phones and gateways are registered with Cisco CallManager A.
• All phones and gateways are configured to re-home to Cisco CallManager B (that is, B is the
backup).
• Cisco CallManagers A and B are each running a separate instance of CTI Manager.
• When all of the software services on CallManager Subscriber A fail (call processing, CTI Manager,
and so on), all phones and gateways re-home to Cisco CallManager B.
• PG side A detects a failure and induces a failover to PG side B.
• PG side B becomes active and registers all dialed numbers and phones; call processing continues.
• After an agent disconnects from all calls, the IP phone re-homes to the backup Cisco
CallManager B. The agent will have to log in again manually using the agent desktop.
• When Cisco CallManager A recovers, all phones and gateways re-home to it.

• PG side B remains active, using the CTI Manager on Cisco CallManager B.


• During this failure, any calls in progress at an IPCC agent will remain active. When the call is
completed, the phone will re-home to the backup Cisco CallManager automatically.
• After the failure is recovered, the PG will not fail back to the A side of the duplex pair. All CTI
messaging will be handled using the CTI Manager on Cisco CallManager B, which will
communicate to Cisco CallManager A to obtain phone state and call information.

Figure 3-17 Scenario 1 - Cisco CallManager and CTI Manager Fail

[Diagram: CallManager PG A and PG B connect via JTAPI to CallManager A and CallManager B, respectively; an IP phone registers with the cluster via SCCP. Legend: ICM synchronization messages, CallManager intra-cluster messages, JTAPI messages, H.323 or MGCP messages, SCCP messages.]

Scenario 2 - Cisco CallManager PG Side A Fails


Figure 3-18 shows a failure on PG side A and a failover to PG side B. All CTI Manager and Cisco
CallManager services continue running normally. The following conditions apply to this scenario:
• All phones and gateways are registered with Cisco CallManager A.
• All phones and gateways are configured to re-home to Cisco CallManager B (that is, B is the
backup).
• Cisco CallManagers A and B are each running a separate instance of CTI Manager.
• When PG side A fails, PG side B becomes active.
• PG side B registers all dialed numbers and phones; call processing continues.
• After an agent disconnects from all calls, that agent's desktop functionality is restored to the same
state prior to failover.
• When PG side A recovers, PG side B remains active and uses the CTI Manager on Cisco
CallManager B.

Figure 3-18 Scenario 2 - Cisco CallManager PG Side A Fails

[Diagram: CallManager PG A and PG B connect via JTAPI to CallManager A and CallManager B, respectively; an IP phone registers with the cluster via SCCP. Legend: ICM synchronization messages, CallManager intra-cluster messages, JTAPI messages, H.323 or MGCP messages, SCCP messages.]

Scenario 3 - Only Cisco CallManager Fails


Figure 3-19 shows a failure on Cisco CallManager A. The CTI Manager services are running on Cisco
CallManagers C and D, and Cisco CallManager C is acting as the primary CTI Manager. However, all
phones and gateways are registered with Cisco CallManager A. During this failure, the Cisco CallManager
PG is not affected because the PG communicates with the CTI Manager service, not the Cisco CallManager
service. All phones re-home individually to the standby Cisco CallManager B if they are not in a call. If
a phone is in a call, it re-homes to Cisco CallManager B after it disconnects from the call.
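The per-phone re-homing behavior described above can be sketched as a simple decision (an illustrative model only; the function and state names are not from the product):

```python
def rehome_target(phone_in_call, primary_up, backup):
    """A phone stays on its primary subscriber while that subscriber is up.
    When the primary fails, an idle phone re-homes to its backup subscriber
    immediately; a phone in a call defers re-homing until the call ends."""
    if primary_up:
        return "primary"
    return "defer-until-call-ends" if phone_in_call else backup

# Primary (CallManager A) has failed:
print(rehome_target(False, False, "CallManagerB"))  # CallManagerB
print(rehome_target(True, False, "CallManagerB"))   # defer-until-call-ends
```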
The following conditions apply to this scenario:
• All phones and gateways are registered with Cisco CallManager A.
• All phones and gateways are configured to re-home to Cisco CallManager B (that is, B is the
backup).
• Cisco CallManagers C and D are each running a separate instance of CTI Manager.
• When Cisco CallManager A fails, phones and gateways re-home to Cisco CallManager B.
• PG side A remains connected and active, with a CTI Manager connection on Cisco CallManager
subscriber C. It does not fail over because the JTAPI/CTI Manager connection has not failed.
However, it will see the phones and devices being unregistered from Cisco CallManager
subscriber A (where they were registered) and will then be notified of these devices being
re-registered on Cisco CallManager subscriber B automatically. During the time that the agent
phones are not registered, the PG will disable the agent desktops to prevent the agents from
attempting to use the system while their phones are not actively registered with a Cisco CallManager
subscriber.

• Call processing continues for any devices not registered to Cisco CallManager subscriber A. Call
processing also continues for those devices on subscriber A when they are re-registered with their
backup subscriber.
• Agents on an active call will stay in their connected state until they complete the call; however, the
agent desktop will be disabled to prevent any conference, transfer, or other telephony events during
the failover. After the agent disconnects the active call, that agent's phone will re-register with the
backup subscriber, and the agent will have to log in again manually using the agent desktop.
• When Cisco CallManager A recovers, phones and gateways re-home to it. This re-homing can be
set up on Cisco CallManager to gracefully return groups of phones and devices over time or to
require manual intervention during a maintenance window to minimize the impact to the call center.
• Call processing continues normally after the phones and devices have returned to their original
subscriber.

Figure 3-19 Scenario 3 - Only Cisco CallManager Fails

[Diagram: CallManager PG A and PG B connect via JTAPI to CallManager C and CallManager D (the CTI Manager subscribers), while the phones and gateways register with CallManager A and CallManager B. Legend: ICM synchronization messages, CallManager intra-cluster messages, JTAPI messages, H.323 or MGCP messages, SCCP messages.]

Scenario 4 - Only CTI Manager Fails


Figure 3-20 shows a CTI Manager service failure on Cisco CallManager C. The CTI Manager services
are running on Cisco CallManagers C and D, and Cisco CallManager C is the primary CTI Manager.
However, all phones and gateways are registered with Cisco CallManager A. During this failure, both
the CTI Manager and the PG fail over to their secondary sides. Because the JTAPI service on PG side B
is already logged into the secondary (now primary) CTI Manager, the device registration and
initialization time is significantly shorter than if the JTAPI service on PG side B had to log into the CTI
Manager.
The following conditions apply to this scenario:
• All phones and gateways are registered with Cisco CallManager A.
• All phones and gateways are configured to re-home to Cisco CallManager B (that is, B is the
backup).
• Cisco CallManagers C and D are each running a separate instance of CTI Manager.
• When Cisco CallManager C fails, PG side A detects a failure of the CTI Manager on that server and
induces a failover to PG side B.
• PG side B registers all dialed numbers and phones with Cisco CallManager D, and call processing
continues.
• After an agent disconnects from all calls, that agent's desktop functionality is restored to the same
state prior to failover.
• When Cisco CallManager C recovers, PG side B continues to be active and uses the CTI Manager
on Cisco CallManager D.

Figure 3-20 Only CTI Manager Fails

[Diagram: CallManager PG A and PG B connect via JTAPI to CallManager C and CallManager D (the CTI Manager subscribers), while the phones and gateways register with CallManager A and CallManager B. Legend: ICM synchronization messages, CallManager intra-cluster messages, JTAPI messages, H.323 or MGCP messages, SCCP messages.]

IPCC Scenarios for Clustering over the WAN


IPCC Enterprise can also be overlaid with the Cisco CallManager design model for clustering over the
WAN, which allows for high availability of Cisco CallManager resources across multiple locations and
data center locations. There are a number of specific design requirements for Cisco CallManager to
support this deployment model, and IPCC adds its own specific requirements and new failover
considerations to the model.
Specific testing has been performed to identify the design requirements and failover scenarios, but no
code changes were made to the core IPCC solution components to support this model. The success of
this design model relies on specific network configuration and setup, and the network must be monitored
and maintained. The component failure scenarios noted previously (see ICM Failover Scenarios, page
3-23) are still valid in this model, and the additional failure scenarios for this model include:
• Scenario 1 - ICM Central Controller or Peripheral Gateway Private Network Fails, page 3-28
• Scenario 2 - Visible Network Fails, page 3-29
• Scenario 3 - Visible and Private Networks Both Fail (Dual Failure), page 3-30
• Scenario 4 - Remote Agent Location WAN Fails, page 3-31

Scenario 1 - ICM Central Controller or Peripheral Gateway Private Network Fails


In clustering over the WAN with IPCC, there must be a dedicated, isolated private network connection
between the geographically distributed Central Controller (Call Router/Logger) and the split Peripheral
Gateway pair to maintain state and synchronization between the sides of the system, and UDP heartbeats
are generated to verify the health of this link. The ICM uses the heartbeats to detect a failure on the
private link. Missing five consecutive heartbeats will signal the ICM that the link or the remote partner
system might have failed.
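The heartbeat mechanism can be sketched as follows. The 100 ms interval and five-miss threshold come from this design description; the code itself is a simplified illustration, not the ICM implementation:

```python
HEARTBEAT_INTERVAL_MS = 100   # UDP heartbeats on the private link
MISS_THRESHOLD = 5            # consecutive misses before declaring failure

class PrivateLinkMonitor:
    def __init__(self):
        self.missed = 0
        self.link_failed = False

    def on_heartbeat(self):
        self.missed = 0               # any received heartbeat resets the counter

    def on_interval_elapsed_without_heartbeat(self):
        self.missed += 1
        if self.missed >= MISS_THRESHOLD:
            self.link_failed = True   # trigger "test other side" handling

monitor = PrivateLinkMonitor()
for _ in range(4):                    # four consecutive misses: still tolerated
    monitor.on_interval_elapsed_without_heartbeat()
assert not monitor.link_failed
monitor.on_interval_elapsed_without_heartbeat()   # fifth consecutive miss
assert monitor.link_failed
# Worst-case detection time after the last heartbeat:
print(MISS_THRESHOLD * HEARTBEAT_INTERVAL_MS)  # 500
```

Five missed 100 ms heartbeats give the roughly 500 ms detection time cited later in this section.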
If the private network fails between the ICM Central Controllers, the following conditions apply:
• The Call Routers will detect the failure by missing five consecutive UDP heartbeats. Both Call
Routers send a "test other side" (TOS) message to the Peripheral Gateways, starting with PG1A,
then PG1B, then PG2A, and so forth. The TOS message requests the Peripheral Gateway to check
if it can "see" the Call Router on the other side to determine if the failure is a network failure or a
failure of the redundant pair.
• The Call Routers verify which side sees more active connections of the Peripheral Gateways. That
side will continue to function as the active Call Router in simplex mode, and the redundant Call
Router will be disabled.
• All the Peripheral Gateways will realign their active data feed to the active Call Router over the
visible network, with no failover or loss of service.
• There is no impact to the agents, calls in progress, or calls in queue. The system can continue to
function normally; however, the Call Routers will be in simplex mode until the private network link
is restored.
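The arbitration described above, where the side seeing more active Peripheral Gateway connections stays active, can be sketched roughly like this. The real Call Router protocol is more involved, and the tie-break rule shown is an assumption of the sketch:

```python
def choose_active_side(pg_connections):
    """Given, per Call Router side, the set of Peripheral Gateways that
    side can still reach, keep the side that sees more active PG
    connections running in simplex mode and disable the other side."""
    side_a, side_b = pg_connections["A"], pg_connections["B"]
    # Ties are broken in favor of side A purely for determinism here;
    # this tie-break is illustrative, not documented ICM behavior.
    return "A" if len(side_a) >= len(side_b) else "B"

# Example: after the private network fails, side A can still reach three
# PGs but side B can reach only one, so side A continues in simplex mode.
connections = {"A": {"PG1A", "PG2A", "PG3A"}, "B": {"PG2B"}}
print(choose_active_side(connections))  # A
```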
If the private network fails between the Cisco CallManager Peripheral Gateways, the following
conditions apply:
• The Peripheral Gateway sides will detect the failure by missing five consecutive UDP heartbeats.
The Peripheral Gateways verify which side of the duplex pair has the active connection to the Cisco
CallManager cluster.
• The Peripheral Gateway side of the duplex pair that was actively connected to the Cisco
CallManager cluster will continue to function as the active side of the pair, in simplex mode. The
other side will be inactive until the private network connection is restored.

• There is no impact to the agents, calls in progress, or calls in queue. The system can continue to
function normally; however, the Peripheral Gateways will be in simplex mode until the private network link
is restored.
If the two private network connections were combined into one link, the failures would follow the same
path; however, the system would be running in simplex on both the Call Router and the Peripheral
Gateway. If a second failure were to occur at that point, the system could lose some or all of the call
routing and ACD functionality.

Scenario 2 - Visible Network Fails


The visible network in this design model is the network path between the data center locations where the
main system components (Cisco CallManager subscribers, Peripheral Gateways, IP-IVR/ISN
components, and so forth) are located. This network is used to carry all the voice traffic (RTP stream
and call control signalling), ICM CTI (call control signalling) traffic, as well as all typical data network
traffic between the sites. In order to meet the requirements of Cisco CallManager clustering over the
WAN, this link must be highly available with very low latency and sufficient bandwidth. This link is
critical to the IPCC design because it is part of the fault-tolerant design of the system, and it must be
highly resilient as well.
If the visible network fails between the data center locations, the following conditions apply:
• The Cisco CallManager subscribers will detect the failure and continue to function locally, with no
impact to local call processing and call control. However, any calls that were set up over this WAN
link will fail with the link.
• The ICM Call Routers will detect the failure because the normal flow of TCP keep-alives from the
remote Peripheral Gateways will stop. Likewise, the Peripheral Gateways will detect this failure by
the loss of TCP keep-alives from the remote Call Routers. The Peripheral Gateways will
automatically realign their data communications to the local Call Router, and the local Call Router
will then use the private network to pass data to the Call Router on the other side to continue call
processing. This does not cause a failover of the Peripheral Gateway or the Call Router.
• Agents might be affected by this failure under the following circumstances:
– If the agent desktop (Cisco Agent Desktop or CTI OS) is registered to the Peripheral Gateway
on side A of the system but the physical phone is registered to side B of the Cisco CallManager
cluster
Under normal circumstances, the phone events would be passed from side B to side A over the
visible network via the CTI Manager Service to present these events to the side A Peripheral
Gateway. The visible network failure will not force the IP Phone to re-home to side A of the
cluster, and the phone will remain operational on the isolated side B. The Peripheral Gateway
will no longer be able to see this phone, and the agent will be logged out of IPCC automatically
because the system can no longer direct calls to the agent’s phone.
– If the agent desktop (Cisco Agent Desktop or CTI OS) and IP Phone are both registered to
side A of the Peripheral Gateway and Cisco CallManager, but the phone is reset and
re-registers to a side-B Cisco CallManager subscriber
If the IP Phone re-homes or is manually reset and forced to register to a side-B Cisco
CallManager subscriber, the Cisco CallManager subscriber on side A that is providing the CTI
Manager Service to the local Peripheral Gateway will unregister the phone and remove it from
service. Because the visible network is down, the remote Cisco CallManager subscriber at
side B cannot send the phone registration event to the remote Peripheral Gateway. IPCC will
log out this agent because it can no longer control the phone for the agent.

– If the agent desktop (CTI OS or Cisco Agent Desktop) is registered to the CTI OS Server at the
side-B site but the active Peripheral Gateway side is at the side-A site
Under normal operation, the CTI OS desktop (and Cisco Agent Desktop Server) will
load-balance their connections to the CTI OS Server pair. At any given time, half the agent
connections would be on a CTI OS server that has to cross the visible network to connect to the
active Peripheral Gateway CTI Server (CG). When the visible network fails, the CTI OS Server
detects the loss of connection with the remote Peripheral Gateway CTI Server (CG) and
disconnects the active agent desktop clients to force them to re-home to the redundant CTI OS
Server at the remote site. The CTI OS agent desktop is aware of the redundant CTI OS server
and will automatically use this server. During this transition, the agent desktop will be disabled
and will return to operational state as soon as it is connected to the redundant CTI OS server.
(The agent may be logged out or put into not-ready state, depending upon the /LOAD parameter
defined for the Cisco CallManager Peripheral Gateway in ICM Config Manager.)

Scenario 3 - Visible and Private Networks Both Fail (Dual Failure)


Individually, the private and visible networks can fail with limited impact to the IPCC agents and calls.
However, if both of these networks fail at the same time, the system will be reduced to very limited
functionality. This failure should be considered catastrophic and should be avoided by careful WAN
design, with backup and resiliency built into the design.
If both the visible and private networks fail at the same time, the following conditions apply:
• The Cisco CallManager subscribers will detect the failure and continue to function locally, with no
impact to local call processing and call control. However, any calls that were set up over this WAN
link will fail with the link.
• The Call Routers and Peripheral Gateways will detect the private network failure after missing five
consecutive UDP heartbeats. These heartbeats are generated every 100 ms, and the failure will be
detected within about 500 ms on this link.
• The Call Routers will attempt to contact their Peripheral Gateways with the "test other side"
message to determine if the failure was a network issue or if the remote Call Router had failed and
was no longer able to send heartbeats. The Call Routers will determine the side with the most active
Peripheral Gateway connections, and that side will stay active in simplex mode while the remote
Call Router will be in standby mode. The Call Routers will send a message to the Peripheral
Gateways to realign their data feeds to the active call router only.
• The Peripheral Gateways will determine which side has the active Cisco CallManager connection.
However, each Peripheral Gateway will also consider the state of the Call Router and will not
remain active if it is not able to connect to an active Call Router.
• The surviving Call Router and Peripheral Gateways will detect the failure of the visible network by
the loss of TCP keep-alives on the visible network. These keep-alives are sent every 400 ms, so it
can take up to two seconds before this failure is detected.
• The Call Router will be able to see only the local Peripheral Gateways, which are those used to
control local IP-IVR or ISN ports and the local half of the CallManager Peripheral Gateway pair.
The remote IP-IVR or ISN Peripheral Gateways will be off-line, taking them out of service in the
ICM Call Routing Scripts (using the "peripheral on-line" status checks) and forcing any of the calls
in progress on these devices to be disconnected. (ISN can redirect the calls upon failure.)
• Any new calls that come into the disabled side will not be routed by the IPCC, but they can be
redirected or handled using standard Cisco CallManager "redirect on failure" for their CTI Route
Points.

• Agents will be impacted as noted above if their IP Phones are registered to the side of the Cisco
CallManager cluster opposite the location of their active Peripheral Gateway and CTI OS Server
connection. Only agents that were active on the surviving side of the Peripheral Gateway with
phones registered locally to that site will not be impacted.
At this point, the Call Router and Cisco CallManager Peripheral Gateway will run in simplex mode, and
the system will accept new calls for IPCC call treatment only on the surviving side. The IP-IVR/ISN
functionality will likewise be limited to the surviving side.
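The two detection times quoted in this scenario follow directly from the interval and miss-count figures. The five-miss count for the TCP keep-alives is inferred from the two-second figure in the text, not stated explicitly:

```python
def worst_case_detection_ms(interval_ms, misses):
    """Worst-case time to declare a link dead after the last received
    message: the configured number of consecutive missed intervals."""
    return interval_ms * misses

# Private network: UDP heartbeats every 100 ms, five misses => ~500 ms
print(worst_case_detection_ms(100, 5))   # 500
# Visible network: TCP keep-alives every 400 ms; the ~2 s figure above
# corresponds to five missed keep-alives (an inferred count)
print(worst_case_detection_ms(400, 5))   # 2000
```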

Scenario 4 - Remote Agent Location WAN Fails


The IPCC design model for clustering over the WAN assumes the IPCC agents are remotely located at
multiple sites connected by a WAN. Each agent location requires WAN connectivity to both of the data
center locations where the Cisco CallManager and ICM components are located. These connections
should be isolated and provide for redundancy as well as making use of basic SRST functionality in the
event of a complete network failure so that the remote site would still have basic dial tone service to
make emergency (911) calls.
If side A of the WAN at the remote agent location fails, the following conditions apply:
• Any IP phones that are homed to the side-A Cisco CallManager subscribers will automatically
re-home to the side-B subscribers (provided the redundancy group is configured).
• Agent desktops that are connected to the CTI OS or Cisco Agent Desktop server at that site will
automatically realign to the redundant CTI OS server at the remote site. (Agent desktops will be
disabled during the realignment process.)
If both sides of the WAN at the remote agent location fail, the following conditions apply:
• The local voice gateway will detect the failure of the communications path to the Cisco CallManager
cluster and will go into SRST mode to provide local dial-tone functionality.
• The agent desktop will detect the loss of connectivity to the CTI OS Server (or Cisco Agent Desktop
Server) and automatically log the agent out of the system. While the IP phones are in SRST mode,
they will not be able to function as IPCC agents.

Understanding Failure Recovery


This section analyzes the failover recovery of each individual part (products and subcomponents inside
each product) of the IPCC solution.

Cisco CallManager Service


In larger deployments, it is possible that the Cisco CallManager server where agent phones are registered will
not be running the CTI Manager service that communicates with the Cisco CallManager PG. When an active
Cisco CallManager service fails, all the devices registered to it are reported out of service by the CTI
Manager service. Cisco CallManager reporting shows the call as terminated when the Cisco
CallManager failure occurred because, from a Cisco CallManager reporting perspective, any calls in
progress are terminated and the agents are logged out so that future calls are not routed to them. IP
phones of agents not on calls at the time of failure will quickly register with the backup Cisco
CallManager. The IP phone of an agent on a call at the time of failure will not register with the backup
Cisco CallManager until after the agent completes the current call. If MGCP gateways are used, then the
calls in progress survive, but further call control functions (hold, retrieve, transfer, conference, and so
on) are not possible.

Cisco IP Contact Center Enterprise Edition SRND


OL-7279-04 3-31
Chapter 3 Design Considerations for High Availability
Understanding Failure Recovery

When the active Cisco CallManager fails, the agent desktops show the agents as being logged out, their
IP phones display a message stating that the phone has gone off-line, and all the IP phone soft keys are
grayed out until the phones fail-over to the backup Cisco CallManager. To continue receiving calls, the
agents must wait for their phones to re-register with a backup Cisco CallManager to have their desktop
functionality restored by the CTI server to the state prior to the Cisco CallManager service failure. Upon
recovery of the primary Cisco CallManager, the agent phones re-register with their original service
because all the Cisco CallManager devices are forced to register with their home Cisco CallManager.
In summary, the Cisco CallManager service is separate from the CTI Manager service, which talks to
the Cisco CallManager PG via JTAPI. The Cisco CallManager service is responsible for registering the
IP phones, and its failure does not affect the Cisco CallManager PGs. From a Cisco CallManager
perspective, the PG does not go off-line because the Cisco CallManager server running CTI Manager
remains operational. Therefore, the PG does not need to fail-over.

IP IVR (CRS)
When a CTI Manager fails, the IP IVR (CRS) JTAPI subsystem shuts down and restarts by trying to
connect to the secondary CTI Manager, if a secondary is specified. In addition, all voice calls at this IP
IVR are dropped. If a secondary CTI Manager is available, the IP IVR logs into that CTI Manager again
and re-registers all the CTI ports associated with the IP IVR JTAPI user. After all the Cisco CallManager
devices are successfully registered with the IP IVR JTAPI user, the server resumes its Voice Response
Unit (VRU) functions and handles new calls. This does not impact the Internet Service Node (ISN)
because it does not depend upon the Cisco CallManager JTAPI service.

ICM
The ICM is a collection of services and processes within those services. The failover and recovery
process for each of these services is unique and requires careful examination to understand the impact
on other parts of the IPCC solution, including other ICM services.
As stated previously, all redundant ICM services discussed in this chapter must be located at the same
site and connected through a private LAN. You can provide the private LAN by installing a second
network interface card (NIC) in each server (sides A and B) and connecting them with a crossover cable.
By doing this, you can eliminate all external network equipment failures.

Cisco CallManager PG and CTI Manager Service


When the active CTI Manager or PG fails, the JTAPI client detects an OUT_OF_SERVICE event and induces
a failover to the standby PG. Because the standby PG is already logged into the standby CTI Manager, it
registers monitors for the phones of logged-in agents as well as the configured dialed numbers and CTI
route points. This initialization takes place at a rate of about 5 devices per second. The agent desktops
show the agents as logged out, and a message displays stating that their routing client or peripheral
(Cisco CallManager) has gone off-line. (This warning can be turned on or off, depending on the
administrator's preference.) All agents lose their desktop functionality until the failure recovery is
complete. The agents can recognize this event because the agent state display on their desktop will show
logged out, and the login button will be the only button available. Any existing calls handled by the agent
should remain alive without any impact to the caller.
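As a rough illustration, the standby PG's initialization time scales with the number of monitored devices at the approximate 5-devices-per-second rate noted above. The following sketch uses illustrative numbers and names; it is not a Cisco sizing tool, and actual rates vary by release and hardware.

```python
# Approximate registration rate from the text; actual rates vary by release
# and hardware. All other numbers here are illustrative.
REGISTRATION_RATE_PER_SEC = 5.0

def failover_init_seconds(logged_in_phones: int, dialed_numbers: int,
                          cti_route_points: int) -> float:
    """Time for the standby PG to register monitors for all devices."""
    devices = logged_in_phones + dialed_numbers + cti_route_points
    return devices / REGISTRATION_RATE_PER_SEC

# Example: 900 logged-in agents plus 50 dialed numbers and 50 CTI route
# points re-register in roughly 200 seconds.
print(failover_init_seconds(900, 50, 50))  # 200.0
```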

Note Agents should not push any buttons during desktop failover because these keystrokes can be buffered
and sent to the CTI server when it completes its failover and restores the agent states.


Once the CTI Manager or PG completes its failover, the agents can return to their previous call state
(talking, ready, not ready, and so forth). At this point, the agents should also be able to release, transfer,
or conference calls if they were on a call at the time of the failure. All the call data that had been collected
and stored via a call data update message is retained on the agent desktops, recovered, and matched with
call context information saved on the PG. However, all agents without active calls are reset to the default
Not Ready state. In addition, the Longest Available Agent (LAA) algorithm resets the timers for all the
agents to zero.

ICM Voice Response Unit PG


When a Voice Response Unit (VRU) PG fails, all the calls currently in queue on that IP IVR (CRS) are
dropped. Calls queued in the Internet Service Node (ISN) are not dropped and will be redirected to a
secondary ISN or number in the dial plan, if available. However, the Service Control Interface (SCI) link
of the failed VRU PG automatically connects to the backup VRU PG so that all new calls can be handled
properly. Upon recovery of the failed VRU PG, the currently running VRU PG continues to operate as
the active VRU PG. Therefore, having redundant VRU PGs adds significant value because it allows an
IP IVR to continue to function as an active IP IVR. Without VRU PG redundancy, a VRU PG failure
would block use of that IP IVR even though the IP IVR is working properly. (See Figure 3-21.)

Figure 3-21 Redundant ICM VRU PGs with Two IP IVR Servers

[Figure: the public network connects over T1 lines to voice gateways 1 and 2, which attach through MDF and IDF switches to the Cisco CallManager cluster (publisher plus subscribers 1 and 2), gatekeepers, a firewall to the corporate LAN, and an IP IVR group (IP IVR 1 and IP IVR 2). Redundant CM PGs (A and B) and redundant VRU PGs (A and B) connect to the redundant ICM central controllers (ICM A and ICM B). The legend distinguishes call control/CTI data/IP messaging, TDM voice lines, and Ethernet lines.]


ICM Call Router and Logger


The ICM Central Controllers or ICM Servers are shown in these diagrams as a single set of redundant
servers. However, depending upon the size of the implementation, they could be deployed with multiple
servers to host the following key software processes:
• ICM Call Router
The ICM Call Router is the "brain" of the system that maintains a constant memory image of the
state of all the agents, calls, and events in the system. It performs the call routing in the system,
executing the user-created ICM Routing Scripts and populating the real-time reporting feeds for the
Administrative Workstation. The Call Router software runs in synchronized execution, with both of
the redundant servers running the same memory image of the current state across the system. They
keep this information updated by passing the state events between the servers on the private LAN
connection.
• ICM Logger and Database Server
The ICM Logger and Database Server maintains the system database for the configuration (agent
IDs, skill groups, call types, and so forth) and scripting (call flow scripts) as well as the historical
data from call processing. The Loggers receive data from their local Call Router process to store in
the system database. Because the Call Routers are synchronized, the Logger data is also
synchronized. In the event that the two Logger databases are out of synchronization, they can be
resynchronized manually by using the ICMDBA application over the private LAN. The Logger also
provides a replication of its historical data to the customer Historical Database Server (HDS) Admin
Workstations over the visible network.
In the event that one of the ICM Call Routers fails, the surviving server will detect the failure
after missing five consecutive heartbeats on the private LAN. The Call Routers generate these
heartbeats every 100 ms, so it will take up to 500 ms to detect this failure. Upon detection of the failure, the
surviving Call Router will contact the Peripheral Gateways in the system to verify the type of failure that
occurred. The loss of heartbeats on the private network could be caused by either of the following
conditions:
• Private network outage — It is possible for the private LAN switch or WAN to be down but for both
of the ICM Call Routers to still be fully operational. In this case, the Peripheral Gateways will still
see both of the ICM Call Routers even though they cannot see each other over the private network.
In this case, the Call Routers will both send a Test Other Side message to the PGs to determine if
the Call Router on the other side is still operational and which side should be active. Based upon the
messages from the PGs, the Call Router that previously had the most active PG connections would
remain active in simplex mode, and the Call Router on the other side would go idle until the private
network is restored.
• Call Router hardware failure — It is possible for the Call Router on the other side to have a physical
hardware failure and be completely out of service. In this case, only the surviving Call Router would
be communicating with the Peripheral Gateways using the Test Other Side message. The Peripheral
Gateways would report that they can no longer see the Call Router on the other side, and the
surviving Call Router would take over the active processing role in simplex mode.
During the Call Router failover processing, any Route Requests sent to the Call Router from a Carrier
Network Interface Controller (NIC) or Peripheral Gateway will be queued until the surviving Call
Router is in active simplex mode. Any calls in progress in the IVR or at an agent will not be impacted.
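The heartbeat detection described above can be sketched as follows. Class and method names are illustrative, not actual ICM internals; the timing constants are the 100 ms interval and five-miss threshold from the text.

```python
HEARTBEAT_INTERVAL_MS = 100
MISSED_BEATS_THRESHOLD = 5

class PrivateLanMonitor:
    """Tracks heartbeats from the peer Call Router on the private LAN."""

    def __init__(self) -> None:
        self.last_beat_ms = 0

    def heartbeat(self, now_ms: int) -> None:
        self.last_beat_ms = now_ms

    def peer_failed(self, now_ms: int) -> bool:
        # Failure is declared once five consecutive 100 ms heartbeat
        # intervals elapse with no heartbeat from the other side,
        # i.e., up to 500 ms after the last heartbeat.
        elapsed = now_ms - self.last_beat_ms
        return elapsed >= MISSED_BEATS_THRESHOLD * HEARTBEAT_INTERVAL_MS

monitor = PrivateLanMonitor()
monitor.heartbeat(0)
print(monitor.peer_failed(400))  # False: only 4 intervals missed
print(monitor.peer_failed(500))  # True: 5 consecutive intervals missed
```

A real implementation would then run the Test Other Side exchange with the PGs to distinguish a private-network outage from a hardware failure, as described above.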
If one of the ICM Logger and Database Servers were to fail, there would be no immediate impact except
that the local Call Router would no longer be able to store data from call processing. The redundant
Logger would continue to accept data from its local Call Router. When the Logger server is restored, the
Logger will contact the redundant Logger to determine how long it had been off-line. If the Logger was
off-line for less than 12 hours, it will automatically request all the transactions it missed from the
redundant Logger while it was off-line. The Loggers maintain a recovery key that tracks the date and
time of each entry recorded in the database, and these keys will be used to restore data to the failed
Logger over the private network.
If the Logger was off-line for more than 12 hours, the system will not automatically resynchronize the
databases. In this case, resynchronization has to be done manually using the ICMDBA application.
Manual resynchronization allows the system administrator to decide when to perform this data transfer
on the private network, perhaps scheduling it during a maintenance window when there would be little
call processing activity in the system.
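The recovery decision described above reduces to a simple rule on the length of the outage; a sketch, with illustrative names:

```python
# Outages shorter than 12 hours are recovered automatically by replaying
# missed transactions from the redundant Logger using recovery keys;
# longer outages require manual resynchronization with ICMDBA.
AUTO_RESYNC_LIMIT_HOURS = 12

def logger_recovery_action(offline_hours: float) -> str:
    if offline_hours < AUTO_RESYNC_LIMIT_HOURS:
        return "automatic replay via recovery keys"
    return "manual resynchronization with ICMDBA"

print(logger_recovery_action(3))   # automatic replay via recovery keys
print(logger_recovery_action(36))  # manual resynchronization with ICMDBA
```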
The Logger replication process, which sends data from the Logger database to the HDS Admin
Workstations, will also automatically replicate each new row written to the Logger database when the
resynchronization takes place.
There is no impact to call processing during a Logger failure; however, the HDS data that is replicated
from that Logger would stop until the Logger can be restored.
Additionally, if the Outbound Option is used, the Campaign Manager software is loaded on only one of
the Logger platforms (must be Logger A). If that platform is out of service, any outbound calling will
stop until the Logger can be restored to operational status.

Admin Workstation Real-Time Distributor (RTD)


The Administrative Workstation (AW) Real-Time Distributor (RTD) provides the user interface to the
system for making configuration and scripting changes. It also hosts the web-based reporting tool
WebView and the Internet Script Editor.
These servers do not support redundant or duplex operation in the way that the other ICM system
components do. However, you can deploy multiple Administrative Workstation servers to provide redundancy for the
IPCC. (See Figure 3-22.)


Figure 3-22 Redundant ICM Distributors and AW Servers

[Figure: the same topology as Figure 3-21, with redundant Admin Workstations (AW A and AW B) added at IPCC_Site1 alongside the CM PGs and VRU PGs, all connecting to the redundant ICM central controllers (ICM A and ICM B). A WebView reporting client reaches the Admin Workstations over the corporate LAN.]
Administrative Workstation Real-Time Distributors are clients of the ICM Call Router real-time feed
that provides real-time information about the entire IPCC across the enterprise. Real-Time Distributors
at the same site can be set up as part of an Admin Site that includes a designated primary real-time
distributor and one or more secondary real-time distributors. Another option is to add Client Admin
Workstations which do not have their own local SQL databases and are homed to a Real-Time
Distributor for their SQL database and real-time feed.
The Admin Site reduces the number of real-time feed clients the ICM Call Router has to service at a
particular site. For remote sites, this is important because it can reduce the required bandwidth to support
remote Admin Workstations across a WAN connection.
When using an Admin Site, the primary real-time distributor is the one that will register with the ICM
Call Router for the real-time feed, and the other real-time distributors within that Admin Site register
with the primary real-time distributor for the real-time feed. If the primary real-time distributor is down
or does not accept the registration from the secondary real-time distributors, they will register with the
ICM Call Router for the real-time feed. Client AWs that cannot register with the primary or secondary
real-time distributors will not be able to perform any Admin Workstation tasks until the distributors are
restored.
Alternatively, each real-time distributor could be deployed in its own Admin Site regardless of the
physical site of the device. This will create more overhead for the ICM Call Router to maintain multiple
real-time feed clients; however, it will prevent a failure of the primary real-time distributor from taking
down the secondary distributors at the site.
Additionally, if the Admin Workstation is being used to host the ConAPI interface for the Multi-Channel
Options (Cisco Email Manager and Cisco Collaboration Server), any configuration changes made to the ICM,
Cisco Email Manager, or Cisco Collaboration Server systems will not be passed over the ConAPI interface
until the Admin Workstation is restored.

CTI Server
The CTI Server monitors the PIM data traffic for specific CTI messages (such as "call ringing" or "off
hook" events) and makes them available to CTI clients such as the CTI OS Server or Cisco Agent
Desktop Enterprise Server. It also processes third-party call control messages (such as "make call" or
"answer call") from the CTI clients and sends these messages via the PIM interface of the PG to Cisco
CallManager to process the event on behalf of the agent desktop.
CTI Server can be deployed on redundant (duplex) servers or can be co-resident on the PG servers. (See
Figure 3-23.) It does not, however, maintain agent state in the event of a failure. Upon failure of the CTI
Server, the redundant CTI server becomes active and begins processing call events. CTI OS Server is a
client of the CTI Server and is designed to monitor both CTI Servers in a duplex environment and
maintain the agent state during failover processing. CTI OS agents will see their desktop buttons
gray-out during the failover to prevent them from attempting to perform tasks while the CTI Server is
down. The buttons will be restored as soon as the redundant CTI Server is restored, and the agent does
not have to log on again to the desktop application.
The CTI Server is also critical to the operation of the Multi-Channel Options (Cisco Email Manager and
Cisco Collaboration Server) as well as the Outbound Option. If the CTI Server is down on both sides of the
duplex Agent Peripheral Gateway pair, none of the agents for that Agent Peripheral Gateway will be able
to log into these applications.


Figure 3-23 Redundant CTI Servers with No Cisco Agent Desktop Server Installed

[Figure: the same topology, with redundant standalone CTI Servers (CTI server A and CTI server B) deployed alongside the CM PGs, VRU PGs, and Admin Workstations (AW A and AW B), all connecting to the redundant ICM central controllers. A WebView reporting client reaches the system over the corporate LAN.]

CTI OS Considerations
CTI OS acts as a client to the CTI Server and provides agent and supervisor desktop functionality for IPCC.
It manages agent state and functionality during a failover of CTI Server, and it can be deployed as
redundant CTI OS Servers. The CTI OS Agent Desktop load-balances the agents between the redundant
servers automatically, and agents sitting next to each other may in fact be registered to two different CTI
OS Servers.
The CTI Object Server (CTI OS) consists of two services, the CTI OS service and the CTI driver. If
either of these two fails, then the active CTI OS fails-over to its peer server. Therefore, it is important
to keep both of these services active at all times.


Cisco Agent Desktop Considerations


Cisco Agent Desktop is a client of CTI OS, which provides for automatic failover and redundancy for
the Cisco Agent Desktop Server. If the Cisco CallManager Peripheral Gateway or CTI Server (CG)
fail-over, CTI OS maintains the agent state and information during the failover to prevent agents from
being logged out by the system because of the failover.
The Cisco Agent Desktop Servers (Enterprise Server, Chat, RASCAL, and so forth) can also be deployed
redundantly to allow for failover of the core Cisco Agent Desktop components. Cisco Agent Desktop
software is aware of the redundant Cisco Agent Desktop Servers and will automatically fail-over in the
event of a Cisco Agent Desktop Server process or hardware failure.

Other Considerations
An IPCC failover can affect other parts of the solution. Although IPCC may stay up and running, some
data could be lost during its failover, or other products that depend on IPCC to function properly might
not be able to handle an IPCC failover. This section examines what happens to other critical areas in the
IPCC solution during and after failover.

Reporting
The IPCC reporting feature uses real-time, five-minute and half-hour intervals to build its reporting
database. Therefore, at the end of each five-minute and half-hour interval, each Peripheral Gateway will
gather the data it has kept locally and send it to the Call Routers. The Call Routers process the data and
send it to their local Logger and Database Servers for historical data storage. If the deployment has the
Historical Data Server (HDS) option, that data is then replicated to the HDS server from the Logger as
it is written to the Logger database.
The Peripheral Gateways provide buffering (in memory and on disk) of the five-minute and half-hour
data collected by the system to handle network connectivity failures or slow network response as well
as automatic retransmission of data when the network service is restored. However, physical failure of
both Peripheral Gateways in a redundant pair can result in loss of the half-hour or five-minute data that
has not been transmitted to the Central Controller. Cisco recommends the use of redundant Peripheral
Gateways to reduce the chance of losing both physical hardware devices and their associated data during
an outage window.
When agents log out, all their reporting statistics stop. The next time the agents log in, their real-time
statistics start from zero. Typically, ICM failover does not force the agents to log out. However, it does
reset their agent statistics when the failover is complete, although their agent desktop functionality is
restored to its pre-failover state.
For further information, refer to the Cisco IP Contact Center Reporting Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm50doc/icm5rept/index.htm

Chapter 4
Sizing Call Center Resources

Central to designing an IP Contact Center (or any call center) is the proper sizing of its resources. This
chapter discusses the tools and methodologies needed to determine the required number of call center
agents (based on customer requirements such as call volume and service level desired), the number of
IP IVR ports required for various call scenarios (such as call treatment, queuing, and self-service
applications), and the number of voice gateway ports required to carry the traffic volume coming from
the PSTN or other TDM source such as PBXs and TDM IVRs.
The methodologies and tools presented in this chapter are based on traffic engineering principles, using
the Erlang-B and Erlang-C models applied to the various resources in an IPCC deployment. Examples are
provided for an IPCC deployment to illustrate how resources can be impacted under various call
scenarios such as call treatment in the IP IVR and agent wrap-up time. These tools and methodologies
are intended as building blocks for sizing call center resources and for any telephony applications in
general.

Call Center Basic Traffic Terminology


It is important to be familiar with, and to be consistent in the use of, common call center terminology.
Improper use of these terms in the tools used to size call center resources can lead to inaccurate sizing
results.
The terms listed in this section are the most common terms used in the industry for sizing call center
resources. For additional call center industry terminology, refer to the definitions at
http://www.thecallcenterschool.com/glossary.html
(There are also other resources available on the internet for defining call center terms.)
In addition to the terms listed in this section, the section on the Cisco IPC Resource Calculator, page 4-6,
defines the specific terms used for the input and output of the Cisco call center sizing tool, IPC Resource
Calculator.
Also, for more details on various call center terms and concepts discussed in this document, refer to the
IPCC product documentation available online at
http://www.cisco.com

Busy Hour or Busy Interval


A busy interval could be one hour or less, such as 30 or 15 minutes, if sizing is desired for smaller
intervals. The busy interval is the period of the day during which the most traffic is offered.
The busy hour or interval varies over days, weeks, and months. There are weekly busy hours and
seasonal busy hours. There is one busiest hour in the year. Common practice is to design for the average
busy hour (the average of the 10 busiest hours in one year). This average is not always applied, however,
when staffing is required to accommodate a marketing campaign or a seasonal busy hour such as an
annual holiday peak. In a call center, staffing for the maximum number of agents is determined using peak
periods, but staffing requirements for the rest of the day are calculated separately for each period
(usually every hour) for proper scheduling of agents to answer calls versus scheduling agents for offline
activities such as training or coaching. For trunks or IVR ports (in most cases), it is not practical to add
or remove trunks or ports daily, so these resources are sized for the peak periods. In some retail
environments, additional trunks could be added during the peak season and disconnected afterwards.

Busy Hour/Interval Call Attempts (BHCA)


The BHCA is the total number of calls during the peak traffic hour (or interval) that are attempted or
received in the call center. For the sake of simplicity, we will assume that all calls offered to the voice
gateway are received and serviced by the call center resources (agents and IP IVR ports). Calls normally
originate from the PSTN, although calls to a call center can also be generated internally, such as by a
help-desk application.

Servers
Servers are resources that handle traffic loads or calls. There are many types of servers in a call center,
such as PSTN trunks and gateway ports, agents, voicemail ports, and IVR ports.

Talk Time
Talk time is the amount of time an agent spends talking to a caller, including the time an agent places a caller
on hold and the time spent during consultative conferences.

Wrap-Up Time (After-Call Work Time)


After the call is terminated (caller finishes talking to an agent and hangs up), the wrap-up time is the
time it takes an agent to "wrap up" the call by performing such tasks as updating a database, recording
notes from the call, or any other activity performed until an agent becomes available to answer another
call. The IPCC term for this concept is “after-call work time.”

Average Handle Time (AHT)


AHT is the mean (or average) call duration during a specified time period. It is a commonly used term
that refers to the sum of several types of "handle time," such as call treatment time, talk time, and
queuing time. In its most common definition, AHT is the sum of agent talk time and agent wrap-up time.

Erlang
Erlang is a measurement of traffic load during the busy hour. One Erlang represents 3600 seconds
(60 minutes, or one hour) of calls on the same circuit, trunk, or port; that is, one circuit busy for one
hour regardless of the number of calls or the average call duration. If a contact center receives 30
calls in the busy hour and each call lasts for six minutes, this equates to 180 minutes of traffic in the busy
hour, or 3 Erlangs (180 min/60 min). If the contact center receives 100 calls averaging 36 seconds each
in the busy hour, then total traffic received is 3600 seconds, or 1 Erlang (3600 sec/3600 sec).
Use the following formula to calculate the Erlang value:
Traffic in Erlangs = (Number of calls in the busy hour ∗ AHT in sec) / 3600 sec
The term is named after the Danish telephone engineer A. K. Erlang, the originator of queuing theory
used in traffic engineering.
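The formula above can be expressed directly, using the two worked examples from the text:

```python
def traffic_erlangs(calls_in_busy_hour: int, aht_seconds: float) -> float:
    """Traffic in Erlangs = (calls in the busy hour * AHT in sec) / 3600 sec."""
    return calls_in_busy_hour * aht_seconds / 3600.0

print(traffic_erlangs(30, 6 * 60))  # 30 six-minute calls -> 3.0 Erlangs
print(traffic_erlangs(100, 36))     # 100 thirty-six-second calls -> 1.0 Erlang
```

The same calculation normalized to one hour gives the Busy Hour Traffic (BHT) used in the next section.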

Busy Hour Traffic (BHT) in Erlangs


BHT is the traffic load during the busy hour and is represented as the product of the BHCA and the AHT
normalized to one hour:
BHT = (BHCA ∗ AHT seconds) / 3600, or
BHT = (BHCA ∗ AHT minutes) / 60
For example, if the call center receives 600 calls in the busy hour, averaging 2 minutes each, then the
busy hour traffic load is (600 * 2/60) = 20 Erlangs.
BHT is typically used in Erlang-B models to calculate resources such as PSTN trunks or self-service
IVR ports. Some calculators perform this calculation transparently using the BHCA and AHT for ease
of use and convenience.

Grade of Service (Percent Blockage)


This measurement is the probability that a resource or server is busy during the busy hour. All resources
might be occupied when a user places a call. In that case, the call is lost or blocked. This blockage
typically applies to resources such as voice gateway ports, IVR ports, PBX lines, and trunks. In the case
of a voice gateway, grade of service is the percentage of calls that are blocked or that receive busy tone
(no trunks available) out of the total BHCA. For example, a grade of service of 0.01 means that 1% of
calls in the busy hour would be blocked. A 1% blockage is a typical value to use for PSTN trunks, but
different applications might require different grades of service.
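The grade of service for a trunk group offered a given traffic load is conventionally computed with the Erlang-B formula. The following is a minimal sketch using the standard iterative recursion; it is not a Cisco calculator.

```python
def erlang_b(traffic_erlangs: float, trunks: int) -> float:
    """Blocking probability (grade of service) for a trunk group.

    Standard iterative Erlang-B recursion:
    B(0) = 1;  B(n) = A * B(n-1) / (n + A * B(n-1))
    """
    blocking = 1.0
    for n in range(1, trunks + 1):
        blocking = (traffic_erlangs * blocking) / (n + traffic_erlangs * blocking)
    return blocking

# Example: 20 Erlangs offered to 30 trunks gives slightly under 1% blocking,
# consistent with the typical 0.01 grade of service for PSTN trunks.
print(erlang_b(20.0, 30))
```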

Blocked Calls
A blocked call is a call that is not serviced immediately. Callers are considered blocked if they are
rerouted to another route or trunk group, delayed and put in a queue, or if they hear a tone (such as a
busy tone) or announcement. The nature of the blocked call will determine the model used for sizing the
particular resources.

Service Level
This term is a standard in the contact center industry, and it refers to the percentage of the offered call
volume (received from the voice gateway and other sources) that will be answered within x seconds,
where x is a variable. A typical value for a sales call center is 90% of all calls answered in less than
10 seconds (some calls will be delayed in a queue). A support-oriented call center might have a different
service level goal, such as 80% of all calls answered within 30 seconds in the busy hour. Your contact
center’s service level goal drives the number of agents needed, the percentage of calls that will be
queued, the average time calls will spend in queue, and the number of PSTN trunks and IP IVR ports
needed. For an additional definition of service level within IPCC products, refer to the IPCC glossary
available online at
http://www.cisco.com
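The relationship between agent staffing and service level is conventionally modeled with Erlang-C. The following sketch uses the standard textbook formulas (it is not the Cisco IPC Resource Calculator), and it applies only when the number of agents exceeds the offered traffic; all example numbers are illustrative.

```python
import math

def erlang_b(traffic: float, servers: int) -> float:
    # Standard Erlang-B recursion (see the Grade of Service discussion).
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

def erlang_c_wait_probability(traffic: float, agents: int) -> float:
    """Probability that a call must queue (Erlang-C), derived from Erlang-B."""
    b = erlang_b(traffic, agents)
    return agents * b / (agents - traffic * (1.0 - b))

def service_level(traffic: float, agents: int, aht_sec: float,
                  answer_target_sec: float) -> float:
    """Fraction of calls answered within the target time (requires agents > traffic)."""
    pw = erlang_c_wait_probability(traffic, agents)
    return 1.0 - pw * math.exp(-(agents - traffic) * answer_target_sec / aht_sec)

# Example: 20 Erlangs of offered traffic, 180-second AHT, 24 agents ->
# fraction of calls answered within 10 seconds.
print(service_level(20.0, 24, 180.0, 10.0))
```

Adding agents raises the service level and lowers the queuing probability, which is why the service level goal drives staffing, trunk, and IVR port counts as described above.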

Queuing
When all agents are busy with other callers or are unavailable (in after-call wrap-up mode), subsequent
callers must be placed in a queue until an agent becomes available. The percentage of calls queued and
the average time spent in the queue are determined by the service level desired and by agent staffing.
Cisco's IPCC solution uses an IP IVR to place callers in queue and play announcements. An IVR can
also be used to handle all calls initially (call treatment, prompt and collect – such as DTMF input or
account numbers – or any other information gathering) and for self-service applications where the caller
is serviced without needing to talk to an agent (such as obtaining a bank account balance, airline
arrival/departure times, and so forth). Each of these scenarios requires a different number of IP IVR
ports to handle the different applications because each will have a different average handle time and

Cisco IP Contact Center Enterprise Edition SRND


OL-7279-04 4-3
Chapter 4 Sizing Call Center Resources
Call Center Resources and the Call Timeline

possibly a different call load. The number of trunks or gateway ports needed for each of these
applications will also differ accordingly. (See the section on Sizing Call Center Agents, IVR Ports, and
Trunks, page 4-11, for examples on how to calculate the number of trunks and gateway ports needed.)

Call Center Resources and the Call Timeline


The focus of this chapter is on sizing the following main resources in a call center:
• Agents
• Gateway ports (PSTN trunks)
• IP IVR ports.
It is helpful first to understand the anatomy of an inbound call center call as it relates to the various
resources used and the holding time for each resource. Figure 4-1 shows the main resources used and the
occupancy (hold/handle time) for each of these resources.

Figure 4-1 Inbound Call Timeline

[Figure 4-1 shows the sequence of events for an inbound call: network ring, IVR answer, agent answer,
caller hang-up, and agent ready. The corresponding intervals are the network delay, the treatment/queue
delay, the agent talk time, and the wrap-up time. The trunk is occupied from ring until hang-up, the IVR
from IVR answer until agent answer, and the agent from answer through the end of wrap-up.]

Ring delay time (network ring) should be included if calls are not answered immediately. This delay
could be a few seconds on average, and it should be added to the trunk average handle time.
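To make the timeline concrete, trunk holding time is simply the sum of the phases during which the trunk is occupied, and the offered trunk load follows from it. The following Python sketch uses hypothetical per-call averages (the numbers and variable names are ours, not from this guide):

```python
# Hypothetical per-call averages, in seconds, for the phases in Figure 4-1
# during which the PSTN trunk is occupied.
ring_delay = 5            # network ring before the call is answered
treatment_and_queue = 20  # IVR treatment plus any queue delay
talk_time = 150           # agent talk time (trunk is released at hang-up)

trunk_aht = ring_delay + treatment_and_queue + talk_time   # 175 s per call
bhca = 2000                                                # calls in the busy hour
trunk_load_erlangs = bhca * trunk_aht / 3600.0             # offered trunk load
```

Note that agent wrap-up time does not appear in the sum: the trunk is released when the caller hangs up.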

Erlang Calculators as Design Tools


Many traffic models are available for sizing telephony systems and resources. Choosing the right model
depends on three main factors:
• Traffic source characteristics (finite or infinite)
• How lost calls are handled (cleared, held, delayed)
• Call arrival patterns (random, smooth, peaked).
For purposes of this document, two traffic models are commonly used in sizing call center resources:
Erlang-B and Erlang-C. Many other resources on the internet give detailed explanations of the various
models (search using "traffic engineering").
Erlang calculators are designed to help answer the following questions:
• How many PSTN trunks do I need?
• How many agents do I need?
• How many IVR ports do I need?


Before you can answer these basic questions, you must have the following minimum set of information
that is used as input to these calculators:
• The busy hour call attempts (BHCA)
• Average handle time (AHT) for each of the resources
• Service level (percentage of calls that are answered within x seconds)
• Grade of service, or percent blockage, desired for PSTN trunks and IP IVR ports
The remaining sections of this chapter help explain the differences between the Erlang-B and Erlang-C
traffic models in simple terms, and they list which model to use for sizing the specific call center
resource (agents, gateway ports, and IP IVR ports). There are various web sites that provide call center
sizing tools free of charge (some offer feature-rich versions for purchase), but they all use the two basic
traffic models, Erlang-B and Erlang-C. Cisco does not endorse any particular vendor product; it is up to
the customer to choose which tool suits their needs. The input required for any of the tools, and the
methodology used, are the same regardless of the tool itself.
Cisco has chosen to develop its own telephony sizing tool, called Cisco IPC Resource Calculator. The
first version discussed here is designed to size call center resources. Basic examples are included later
in this chapter to show how to use the Cisco IPC Resource Calculator. Additional examples are also
included to show how to use the tool when some, but not all, of the input fields are known or available.
Before discussing the Cisco IPC Resource Calculator, the next two sections present a brief description
of the generic Erlang models and the input/output of such tools (available on the internet) to help the
reader who does not have access to the Cisco IPC Resource Calculator or who chooses to use other
non-Cisco Erlang tools.

Erlang-C
The Erlang-C model is used to size agents in call centers that queue calls before presenting them to
agents. This model assumes:
• Call arrival is random.
• If all agents are busy, new calls will be queued and not blocked.
The input parameters required for this model are:
• The number of calls in the busy hour (BHCA) to be answered by agents
• The average talk time and wrap-up time
• The delay or service level desired, expressed as the percentage of calls answered within a specified
number of seconds
The output of the Erlang-C model lists the number of agents required, the percentage of calls delayed or
queued when no agents are available, and the average queue time for these calls.
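The Erlang-C computation itself is compact. The following Python sketch (illustrative only; the function names are ours and are not part of any Cisco tool) derives the wait probability from the Erlang-B recurrence and searches for the smallest agent count that meets a service level goal:

```python
import math

def erlang_b(traffic: float, n: int) -> float:
    # Blocking probability, computed with the standard stable recurrence.
    b = 1.0
    for k in range(1, n + 1):
        b = traffic * b / (k + traffic * b)
    return b

def erlang_c(traffic: float, agents: int) -> float:
    # Probability that an arriving call must queue (all agents busy).
    b = erlang_b(traffic, agents)
    return agents * b / (agents - traffic * (1.0 - b))

def agents_for_slg(bhca: int, aht_sec: float, answered_pct: float,
                   within_sec: float) -> int:
    # Smallest agent count whose service level meets the goal,
    # e.g. 90% of calls answered within 30 seconds.
    traffic = bhca * aht_sec / 3600.0      # offered load in Erlangs
    n = int(traffic) + 1                   # agents must exceed the offered load
    while True:
        pw = erlang_c(traffic, n)
        sl = 1.0 - pw * math.exp(-(n - traffic) * within_sec / aht_sec)
        if sl >= answered_pct:
            return n
        n += 1
```

For the basic example later in this chapter (1980 answered calls, 150-second talk time, 90% within 30 seconds), this search lands at or within one agent of the 90 that the calculator reports.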

Erlang-B
The Erlang-B model is used to size PSTN trunks, gateway ports, or IP IVR ports. It assumes the
following:
• Call arrival is random.
• If all trunks/ports are occupied, new calls are lost or blocked (receive busy tone) and not queued.


The input and output for the Erlang-B model consist of the following three factors. If you know any two
of these factors, the model calculates the third.
• Busy Hour Traffic (BHT), or the number of hours of call traffic (in Erlangs) during the busiest hour
of operation. BHT is the product of the number of calls in the busy hour (BHCA) and the average
handle time (AHT).
• Grade of Service, or the percentage of calls that are blocked because not enough ports are available
• Ports (lines), or the number of IP IVR or gateway ports
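The Erlang-B recurrence is equally simple to script. The sketch below (our own helper names, not from any Cisco tool) computes the blocking probability for a given BHT and port count, and searches for the smallest port count that meets a grade of service; the last line shows the BHCA × AHT calculation of BHT:

```python
def erlang_b(bht_erlangs: float, ports: int) -> float:
    # Probability a call is blocked (lost), via the stable recurrence
    # B(A, 0) = 1;  B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1)).
    b = 1.0
    for n in range(1, ports + 1):
        b = bht_erlangs * b / (n + bht_erlangs * b)
    return b

def ports_for_gos(bht_erlangs: float, gos: float) -> int:
    # Smallest number of ports whose blocking does not exceed the
    # grade of service (e.g. 0.01 for 1% blockage).
    n, b = 0, 1.0
    while b > gos:
        n += 1
        b = bht_erlangs * b / (n + bht_erlangs * b)
    return n

# BHT is BHCA x AHT: 2000 calls in the busy hour averaging 180 s each is
# 2000 * 180 / 3600 = 100 Erlangs of offered traffic.
bht = 2000 * 180 / 3600.0
```

Adding ports always lowers blocking, so the search terminates; for 100 Erlangs at 1% blockage, the result is somewhat above 100 ports.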

Cisco IPC Resource Calculator


Figure 4-2 is a snapshot of the current Cisco IP Communications (IPC) Resource Calculator. The
sections that follow define each of the input and output fields, how to use them, and how to interpret them.
Cisco is continually enhancing the IPC Resource Calculators, and the latest versions are available at
http://tools.cisco.com/partner/ipccal/index.htm
The Cisco IPC Resource Calculators are accessible to Cisco internal employees and Cisco partners.
These tools are based on industry Erlang traffic models. Other Erlang traffic calculators available on the
Web can also be used for sizing various contact center resources.


Figure 4-2 Cisco IPC Resource Calculator

IPC Resource Calculator Input Fields (What You Must Provide)


When using the Cisco IPC Resource Calculator, you must provide the following input data:

Project Identification
This field is a description to identify the project or customer name and the specific scenario for this
calculation. It helps to distinguish the different scenarios run (exported and saved) for a project or a
customer proposal.

Calls Per Interval (BHCA)


The number of calls attempted during the busiest interval, or busy hour call attempts (BHCA). You can
choose the interval to be 60 minutes (busy hour), 30 minutes (half-hour interval), or 15 minutes. This
choice of interval length allows the flexibility to calculate staffing requirements more accurately for the
busiest periods within one hour, if desired. It can also be used to calculate staffing requirements for any
interval of the day (non-busy hour staffing).


Service Level Goal (SLG)


The percentage of calls to be answered within a specified number of seconds (for example, 90% within
30 seconds).

Average Call Talk Time


The average number of seconds a caller remains on the line after an agent answers the call. This value
includes time talking and time placed on hold by the agent, until the call is terminated. It does not include
time spent in the IVR for call treatment or time in queue.

Average After-Call Work Time


The average agent wrap-up time in seconds after the caller hangs up. This entry assumes that agents are
available to answer calls when they are not in wrap-up mode. If seated agents enter into another mode
(other than the wrap-up mode) where they are unavailable to answer calls, then this additional time
should be included (averaged for all calls) in the after-call work time.

Average Call Treatment Time (IVR)


The average time in seconds a call spends in the IVR before an attempt is made to send the call to an
agent. This time includes greetings and announcements as well as time to collect and enter digits (known
as prompt and collect, or IVR menuing) to route the call to an agent. It does not include queuing time if
no agents are available. (This queuing time is calculated in the output section of the calculator.) The call
treatment time should not include calls arriving at the IVR for self-service with no intention to route
them to agents. Self-service IVR applications should be sized separately using an Erlang-B calculator.

Wait Before Abandon (Tolerance)


This field is the amount of time in seconds that a contact center manager expects callers to wait in queue
(tolerance) for an agent to become available before they abandon the queue (hang up). This value has no
effect on any of the output fields except the abandon rate (number of calls abandoned).

Blockage % (PSTN Trunks)


This field is also known as Grade of Service, or the percentage of calls that will receive busy tone (no
trunks available on the gateway) during the busy hour or interval. For example, 1% blockage means that
99% of all calls attempted from the PSTN during the interval will have a trunk port available on the
gateway to reach the IVR or an agent.

Check to Manually Enter Agents


After checking this box, you can manually enter the number of agents. If the number of agents is too far
from the recommended number, the calculator shows an error message. The error appears any time the
percentage of calls queued reaches 0% or 100%.

IPC Resource Calculator Output Fields (What You Want to Calculate)


The IPC Resource Calculator calculates the following output values based on your input data:

Recommended Agents
The number of seated agents (calculated using Erlang-C) required to staff the call center during the busy
hour or busy interval.


Calls Completed (BHCC)


This field is the busy hour call completions (BHCC), or the number of expected calls completed during
the busy hour. It is the number of calls attempted minus the number of calls blocked.

Calls Answered Within Target SLG


The percentage of calls that are answered within the set target time entered in the Service Level Goal
(SLG) field. This value is the calculated percentage of calls answered immediately if agents are
available. It includes a portion of calls queued if no agents are available within the SLG (for example,
less than 30 seconds). It does not include all queued calls because some calls will be queued beyond the
SLG target.

Calls Answered Beyond SLG


The percentage of calls answered beyond the set target time entered in the Service Level Goal (SLG)
field. For example, if the SLG is 90% of calls answered within 30 seconds, the calls answered beyond
SLG would be 10%. This value includes a portion of all calls queued, but only the portion queued beyond
the SLG target (for example, more than 30 seconds).

Queued Calls
The percentage of all calls queued in the IVR during the busy hour or interval. This value includes calls
queued and then answered within the Service Level Goal as well as calls queued beyond the SLG. For
example, if the SLG is 90% of calls answered within 30 seconds and queued calls are 25%, then there
are 10% of calls queued beyond 30 seconds, and the remaining 15% of calls are queued and answered
within 30 seconds (the SLG).

Calls Answered Immediately


The percentage of calls answered immediately by an agent after they receive treatment (if implemented)
in the IVR. These calls do not have to wait in queue for an agent. As in the preceding example, if 25%
of the calls are queued (including those beyond the target of 30 seconds), then 75% of the calls would
be answered immediately.

Average Queue Time (AQT)


The average amount of time in seconds that calls will spend in queue waiting for an agent to become
available during the interval. This value does not include any call treatment in the IVR prior to
attempting to send the call to an agent.

Average Speed of Answer (ASA)


The average speed of answer for all calls during the interval, including queued calls and calls answered
immediately.

Average Call Duration


The total time in seconds that a call remained in the system. This value is the sum of the average talk
time, the average IVR delay (call treatment), and the average speed of answer.

Agents Utilization
The percentage of agent time engaged in handling call traffic versus idle time. After-call work time is
not included in this calculation.


Calls Exceeding Abandon Tolerance


The percentage (and number) of calls that will abandon their attempt during the interval, based on the
expected tolerance time specified in the input. If this output is zero, all queued calls were answered by
an agent in less than the specified abandon time (that is, the longest queued call time is less than the
abandon time).

PSTN Trunk Utilization


This value is the occupancy rate of the PSTN trunks, calculated by dividing the offered load (in Erlangs)
by the number of trunks.

Voice Trunks Required


The number of PSTN gateway trunks required during the busy interval, based on the number of calls
answered by the voice gateway and the average hold time of a trunk during the busy interval. This value
includes average time of call treatment in the IVR, queuing in the IVR (if no agents are available), and
agent talk time. This calculated number assumes all trunks are grouped in one large group to handle the
specified busy hour (or interval) calls. If several smaller trunk groups are used instead, additional
trunks would be required; smaller groups are therefore less efficient.

IVR Ports Required for Queuing


The number of IVR ports required to hold calls in queue while the caller waits for an agent to become
available. This value is based on an Erlang-B calculation using the number of queued calls and the
average queue time for those calls.

IVR Ports Required for Call Treatment


The number of IVR ports required for calls being treated in the IVR. This value is based on an Erlang-B
calculation using the number of calls answered and the average call treatment time (average IVR delay).

Total IVR Ports Requirement


This value is the total number of IVR ports required if the system is configured with separate port groups
for queuing and treatment. Pooling the ports for treatment and queuing results in fewer ports for the same
amount of traffic than if the traffic is split between two separate IVR port pools or groups. However,
Cisco recommends that you configure the number of ports required for queuing in a separate group, with
the ability to overflow to other groups if available.

Submit
After entering data in all required input fields, click on the Submit button to compute the output values.

Export
Click on the Export button to save the calculator input and output in a comma-separated values (CSV)
format to a location of your choice on your hard drive. This CSV file could be imported into a Microsoft
Excel spreadsheet and formatted for insertion into bid proposals or for presentation to clients or
customers. Multiple scenarios can be saved by changing one or more of the input fields and combining
all outputs in one Excel spreadsheet, adding appropriate column titles to reflect the changes in the input.
This format makes it easy to compare the results of multiple scenarios.


Sizing Call Center Agents, IVR Ports, and Trunks


The call center examples in this section illustrate how to use the IPC Resource Calculator in various
scenarios, along with the impact on required resources. The first example is a basic call flow, where all
incoming calls to the call center are presented to the voice gateway from the PSTN. Calls are routed
directly to an agent, if available; otherwise, calls are queued until an agent becomes available.

Basic Call Center Example


This example forms the basis for all subsequent examples in this chapter. After a brief explanation of
the output results highlighting the three resources (agents, IVR ports, and PSTN trunks) in this basic
example, subsequent examples build upon it by adding different scenarios, such as call treatment and
agent wrap-up time, to demonstrate how the various resources are impacted by different call scenarios.
This basic example uses the following input data:
• Total BHCA (60-minute interval) into the call center from the PSTN to the voice gateway = 2,000.
• Desired service level goal (SLG) of 90% of calls answered within 30 seconds.
• Average call talk time (agent talk time) = 150 seconds (2 minutes and 30 seconds).
• No after-call work time (agent wrap-up time = 0 seconds).
• No call treatment (prompt and collect) is implemented initially. All calls will be routed to available
agents or will be queued until an agent becomes available.
• Wait time before abandon (tolerance) = 150 seconds (2 minutes and 30 seconds).
• Desired grade of service (percent blockage) for the PSTN trunks on the voice gateway = 1%.
After you enter the above data in the input fields, pressing the Submit button at the bottom of the
calculator produces the output shown in Figure 4-3.


Figure 4-3 Basic Example

Notice that the output shows 1980 calls completed by the voice gateway out of the total of 2000 calls
attempted from the PSTN. Because we requested provisioning for 1% blockage from our PSTN provider,
20 of the 2000 calls (1%) are blocked and receive busy tone.

Agents
The result of 90 seated agents is determined by using the Erlang-C function embedded in the IPC
Resource Calculator; calls queue for this resource (agents).
Notice that, with 90 agents, the calculated service level is 93% of calls answered within 30 seconds,
which exceeds the desired 90% requested in the input section. Had there been one less agent (89 instead
of 90), then the 90% SLG would not have been met.
This result also means that 7% of the calls will be answered beyond the 30 second SLG. In addition,
there will be 31.7% of calls queued; some will queue less than 30 seconds and others longer. The average
queue time for queued calls is 20 seconds.
If 31.7% of the calls will queue, then 68.3% of the calls will be answered immediately without delay in
a queue, as shown in the output in Figure 4-3.
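Assuming the calculator uses the standard Erlang-C relationships, the headline outputs of this basic example can be reproduced with a short sketch (variable and function names are ours):

```python
import math

def erlang_b(a, n):
    # Erlang-B blocking probability via the stable recurrence.
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(a, n):
    # Probability that an arriving call must queue.
    b = erlang_b(a, n)
    return n * b / (n - a * (1.0 - b))

answered, talk, agents = 1980, 150.0, 90
load = answered * talk / 3600.0    # 82.5 Erlangs offered to the agents
p_wait = erlang_c(load, agents)    # fraction of calls queued (about 31.7%)
aqt = talk / (agents - load)       # avg queue time of queued calls: 20 s
slg = 1.0 - p_wait * math.exp(-(agents - load) * 30.0 / talk)  # ~93% in 30 s
```

The average-queue-time relationship AQT = AHT / (N − A) gives 150 / (90 − 82.5) = 20 seconds, exactly matching the calculator's output.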

IVR Ports Required for Queuing


In this basic example, the IP IVR is being used as a queue manager to queue calls when no agents are
available. The calculator shows the percent and number of calls queued (31.7%, or 627 calls) and the
average queue time (20 seconds).


These two outputs from the Erlang-C calculation are then used as inputs for the embedded Erlang-B
function in the calculator to compute the number of IVR ports required for queuing and the
corresponding PSTN trunks required. The Erlang-B function is used here because a call would receive
a busy signal (be lost) if no trunks or IVR ports were available to answer or service the call.
The required number of IP IVR ports for queuing is derived from the following traffic load in the
calculator output:
• The traffic load presented by the calls that queue (627 calls) with an average queue time of
20 seconds when no agents are available to answer the call immediately. This load requires
10 IVR ports for queuing.

PSTN Trunks (Voice Gateway Ports)


The following traffic loads impact the required number of trunks:
• The load presented by the incoming traffic (1980 answered calls), with an average agent talk time
for all calls of 150 seconds.
• The traffic load presented by the calls that queue (31.7% queued), with an average queue time of 20
seconds when no agent is available to answer the call immediately.
The total number of trunks required to carry this combined traffic load is 103.
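Applying Erlang-B to these two loads reproduces the port and trunk counts to within a port or so (a sketch with our own helper name; the calculator applies its own rounding):

```python
def ports_for_gos(erlangs, gos):
    # Smallest port count with Erlang-B blocking <= gos.
    n, b = 0, 1.0
    while b > gos:
        n += 1
        b = erlangs * b / (n + erlangs * b)
    return n

# Queuing load: 627 queued calls averaging 20 s in queue.
queue_load = 627 * 20 / 3600.0                 # about 3.48 Erlangs
queue_ports = ports_for_gos(queue_load, 0.01)  # a pure search gives 9;
                                               # the calculator reports 10

# Trunk load: 1980 answered calls, each holding a trunk for its talk time
# plus, for the queued calls, the time spent in queue.
trunk_load = (1980 * 150 + 627 * 20) / 3600.0  # about 86 Erlangs
trunks = ports_for_gos(trunk_load, 0.01)       # close to the 103 shown
```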
This calculation does not include trunks that might be needed for call scenarios that require all calls to
be treated first in the IVR before they are presented to available agents. That scenario is discussed in the
next example.

Call Treatment Example


This example builds upon the basic example in the preceding section. Again, all incoming calls to the
call center are presented to the voice gateway from the PSTN, then calls are immediately routed to the
IP IVR for call treatment (such as an initial greeting or to gather account information) before they are
presented to an agent, if available. If no agents are available, calls are queued until an agent becomes
available.
The impact of presenting all calls to the IP IVR is that the PSTN trunks are held longer, for the period
of the call treatment holding time. More IP IVR ports are also required to carry this extra load, in
addition to the ports required for queued calls.
Call treatment (prompt and collect) does not impact the number of required agents because the traffic
load presented to the agents (number of calls, talk time, and service level) has not changed.
Using a 15-second call treatment and keeping all other inputs the same, Figure 4-4 shows the number of
PSTN trunks (112) and IP IVR ports (16) required in addition to the existing 10 ports for queuing.


Figure 4-4 Call Treatment in IVR

After-Call Work Time (Wrap-up Time) Example


Using the previous example, we now add the assumption that agents will spend an average of 45 seconds
of work time (wrap-up time) after each call. We can then use the IPC Resource Calculator to determine
the number of agents required to handle the same traffic load (see Figure 4-5).
After-call work time (wrap-up time) begins after the caller hangs up, so trunk and IP IVR resources are
not impacted and should remain the same. Assuming the SLG and traffic load also remain the same,
additional agents would be required only to service the call load and to compensate for the time agents
are in the wrap-up mode.


Figure 4-5 After-Call Work Time

Note that trunks and IVR ports remained virtually the same, except that there is one additional trunk (113
instead of 112). This slight increase is not due to the wrap-up time itself, but rather is a side effect of the
slight change in the SLG (92% instead of 93%) that results from rounding the required number of agents
(116) to account for wrap-up time.
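Wrap-up time enters the agent calculation by extending the per-call agent holding time. The offered-load arithmetic can be sketched as follows (our own variable names):

```python
answered, talk, wrap = 1980, 150.0, 45.0

load_no_wrap = answered * talk / 3600.0             # 82.5 Erlangs
load_with_wrap = answered * (talk + wrap) / 3600.0  # 107.25 Erlangs

# The agent count must exceed the offered load; the extra 45 s of per-call
# agent occupancy is why staffing rises from 90 toward the 116 reported.
min_agents = int(load_with_wrap) + 1                # at least 108
```

The Erlang-C service level calculation then adds the further agents needed to keep 90% of calls answered within 30 seconds, arriving at the 116 shown in Figure 4-5.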

Call Center Sizing With Self-Service IVR Applications


Self-service applications route calls to an IVR (Cisco IP IVR or Cisco Internet Service Node (ISN)) and
present them with a menu of choices to access information from various databases. The calls are serviced
in the IVR and not answered by agents. At the end of the transaction, the caller hangs up and does not
need to speak to an agent. Examples include accessing bank account balances and transactions, airline
flight arrival and departure information, menu services such as driving directions, and so forth.
These self-service applications have a different call load and IVR average handle time than the call load
presented to the agents. In this case, only PSTN trunks (voice gateway ports) and IVR ports are required,
and no additional agents are required.
For such self-service applications, a standalone Erlang-B calculation is necessary to compute the
additional PSTN trunks and IVR ports required, as shown in the following example. These trunks and
IVR ports can then be added to the rest of the requirements needed in a call center for calls presented to
agents, as described in previous examples and illustrated in the following example.


Call Center Example with IVR Self-Service Application


In this example, we assume that the call center receives 18,000 calls in the busy hour, and a portion of
the calls (20%) are self-serviced in the IVR without ever being transferred to an agent. These calls last
an average of about 60 seconds before they complete their transaction and hang up.
Another portion (20%) of the calls are identified as "high priority" callers (based on calling number,
number dialed, or other automatic identifier) and are immediately routed to a specialized group of agents
without any call treatment in the IVR, but with a high service level goal of 95% of the calls to be
answered within 10 seconds.
The remaining portion (60%) consists of normal calls that are presented with a menu in the IVR (call
treatment, prompt and collect) before they are transferred to an agent, and they have an SLG of 90%
answered within 30 seconds. The average call treatment is about 2 minutes and 51 seconds (171
seconds), and the average talk time is 5 minutes (300 seconds).
In summary, the three traffic loads (call types) coming into this call center consist of the following:
• IVR self-service calls:
18,000 ∗ 20% = 3600 calls.
Average IVR call treatment = 60 seconds.
• High-priority calls (transferred to agents directly, no delay in IVR):
18,000 ∗ 20% = 3600 calls.
Routed immediately to agents if available; no IVR call treatment.
Average talk time = 6 minutes (360 seconds).
SLG = 95% of the calls to be answered within 10 seconds.
• Normal calls:
18,000 ∗ 60% = 10,800 calls.
Average time in IVR for call treatment = 171 seconds.
Average talk time = 5 minutes (300 seconds).
SLG = 90% of the calls to be answered within 30 seconds.
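The three-way split is simple arithmetic, restated here in code form (variable names are ours):

```python
bhca = 18000

self_service = round(bhca * 0.20)   # 3600 calls, about 60 s each in the IVR
high_priority = round(bhca * 0.20)  # 3600 calls, 360 s talk, SLG 95% in 10 s
normal = round(bhca * 0.60)         # 10800 calls, 171 s treatment, 300 s talk

# Offered IVR load of the self-service portion, in Erlangs:
self_service_load = self_service * 60 / 3600.0   # 60 Erlangs
```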
For high-priority and normal calls, a call might have to wait in queue in the IVR if no agents are available
to answer the call immediately.
We can compute the required resources for each of the three types of calls as follows:

IVR Self-Service Calls


• 18,000 ∗ 20% = 3600 calls.
• Average IVR call treatment = 60 seconds.
For self-service applications only, where no agents are required or involved, we will use the Cisco
IP IVR Stand-Alone Calculator (shown in Figure 4-6), which uses Erlang-B to compute the necessary
trunks and IP IVR ports.
• Busy hour traffic (BHT) = (3600 calls ∗ 60 sec/3600) = 60 Erlangs.
• Assume PSTN blockage = 1%.


Inserting these values into the first column, titled Self Service, in the calculator produces the following
results, as illustrated in Figure 4-6:
• 75 IVR ports for self-service
• 75 trunks
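Assuming a standard Erlang-B port search, this standalone calculation can be scripted as follows (a sketch with our own helper name; the result lands within a port or two of the 75 shown in Figure 4-6):

```python
def ports_for_gos(erlangs, gos):
    # Smallest port count with Erlang-B blocking <= gos.
    n, b = 0, 1.0
    while b > gos:
        n += 1
        b = erlangs * b / (n + erlangs * b)
    return n

bht = 3600 * 60 / 3600.0              # 60 Erlangs of self-service traffic
ivr_ports = ports_for_gos(bht, 0.01)  # about 75 ports; the trunk count
                                      # matches, since every self-service
                                      # call needs one trunk and one port
```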
These PSTN trunks and IVR ports are in addition to any that might be needed for priority calls (20%)
and normal calls (60%) that require PSTN trunks and IVR ports for queuing and call treatment before
transferring to an agent. The remaining columns could be used if you had multiple trunk and IVR groups
that were not pooled together (multiple self-service applications) or if the IVR employed had the
capability to queue calls at the edge (remote branch with local PSTN incoming gateway), as is the case
with Cisco ISN.

Figure 4-6 IVR Self-Service Calls

Note There are many Erlang-B calculators available for free on the Web. (Search on Erlang-B.)

High-Priority Calls (Transferred to Agents Directly, no IVR Call Treatment)


• 18,000 ∗ 20% = 3600 calls.
• Routed immediately to agents if available; no IVR call treatment.
• Average talk time = 6 minutes (360 seconds).
• SLG = 95% of the calls to be answered within 10 seconds.
Inserting these parameters into the IPC Resource Calculator produces the following results (as illustrated
in Figure 4-7):
• 384 agents are required.
• 386 trunks are required.
• 6 IVR ports are needed for queuing.
• No call treatment ports are required here.


Figure 4-7 High-Priority Calls

Normal Calls
• 18,000 ∗ 60% = 10,800 calls.
• Average time in IVR for call treatment = 171 seconds.
• Average talk time = 5 minutes (300 seconds).
• SLG = 90% of the calls to be answered within 30 seconds.
Inserting these parameters into the IPC Resource Calculator produces the following results (as illustrated
in Figure 4-8):
• 907 agents are required.
• 1469 trunks are required.
• 44 IVR ports are needed for queuing.
• 563 IVR ports are needed for call treatment.
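As a sanity check on these outputs, the offered loads behind them can be recomputed directly (a sketch under standard Erlang assumptions; the exact port counts come from the calculator):

```python
calls, treatment, talk = 10800, 171, 300

agent_load = calls * talk / 3600.0           # 900 Erlangs offered to agents
treatment_load = calls * treatment / 3600.0  # 513 Erlangs of IVR treatment

# Average queue time for the queued calls, from the Erlang-C relationship
# AQT = AHT / (N - A), with the 907 agents reported above:
aqt = talk / (907 - agent_load)              # roughly 43 seconds
```

The 563 treatment ports exceed the 513-Erlang treatment load by the margin Erlang-B requires for 1% blocking.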


Figure 4-8 Normal Calls

Now that we have sized all the resources required for the three types of calls in this call center example,
we can add the results to determine the total required resources of each type (agents, PSTN trunks, and
IVR ports):
• Agents for high-priority calls (calls answered by agents, no IVR) = 384
• Agents for normal calls (calls transferred to agents after IVR treatment) = 907
• Total agents = 384 + 907 = 1291
• IVR ports for self-service = 75
• IVR ports for queuing = 6 + 44 = 50
• IVR ports for call treatment = 563
• Total IVR ports = 75 + 50 + 563 = 688
• Total PSTN trunks = 75 + 386 + 1469 = 1930
If IP IVR is used, then you must enter the number of call treatment and queuing ports into the
Configuration and Ordering Tool for proper sizing of server resources. You can access the Configuration
and Ordering Tool at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_how_to_order.html
If Cisco ISN is the IVR type used, refer to the section on Sizing ISN Components, page 4-20, for
additional details on sizing ISN servers.


Sizing ISN Components


Internet Service Node (ISN) sizing involves the following components:
• ISN Server Sizing, page 4-20
• Sizing ISN Licenses, page 4-23
• Cisco IOS Gateway Sizing, page 4-24
• ICM Peripheral Gateway (PG) Sizing, page 4-24
• Prompt Media Server Sizing, page 4-24

ISN Server Sizing


According to the Cisco Internet Service Node Technical Reference (available at http://www.cisco.com),
an ISN Combo Box (an ISN Application server and an ISN Voice Browser hosted on the same server)
deployed in the common comprehensive ISN deployment model can support 300 effective calls. An
effective call can be a call that is undergoing call treatment or queuing in the IVR on the Cisco IOS
gateway (under VXML control of the ISN), or it can be a call that the ISN has transferred to an agent
and that is still being monitored/controlled by the ISN. In the latter case, for example, if the ISN uses
the IP Transfer method to route a call to an agent, then the ISN is still monitoring that call and it counts
as an effective call. On the other hand, if an Outpulse or IN Transfer is used to route a call to an agent,
then the ISN no longer has any control over that call, so that call does not count as an effective call. In
addition, calls that are transferred to agents directly without ISN (Voice Browser) control or
involvement do not count as effective calls either, unless those calls are unable to find an agent and are
subsequently treated by the ISN.

Note This section uses the same example as in the preceding Call Center Example with IVR Self-Service
Application, page 4-16, but it reiterates the parameters of that example for clarity and simplicity.

ISN Server Sizing Example


Consider an example where a call center receives a busy-hour load (BHCA) of 18,000 calls. Assume a portion of the calls, 20%, are self-serviced in the IVR without ever being transferred to an agent. These calls last an average of about 60 seconds before they complete their transaction and hang up.
Also assume that another portion of the incoming calls, 20%, will be identified as high-priority callers
(based on calling number, number dialed, or other automatic identifier) and will be routed immediately
to a specialized group of agents with a high service level goal (SLG) of 95% of the callers to be answered
within 10 seconds without any intended ISN involvement (no call treatment). Note, however, that a
small percentage of these calls will not be able to reach an agent immediately and will have to be diverted
for queuing by the ISN.
The remaining portion, 60%, is normal calls that will be presented with IVR treatment by the ISN before
they are transferred to an agent with an SLG of 90% answered within 30 seconds. The average call
treatment is 3 minutes and 9 seconds (171 seconds) and the average talk time is 5 minutes (300 seconds).
The traffic loads (call types) coming into this call center are as follows:
• IVR self-service calls:
18,000 ∗ 20% = 3600 calls.
Average IVR call treatment = 60 seconds.


• High-priority calls (transferred to agents directly):


18,000 ∗ 20% = 3600 calls.
Calls routed immediately to agents if available; no IVR call treatment.
Average talk time = 6 minutes (360 seconds).
SLG = 95% of the callers to be answered within 10 seconds.
If agent is not available, call must be queued by ISN.
• Normal calls:
18,000 ∗ 60% = 10800 calls.
Average IVR time for call treatment = 171 seconds.
Average talk time = 5 minutes (300 seconds).
SLG = 90% of the callers to be answered within 30 seconds.
In the latter two call types, high-priority and normal calls, a call might have to wait in queue in the IVR
(ISN) if no agents are available to answer the call.
After computing the required IVR ports and agents for each of these call types using the tools and
methodology described in chapter on Sizing Call Center Resources, page 4-1, we obtain the following
results:
• IVR self-service calls (using the Cisco IP IVR standalone Erlang-B calculator):
– 75 ports for IVR (prompt and collect)
– 75 trunks
• High-priority calls transferred to agents directly, with no ISN involvement or call treatment (using
the Cisco IPC Resource Calculator):
– 384 agents are required
– 386 trunks (no VXML required)
– 7 ports for queuing if calls cannot reach an agent immediately
– No IVR ports are required
• Normal calls (using the Cisco IPC Resource Calculator):
– 907 agents are required
– 1469 trunks
– 44 ports for queuing
– 563 ports for IVR (prompt and collect)
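The port counts above come from standard traffic-engineering formulas. As a rough cross-check on the self-service figure, the generic Erlang-B recursion below (an illustrative implementation, not the Cisco IP IVR standalone calculator) estimates the blocking probability for the self-service load of 3600 calls per hour at 60 seconds each, i.e. 60 Erlangs offered to 75 ports.

```python
def erlang_b(offered_load: float, ports: int) -> float:
    """Erlang-B blocking probability via the standard recursion."""
    b = 1.0
    for n in range(1, ports + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Self-service example: 3600 calls/hour * 60 s average hold time / 3600 s = 60 Erlangs.
offered = 3600 * 60 / 3600.0
blocking = erlang_b(offered, 75)
print(round(blocking, 4))
```

Adding ports lowers the blocking probability, which is why trunk and port counts always round up to meet the target grade of service.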

Totaling the Results


Now that we have calculated all the resources required for the three types of calls, we can total the results
to determine the number of effective calls needed to size the ISN properly. Remember that an effective
call is a call undergoing IVR call treatment or queuing treatment by the ISN, or it is a call that the ISN
has transferred to an agent but that the ISN still needs to monitor. In our example, the agents are all IPCC
agents. Therefore, when the ISN routes calls to the agents, the ISN uses the IP Transfer routing method,
which means that the ISN continues to monitor those calls.
For this example, the totals are:
• Ports required for IVR = (75 + 563) = 638
• Ports required for queuing = (7 + 44) = 51


• PSTN trunks (with VXML) = 75 + 1469 = 1544


• PSTN trunks (no VXML) = 386
• PSTN trunks (total) = 75 + 1469 + 386 = 1930
Thus, the combined IVR and queuing load on the ISN is 689 simultaneous calls, where the IVR queuing
treatment is physically performed on Cisco IOS gateways (using the ISN Comprehensive deployment
model).
The number of calls transferred to IPCC agents by the ISN (which still require monitoring by the ISN)
includes the simultaneous calls handled by the 907 agents needed for normal calls (because the ISN
treated every normal call before sending it to an agent) plus a slight extra amount for the high-priority
calls that are transferred to agents after being queued by the ISN (because they could not initially reach
agents). The extra amount needed for these high-priority calls is a fairly complex calculation that can
be approximated in most cases by simply doubling the number of ISN queue ports required for the
high-priority calls, which in this case is:
(7 ∗ 2) = 14 extra calls
The number of transferred calls to agents monitored by ISN in this example is, therefore:
907 + 14 = 921 simultaneous calls
Thus, the total simultaneous number of effective calls for ISN sizing purposes is:
689 (IVR and queuing) + 921 (transferred) = 1610 effective calls
Each ISN Combo Box supports 300 effective calls, so for our example we will need:
1610/300 = 5.4, which rounds up to 6 ISN Combo Boxes
An additional ISN Combo Box is normally recommended for redundancy, giving us a final total of:
7 ISN Combo Boxes
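The effective-call arithmetic above can be sketched as follows (illustrative only; the figures and the doubling approximation are taken from the text, and `math.ceil` handles the round-up from 1610/300 to 6 boxes):

```python
import math

ivr_and_queuing = 638 + 51          # IVR ports + queuing ports = 689
high_priority_queue_ports = 7
extra_transferred = 2 * high_priority_queue_ports   # doubling approximation = 14
transferred_monitored = 907 + extra_transferred     # 921 calls still monitored by ISN

effective_calls = ivr_and_queuing + transferred_monitored   # 1610

CALLS_PER_COMBO_BOX = 300
combo_boxes = math.ceil(effective_calls / CALLS_PER_COMBO_BOX) + 1  # +1 for redundancy

print(effective_calls)  # 1610
print(combo_boxes)      # 7
```

The same variables also drive the license counts: 689 Application Server session licenses (IVR/queuing) and 1610 Voice Browser session licenses (effective calls).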
Figure 4-9 illustrates the preceding ISN sizing example. (For simplicity, the Cisco IOS gatekeeper and
Cisco CallManager are not shown.)

Figure 4-9 ISN Server Sizing Example

[Figure content: calls arrive from the PSTN at a Cisco IOS gateway. 689 calls receive IVR/queuing treatment on the gateway under ISN control (VXML/H.323), 921 calls are transferred to agents by the ISN, and other calls are routed to agents without ISN involvement. Across its GED125 interface to the ICM PG, the ISN sees an "effective" load of 689 + 921 = 1610 calls.]

Sizing ISN Licenses


ISN deployments require the following types of licenses:
• ISN Software Licenses, page 4-23
• ISN Session Licenses, page 4-23

ISN Software Licenses


An ISN Application Server software license and an ISN Voice Browser software license are required for
each ISN Combo Box. Therefore, our example deployment in Figure 4-9 would require 7 ISN
Application Server software licenses and 7 ISN Voice Browser software licenses.

ISN Session Licenses


In addition to the ISN software licenses, the following session licenses are also required:
• ISN Application Server
An ISN Application Server session license is required for the maximum number of simultaneous
calls requiring IVR/queuing treatment by the ISN. In our example, 689 ISN Application Server
session licenses are required.
• ISN Voice Browser
An ISN Voice Browser session license is required for the maximum number of simultaneous
IVR/queuing calls plus the number of calls transferred to agents that are still being monitored by the
ISN. In other words, ISN Voice Browser session licenses are required for the number of effective
calls, as defined earlier. In our example, 1610 ISN Voice Browser session licenses are required.

Alternative Simplified Method for ISN Capacity Sizing


This section presents a fast, conservative alternative to the fairly rigorous ISN sizing calculations used
in the preceding sections. This simplified method can be used if you already know the number of agents
and the number of IVR/queuing sessions required. The simplified method also assumes a worst-case
scenario, wherein every call sent to an agent is being monitored by the ISN.
For the example in Figure 4-9, assume that we know 689 sessions are required for IVR/queuing and that
there are 1291 agents. In a worst-case scenario, we simply add the number of IVR/queuing sessions to
the number of agents to reach a conservative estimate of 1980 effective calls. This number of effective
calls requires 7 ISN Combo Boxes plus an extra for redundancy, giving us 8 ISN Combo Boxes.
Therefore, 8 ISN Application Server software licenses and 8 ISN Voice Browser software licenses are
also required.
With this method, we would estimate that 689 ISN Application Server session licenses are required (for
the 689 IVR/queuing sessions), but the number of ISN Voice Browser session licenses would now be
(689 + 1291) = 1980. This estimate is higher than the number derived from the more rigorous sizing
calculations, but it is not a bad estimate, and it has the advantage of being conservative.
However, if the estimates for the number of agents and IVR ports are very high compared to what is
actually needed (as determined by the calculated method), then the licenses and servers would be overestimated and overpriced.
One reason the number of agents might be overestimated is that sometimes this count includes all hired
agents rather than the actual number of seated agents required for the busy hour. (See Agent Staffing
Considerations, page 4-25, for more information on estimating the number of agents needed.)
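Under the stated worst-case assumption (every call sent to an agent is monitored by the ISN), the simplified method reduces to a few lines of arithmetic. This sketch reproduces the example figures and is illustrative only:

```python
import math

ivr_queuing_sessions = 689
agents = 1291

# Worst case: every agent call is ISN-monitored, so every agent call is "effective".
effective_calls = ivr_queuing_sessions + agents          # 1980
combo_boxes = math.ceil(effective_calls / 300) + 1       # +1 for redundancy

app_server_session_licenses = ivr_queuing_sessions       # 689
voice_browser_session_licenses = effective_calls         # 1980

print(combo_boxes)  # 8
```

Compared with the rigorous calculation (7 boxes, 1610 Voice Browser licenses), the simplified method over-provisions by one Combo Box and 370 session licenses, which is the price of its conservatism.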


Cisco IOS Gateway Sizing


You can estimate the number of PSTN ingress ports required on the Cisco IOS gateway by adding the number of IVR/queuing ports to the number of agents. For the example in Figure 4-9, 1980 gateway ingress ports are required.
With the ISN comprehensive deployment model, the Cisco IOS gateway does more than simply
terminate PSTN ports. It can also act as a VXML browser to perform prompt/collect and queuing
treatment under ISN control. The VXML and PSTN functions can reside on the same Cisco IOS gateway
or on separate gateways. Contact your Cisco Systems Engineer (SE) for guidance on sizing Cisco IOS
gateways for VXML performance.

ICM Peripheral Gateway (PG) Sizing


Each ISN Application Server requires an instance of the ICM VRU PIM. Therefore, based on our
estimate of 7 ISN Combo Boxes, our example deployment would require 7 ICM VRU PIM instances.
There is no explicit charge for the PIM licenses; they are included with the ISN Application Server
software licenses.
For details on VRU PG and PIM capacities, refer to the chapter on Sizing IPCC Components and
Servers, page 5-1

Prompt Media Server Sizing


The prerecorded prompts played out by Cisco IOS gateways (under VXML control of the ISN) can be
stored in gateway flash memory, provided there is enough gateway memory to hold the prompt .wav
files. In most cases, however, the prompt files are stored on a separate web server called the prompt
media server.
Use the following method to estimate how many prompt media servers are required:
Assume that the prerecorded prompt .wav files have to be pushed to the Cisco IOS gateway using G.711
bandwidth (approximately 80 kbps). Thus, each call receiving prompt play from the Cisco IOS gateway
requires 80 kbps of media from the prompt media server.
The other piece of required information is the media serving capacity of the web server that will act as
the prompt media server. To determine the maximum number of prompts that the server can support, divide
the media serving capacity of the media server by 80 kbps per prompt.
For example, assume the prompt media server has a media serving capacity of 32 Mbps. That server can
support:
32 Mbps / 80 kbps = 400 simultaneous prompts per media server
For the example in Figure 4-9, we need to play prompts (media) for 689 self-service and IVR/queued
calls, which would require 689/400 = 1.7, rounded up to 2 prompt media servers (plus a recommended third prompt media
server for failover purposes).
This sizing method is not rigorous, but it is nonetheless reasonably accurate and conservative. If you can
cache some of the prompts in gateway memory, then you might be able to reduce the required number
of prompt media servers.
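The prompt media server estimate above can be expressed directly. The 32 Mbps serving capacity is the example figure from the text; substitute your own server's measured capacity.

```python
import math

PROMPT_BANDWIDTH_KBPS = 80          # G.711 media stream per prompt
server_capacity_kbps = 32_000       # 32 Mbps media serving capacity (example value)

prompts_per_server = server_capacity_kbps // PROMPT_BANDWIDTH_KBPS   # 400 prompts

simultaneous_prompt_calls = 689     # self-service plus IVR/queued calls
servers = math.ceil(simultaneous_prompt_calls / prompts_per_server)  # 2 servers
servers_with_failover = servers + 1                                  # +1 for failover

print(prompts_per_server, servers_with_failover)
```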


Agent Staffing Considerations


In calculating agent requirements, make the following adjustments to factor in all the activities and
situations that make agents unproductive or unavailable:

Agent Shrinkage
Agent shrinkage is a result of any time for which agents are being paid but are not available to handle
calls, including activities such as breaks, meetings, training, off-phone work, unplanned absence,
non-adherence to schedules, and general unproductive time.

Agent Shrinkage Percentage


This factor will vary and should be calculated for each call center. In most call centers, it ranges from
20% to 35%.

Agents Required
This number is based on Erlang-C results for a specific call load (BHCA) and service level.

Agents Staffed
To calculate this factor, divide the number of agents required from Erlang-C by the productive agent
percentage (or 1 minus the shrinkage percentage). For example, if 100 agents are required from Erlang-C
and the shrinkage is 25%, then 100/0.75 = 133.3, which rounds up to a staffing requirement of 134 agents.
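The staffing adjustment can be written as a one-line helper (an illustrative sketch, not a Cisco tool); partial agents round up:

```python
import math

def agents_staffed(agents_required: int, shrinkage: float) -> int:
    """Divide the Erlang-C agent requirement by the productive fraction, rounding up."""
    return math.ceil(agents_required / (1.0 - shrinkage))

print(agents_staffed(100, 0.25))  # 134
```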

Call Center Design Considerations


Consider the following design factors when sizing call center resources:
• Compute resources required for the various busy intervals (busy hours), such as seasonal busy hours
and average daily busy hour. Many businesses compute the average of the 10 busiest hours of the
year (excluding seasonal busy hours) as the busy-hour staffing. Retail business call centers will add
temporary staff based on seasonal demands such as holiday seasons. Run multiple interval
calculations to understand daily staff requirements. Every business has a different call load
throughout the day or the week, and agents must be staffed accordingly (using different shifts or
staffing levels). Customer Relationship Management (CRM) and historical reporting data help to
fine-tune your provisioning computations to maintain or improve service levels.
• When sizing IVR ports and PSTN trunks, it is better to over-provision than to under-provision. The
cost of trimming excess capacity (disconnecting PSTN lines) is much cheaper than lost revenue, bad
service, or legal risks. Some governmental agencies are required to meet minimum service levels,
and outsourced call centers might have to meet specific service level agreements.
• If the call center receives different incoming call loads on multiple trunk groups, more trunks are required than would be needed to carry the same total load on one large trunk group. You can use the Erlang-B calculator to size the number of trunks required, following the same methodology as in the Call Treatment Example, page 4-13. Size the required trunks separately for each type of trunk group.
• Consider marketing campaigns that have commercials asking people to call now, which can cause
call loads to peak during a short period of time. The Erlang traffic models are not designed for such
short peaks (bunched-up calls); however, a good approximation would be to use a shorter busy
interval, such as 15 minutes instead of 60 minutes, and to input the expected call load during the
busiest 15 minutes to compute required agents and resources. Using our Basic Call Center Example,
page 4-11, a load of 2000 calls in 60 minutes (busy interval) requires 90 agents and 103 trunks. We
would get exactly the same results if we used an interval of 15 minutes with 500 calls (¼ of the call


load). However, if 600 of the calls arrive during a 15-minute interval and the balance of the calls
(1400) arrive during the rest of the hour, then 106 agents and 123 trunks would be required instead
to answer all 600 calls within the same service level goal. In a sales call center, the potential to
capture additional sales and revenue could justify the cost of the additional agents, especially if the
marketing campaign commercials are staggered throughout the hour, the day, and the various time
zones.
• Consider agent absenteeism, which can cause service levels to go down, thus requiring additional
trunks and IP IVR queuing ports because more calls will be waiting in queue longer and fewer calls
will be answered immediately.
• Adjust agent staffing based on the agent shrinkage factor (adherence to schedules and staffing
factors, as explained in Agent Staffing Considerations, page 4-25).
• Allow for growth, unforeseen events, and load fluctuations. Increase trunk and IVR capacity to
accommodate the impact of these events (real life) compared to Erlang model assumptions.
(Assumptions might not match reality.) If the required input is not available, make assumptions for
the missing input, run three scenarios (low, medium, and high), and choose the best output result
based on risk tolerance and impact to the business (sales, support, internal help desk, industry,
business environment, and so forth). Some trade industries publish call center metrics and statistics,
such as those shown in Table 4-1, available from web sites such as
http://www.benchmarkportal.com. You can use such industry statistics in the absence of any
specific data about your call center (no existing CDR records, historical reports, and so forth).

Table 4-1 eBusiness Best Practices for All Industries, 2001(1)

Inbound Call Center Statistics Average Best Practices


80% of calls answered within (seconds) 36.7 18.3
Average speed of answer (seconds) 34.6 21.2
Average talk time (minutes) 6.1 3.3
Average after-call work time (minutes) 6.6 2.8
Average calls abandoned 5.5% 3.7%
Average time in queue (seconds) 45.3 28.1
Average number of calls closed on first contact 70.5% 86.8%
Average TSR occupancy 75.1% 84.3%
Average time before abandoning (seconds) 66.2 31.2
Average adherence to schedule 86.3% 87.9%
Cost per call $9.90 $7.12
Inbound calls per 8-hour shift 69.0 73.9
Percentage attendance 86.8% 94.7%
1. Special Executive Summary; Principal Investigator, Dr. Jon Anton; Purdue University, Center for Customer-Driven Quality.

Use the output of the IPC Resource Calculator as input for other Cisco configuration and ordering tools that may require, among other inputs, the number of IVR ports, the number of agents, the number of trunks, and the associated traffic load (BHCA).

Chapter 5
Sizing IPCC Components and Servers

Proper sizing of your Cisco IP Contact Center (IPCC) Enterprise solution is important for optimum
system performance and scalability. Sizing considerations include the number of agents the solution can
support, the maximum busy hour call attempts (BHCA), and other variables that affect the number, type,
and configuration of servers required to support the deployment. Regardless of the deployment model
chosen, IPCC Enterprise is based on a highly distributed architecture, and questions about capacity,
performance, and scalability apply to each element within the solution as well as to the overall solution.
This chapter presents best design practices focusing on scalability and capacity for IPCC Enterprise
deployments. The design considerations, best practices, and capacities presented in this chapter are
derived primarily from testing and, in other cases, extrapolated test data. This information is intended to
enable you to size and provision IPCC solutions appropriately.

Sizing Considerations for IPCC


This section discusses the following IPCC sizing considerations:
• Core IPCC Components, page 5-1
• Minimum Hardware Configurations for IPCC Core Components, page 5-8
• Additional Sizing Factors, page 5-9

Core IPCC Components


When sizing IPCC deployments, IP Telephony components are a critical factor in capacity planning.
Good design, including multiple Cisco CallManagers and clusters, must be used to support significant
call loads. For additional information on Cisco CallManager capacity and sizing of IP Telephony
components, refer to Sizing Cisco CallManager Servers For IPCC, page 6-1, and to the Cisco IP
Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
Additionally, because of varying agent and skill group capacities, proper sizing of the CTI OS and Cisco
Agent Desktop servers should be considered together with the IP Telephony components.
Finally, the remaining ICM components, while able to scale extremely well, are affected by specific
configuration element sizing variables that also have an impact on the system resources.
These factors, discussed in this section, must be considered and included in the planning of any
deployment.


The information presented in Figure 5-1, Figure 5-2, and Table 5-1 does not apply equally to all
implementations of IPCC. The data is based on testing in particular scenarios, and it serves only as a
guide, along with the sizing variables information in this chapter. As always, you should be conservative
when sizing and should plan for growth.

Note Sizing considerations are based upon capacity and scalability test data. Major ICM software processes
were run on individual servers to measure their specific CPU and memory usage and other internal
system resources. Reasonable extrapolations were used to derive capacities for co-resident software
processes and multiple CPU servers. This information is meant as a guide for determining when ICM
software processes can be co-resident within a single server and when certain processes need their own
dedicated server. Table 5-1 assumes that the deployment scenario includes two fully redundant servers
that are deployed as a duplexed pair. While a non-redundant deployment might theoretically deliver
higher capacity, no independent testing has been done to validate this theory. Therefore, you can and
should refer to Table 5-1 for sizing information about simplexed as well as duplexed deployments.

Note The Cisco IP Contact Center solution does not provide a quad-processor Media Convergence Server
(MCS) at this time. If extra performance is required beyond the limits described in the table below, it
might be possible to use an off-the-shelf quad-processor server in lieu of the MCS 7845. For server
specifications, refer to the Cisco Intelligent Contact Management Software Bill of Materials (BOM)
documentation available at http://www.cisco.com/univercd/cc/td/doc/product/icm/ccbubom/index.htm.

The following notes apply to all figures and tables in this chapter:
• The number of agents indicates the number of logged-in agents.
• Server types:
– APG = Agent Peripheral Gateway
– CAD = Cisco Agent Desktop
– HDS = Historical Data Server
– PRG = Progger
– RGR = Rogger


Figure 5-1 Minimum Servers Required for Typical IPCC Deployment with CTI Desktop

[Figure content: for each maximum agent count (250, 500, 1,000, and 2,000), the figure shows the minimum servers by function. Routing and database: a single Progger at 250 agents, a Rogger at 500 and 1,000 agents, and separate Router and Logger servers at 2,000 agents. Reporting: one HDS at each size, with an additional HDS at 2,000 agents. Peripheral gateways and agent services: the number of Agent Peripheral Gateways (APGs) increases with the agent count.]
The following notes apply to Figure 5-1:
• Sizing is based upon the Cisco MCS 7845 (3.0 GHz or higher) and 5 skill groups per agent.
• Voice Response Unit (VRU) and Cisco CallManager components are not shown.
• For more than 2,000 agents, refer to Table 5-1.
• The Agent Peripheral Gateway (APG) consists of a Generic PG (Cisco CallManager PIM and VRU
PIM), CTI Server, and CTI OS.
• For more information about APG deployment and configuration options, see Figure 5-3 and
Figure 5-4.


Figure 5-2 Minimum Servers Required for Typical IPCC Deployment with Cisco Agent Desktop

[Figure content: for each maximum agent count (100, 300, 1,200, and 2,400), the figure shows the minimum servers by function when Cisco Agent Desktop is used. Routing and database: a single Progger with a co-resident CAD server at 100 agents, a Rogger at 300 and 1,200 agents, and separate Router and Logger servers at 2,400 agents. Reporting: one HDS at each size, with an additional HDS at 2,400 agents. Peripheral gateways and agent services: each APG is paired with a CAD server, and the number of APG/CAD pairs increases with the agent count.]
The following notes apply to Figure 5-2:
• Sizing is based upon the Cisco MCS 7845 (3.0 GHz or higher) and 5 skill groups per agent.
• Voice Response Unit (VRU) and Cisco CallManager components are not shown.
• For more than 2,400 agents, refer to Table 5-1.
• The Agent Peripheral Gateway (APG) consists of a Generic PG (Cisco CallManager PIM and VRU
PIM), CTI Server, and CTI OS.


Table 5-1 Sizing Information for IPCC Components and Servers

Progger (Peripheral Gateway, Router, and Logger):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 100 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 250 agents.
Notes: Cannot be co-resident with an Administrative Workstation (AW) or Historical Data Server (HDS). Maximum of 50 simultaneous queued calls on the MCS-7835 and 125 on the MCS-7845. The Logger database is limited to 14 days. On the MCS-7845, the maximum is 100 agents with a co-resident Cisco Agent Desktop server, 50 agents with a co-resident Dialer, and 25 agents with both a co-resident Cisco Agent Desktop server and Dialer. The Outbound Dialer is not supported on the MCS-7835 in the Progger configuration.

Rogger (Router and Logger):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 500 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1,500 agents.

Router:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents. (MCS-7835 not supported.)
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.

Logger:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents. (MCS-7835 not supported.)
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.

Administrative Workstation (AW) and Historical Data Server (HDS):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 500 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents.
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.
Notes: The AW/HDS cannot be co-resident with a Progger, Rogger, Router, Logger, or PG. A maximum of 2 AW/HDS servers is supported with a single Logger, or 4 with duplexed Loggers. A WebView server can be co-resident with the HDS for up to 50 simultaneous users.

WebView Reporting Server:
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU), or MCS-7845H-3.0-CC1 (2 CPUs): maximum 50 simultaneous WebView clients.
Notes: A total of 4 WebView servers may be deployed to reach 200 simultaneous WebView clients. The difference between the MCS-7845 and MCS-7835 is the number of agents supported by the AW/HDS.

Agent PG (inbound only):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 250 agents (up to 150 Cisco Agent Desktop agents).
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 500 agents (up to 300 Cisco Agent Desktop agents).
Notes: Refer to Figure 5-3 and Figure 5-4 for more details about Agent PG configuration options. VRU ports should not exceed half of the maximum supported agents; additional VRU PGs can be deployed to accommodate a greater number of VRU ports. For more information on the various Agent PG deployment options, see Peripheral Gateway and Server Options, page 5-10.

Voice Response Unit (VRU) PG:
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 600 ports.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1200 ports.
Notes: Use the number of ports instead of the agent count. Capacities assume an average of 5 Run VRU Script nodes per call. Maximum of 8 PIMs per MCS-7845 and 4 PIMs per MCS-7835, not to exceed 2 PIMs per 300 ports on a Generic PG.

Agent PG with Outbound Voice (includes Dialer and Media Routing PG):
• MCS-7835H-3.0-CC1 (1 CPU): (inbound agents) + (2.5 ∗ outbound agents) < 250.
• MCS-7845H-3.0-CC1 (2 CPUs): (inbound agents) + (2.5 ∗ outbound agents) < 500.
Notes: Moving the Dialer off of the Agent PG has no effect on the total number of Outbound agents supported. Each transfer to a VRU port is equivalent to an agent.

Dialer only:
• MCS-7835H-3.0-CC1 (1 CPU): maximum 100 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 200 agents.
Notes: Each transfer to a VRU port is equivalent to an agent.

Agent PG with Media Blender (includes Media Routing PG):
• MCS-7835H-3.0-CC1 (1 CPU) or MCS-7845H-3.0-CC1 (2 CPUs): up to 500 total sessions.
Notes: Media Routing (MR) PG co-residency requires the MCS-7845. See the subsequent entries of this table for capacity numbers.

Media Blender:
• MCS-7845H-3.0-CC1 (2 CPUs): up to 500 total sessions. (MCS-7835 not supported.)
Notes: With the MCS-7845: single-session chat, 250 agents and 250 callers; blended collaboration, 250 agents and 250 callers; multi-session chat, 125 agents and 375 callers.

Web Collaboration Server:
• MCS-7845H-3.0-CC1 (2 CPUs): 500 total sessions, or 250 one-to-one sessions. (MCS-7835 not supported.)
Notes: With the MCS-7845: single-session chat, 250 agents and 250 callers; blended collaboration, 250 agents and 250 callers; multi-session chat, 125 agents and 375 callers.

Dynamic Content Adapter (DCA) for Web Option:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 100 concurrent DCA sessions. (MCS-7835 not supported.)
Notes: DCA co-residency is not supported.

Email Manager Server:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1000 agents. (The MCS-7835 is not supported as the primary server, except where noted in the scenarios below.)
Notes:
• Fewer than 10 agents: all Cisco Email Manager components and databases co-exist on a single server (MCS-7845).
• Up to 250 agents: 2 servers. Cisco Email Manager AppServer, UI Server, and WebView on the first; database server (Primary, LAMBDA, and CIR databases) on the second.
• Up to 500 agents: 4 servers. Cisco Email Manager AppServer on the first; Cisco Email Manager UI Server (first) and WebView server on the second; Cisco Email Manager UI Server (second) on the third; database server on the fourth. (In this scenario, an MCS-7835 may be used for the second UI Server box.)
• Up to 1000 agents: 7 servers. Cisco Email Manager AppServer on the first (quad processor recommended); Cisco Email Manager UI Server (first) and WebView server on the second; Cisco Email Manager UI Server (second) on the third; Cisco Email Manager UI Server (third) on the fourth; Cisco Email Manager UI Server (fourth, required if more than 750 agents) on the fifth; database server (Primary and LAMBDA) on the sixth; database server (CIR) on the seventh. (In this scenario, an MCS-7835 may be used for the n+1 UI Server boxes.)
For sizing information, refer to the Cisco Intelligent Contact Management Software Bill of Materials (BOM) documentation available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/products_usage_guidelines_list.html

Internet Service Node (ISN) Application Server and Voice Browser: For the server specifications for the Internet Service Node (ISN), refer to the Cisco Internet Service Node (ISN) Software Bill of Materials, available at http://www.cisco.com/univercd/cc/td/doc/product/icm/ccbubom/index.htm
IP IVR Server: For the IP IVR server specifications, refer to the Cisco IPCC Express and IP IVR Configuration and Ordering Tool, available at http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1846/prod_how_to_order.html
1. The MCS-7835I-3.0-CC1 and MCS-7835H-3.0-CC1 servers are no longer available from Cisco, but they can still be used in an IPCC configuration. Cisco
currently sells the MCS-7835I1-3.0-CC1 server as a replacement for the MCS-7835I-3.0-CC1 and the MCS-7835H1-3.0-CC1 as a replacement for the
MCS-7835H-3.0-CC1.

Minimum Hardware Configurations for IPCC Core Components


The sizing information in Table 5-1 must be applied to each deployment to gauge server sizing
requirements accurately. The following considerations and guidelines apply to the information in
Table 5-1:
• Formal and critical call center deployments are encouraged to use dual CPU server configurations,
especially for the Progger.
• You must adhere to the capacity limits for the number of agents on each system component. Agent
capacities are based upon a maximum of 30 BHCA per agent and 90 calls per IVR port. Adjustments
either way are not supported: fewer agents do not necessarily permit a higher BHCA per agent, nor does
a lower BHCA permit a greater number of agents to be supported.
• It is possible to use different server model configurations for the Rogger and the PGs, as long as you
observe the BHCA and agent maximums for each separate component and adhere to the
recommendations in the Cisco Intelligent Contact Management Software Bill of Materials (BOM)
documentation.
• The capacity maximums assume a normal amount of CTI traffic for each given configuration.
Extraordinary CTI traffic (from very large IVRs, for example) will decrease the BHCA and agent
maximums.
• The capacity numbers are based on an average of 5 Run Voice Response Unit (VRU) scripts,
running consecutively in the ICM script, per IVR call. If a deployment has a more complex
ICM/IVR script than this, it will also decrease the maximum BHCA and agent capacity.
• The capacity numbers are also based on 5 skill groups per agent. If a deployment has more than five
skill groups per agent, it will also decrease the maximum BHCA and agent capacity and should be
handled on a case-by-case basis.


Additional Sizing Factors


Many variables in the IPCC configuration and deployment options can affect the hardware requirements
and capacities. This section describes the major sizing variables and how they affect the capacity of the
various IPCC components. In addition, Table 5-2 summarizes the sizing variables and their effects.

Busy Hour Call Attempts (BHCA)


The number of calls attempted during a busy hour is an important metric. As BHCA increases, there is
an increase in the load on all IPCC components, most notably on Cisco CallManager, IP IVR, and the
Cisco CallManager PG. The capacity numbers for agents assume up to 30 calls per hour per agent.

Agents
The number of agents is another important metric that impacts the performance of most IPCC server
components, including Cisco CallManager clusters. For the impact of agents on the performance of Cisco
CallManager components, see Sizing Cisco CallManager Servers For IPCC, page 6-1.

Skill Groups
The number of skill groups per agent has significant effects on the CTI OS Server, the Cisco
CallManager PG, and the ICM Router and Logger. Cisco recommends that you limit the number of skill
groups per agent to 5 or fewer, when possible, and that you periodically remove unused skill groups so
that they do not affect system performance. You can also manage the effects on the CTI OS server by
increasing the value for the frequency of statistical updates.

Queuing
The IP IVR places calls in a queue and plays announcements until an agent answers the call. For sizing
purposes, it is important to know whether the IVR will handle all calls initially (call treatment) and direct
the callers to agents after a short queuing period, or whether the agents will handle calls immediately
and the IVR will queue only unanswered calls when all agents are busy. The answer to this question
determines very different IVR sizing requirements and affects the performance of the ICM
Router/Logger and Voice Response Unit (VRU) PG. Required VRU ports can be determined using the
Cisco IPC Resource Calculator. (See Cisco IPC Resource Calculator, page 4-6, for more information.)
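The Cisco IPC Resource Calculator is the authoritative way to size VRU ports. Purely as an illustration of the underlying traffic engineering, the sketch below estimates a port count with a standard Erlang-B calculation; the function names and all numeric inputs are assumptions for the example, not values from the calculator.

```python
def erlang_b(ports: int, offered_erlangs: float) -> float:
    """Blocking probability for `ports` servers at `offered_erlangs` of load,
    computed with the standard iterative Erlang-B recursion."""
    b = 1.0
    for n in range(1, ports + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

def ports_needed(bhca: int, avg_treatment_sec: float,
                 blocking_target: float = 0.01) -> int:
    """Smallest port count whose blocking stays under the target."""
    offered = bhca * avg_treatment_sec / 3600.0  # offered load in Erlangs
    ports = 1
    while erlang_b(ports, offered) > blocking_target:
        ports += 1
    return ports

# Illustrative inputs: 3000 BHCA with 30 seconds of treatment or
# queuing per call, sized for 1% blocking.
print(ports_needed(3000, 30))
```

Whether all calls receive initial treatment or only overflow calls are queued changes the offered load (the BHCA and average treatment time entering the formula), which is why the two scenarios produce very different port requirements.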

ICM Script Complexity


As the complexity and/or number of ICM scripts increase, the processor and memory overhead on the
ICM Router and VRU PG will increase significantly. The delay time between replaying Run VRU
scripts also has an impact.

Reporting
Real-time reporting can have a significant effect on Logger, Progger, and Rogger processing due to
database access. A separate server is required for an Administrative Workstation (AW) and/or Historical
Data Server (HDS) to off-load reporting overhead from the Logger, Progger, and Rogger.

IVR Script Complexity


As IVR script complexity increases with features such as database queries, the load placed upon the IVR
server and the Router also increases. There is no good rule of thumb or benchmark to characterize the
IP IVR performance when used for complex scripting, complex database queries, or transaction-based
usage. Cisco recommends that you test complex IVR configurations in a lab or pilot deployment to
determine the response time of database queries under various BHCA and how they affect the processor
and memory for the IVR server, PG, and Router.


IP IVR Self-Service Applications


In deployments where the IP IVR is also used for self-service applications, the self-service applications
are in addition to the IPCC load and must be factored into the sizing requirements as stated in Table 5-1.

Third-Party Database and Cisco Resource Manager Connectivity


Carefully examine connectivity of any IPCC solution component to an external device and/or software
to determine the overall effect on the solution. Cisco IPCC solutions are very flexible and customizable,
but they can also be complex. Contact centers are often mission-critical, revenue-generating, and
customer-facing operations. Therefore, Cisco recommends that you engage a Cisco partner (or Cisco
Advanced Services) with the appropriate experience and certifications to help you design your IPCC
solution.

Extended Call Context (ECC)


ECC usage impacts the PG, Router, Logger, and network bandwidth. There are many ways that ECC
can be configured and used. The capacity impact will vary based on ECC configuration and should be
handled on a case-by-case basis.

Peripheral Gateway and Server Options


An ICM Peripheral Gateway (PG) translates messages coming from the Cisco CallManager servers, the
IP IVR, or other third-party automatic call distributors (ACDs) or voice response units (VRUs) into
common internally formatted messages that are then sent to and understood by the ICM. In the reverse,
it also translates ICM messages so that they can be sent to and understood by the peripheral devices.
The Peripheral Interface Manager (PIM) is the software process that runs on the PG and performs the
message translation and control. Every peripheral device that is part of the IPCC solution must be
connected to a PG and PIM.
Figure 5-3 and Figure 5-4 illustrate various configuration options for the Agent PG with CTI OS and
Cisco Agent Desktop.
Table 5-2 lists PG and PIM sizing recommendations.

Figure 5-3 Agent PG Configuration Options with CTI OS

[Figure: five Agent PG options – All-in-one, with Generic, with CCM, with Outbound, and with Blender. Each option combines a CCM PG (CCM PIM), a Generic PG (CCM PIM and VRU PIM), and/or a VRM PG (VRU PIM) with a co-resident CTI Server and CTI OS; the Outbound and Blender options add an MR PG plus a Dialer or Blender, respectively.]


Figure 5-4 Agent PG Configuration Options with Cisco Agent Desktop

[Figure: the same five Agent PG options as Figure 5-3 – All-in-one, with Generic, with CCM, with Outbound, and with Blender – combining a CCM PG (CCM PIM), a Generic PG (CCM PIM and VRU PIM), and/or a VRM PG (VRU PIM) with a co-resident CTI Server and CTI OS; the Outbound and Blender options add an MR PG plus a Dialer or Blender, respectively. Every option also includes a co-resident CAD Server.]

Table 5-2 PG and PIM Sizing Recommendations

Sizing Variable Recommendation, Based on ICM Software Release 5.0


Maximum number of PGs per ICM 80
Maximum PG types per server platform Up to 2 PG types are permitted per server, provided that any
given server is limited to the maximum agent and VRU port
limitations outlined in Table 5-1.
Can PGs be remote from ICM? Yes
Can PGs be remote from Cisco CallManager or IP IVR? No
PIM types Cisco CallManager, IVR, Media Routing (MR), and ACD
Maximum number of PIMs per PG Actual number of IVR PIMs is determined by the size of the
IPCC deployment (agents, IVR ports, BHCA, and so forth).
Under most circumstances, 5 PIMs per PG (Agent PG) and 8
PIMs per PG (standalone PG) is a reasonable limit.
Maximum number of PIM types per PG (CTI server may be added) 2 + CTI Server
Maximum number of IVRs controlled by one Cisco Refer to the Cisco IP Telephony Solution Reference Network
CallManager Design (SRND), available at http://www.cisco.com/go/srnd.
Maximum number of CTI servers per PG 1
Can PG be co-resident with Cisco CallManager on Media Convergence Server (MCS)? No

CTI OS
The CTI OS is most commonly configured as a co-resident component on the Agent PG (see Figure 5-3
and Figure 5-4), supporting up to 500 agents.
Table 5-3 lists additional sizing factors for CTI OS.


Table 5-3 CTI OS Sizing Factors

Sizing Factor                                        MCS-7845  MCS-7835  Comments

Maximum number of agents plus supervisors            500       250
Maximum number of supervisors                        50        25        10% of supported agents1
Maximum number of teams                              50        25        10% of supported agents1
Maximum BHCA per agent                               30        30        Supervisors do not receive new calls.
Maximum number of agents plus supervisors per team   100       50        20% of supported agents1
Maximum number of supervisors per team               10        5         2% of supported agents1
Maximum number of agents per supervisor              100       50        20% of supported agents1
Skill groups per agent                               5         5
Extended Call Context (ECC)                          None      None
1. These percentages apply to the CTI OS Server agent capacity and not to the entire Contact Center capacity.

The numbers in Table 5-3 are based on the following assumptions:


• Hyper-threading is enabled for the MCS server.
• The traffic profile is:
– 85% of calls answered by agents
– 10% of calls transferred
– 5% of calls conferenced
For more than 500 CTI agents, additional Agent PG instances (supporting up to 500 agents each) must be
deployed on additional servers. The CTI OS capacity decreases when CTI OS configuration values differ
from those listed in Table 5-3. For example, the CTI OS capacity decreases as the number of skill groups
per agent increases, because platform resource usage rises significantly with the added workload
required to query and update agents and skill groups.
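The percentage relationships footnoted in Table 5-3 can be sketched as a simple derivation from the per-server CTI OS agent capacity. The function name and dictionary layout below are illustrative only; the percentages (10%, 20%, 2%) come from the table itself.

```python
def cti_os_limits(agent_capacity: int) -> dict:
    """Derive the per-server CTI OS limits from the agent capacity,
    using the percentages footnoted in Table 5-3."""
    return {
        "agents_plus_supervisors": agent_capacity,
        "max_supervisors": agent_capacity // 10,                   # 10%
        "max_teams": agent_capacity // 10,                         # 10%
        "agents_plus_supervisors_per_team": agent_capacity // 5,   # 20%
        "supervisors_per_team": agent_capacity // 50,              # 2%
        "agents_per_supervisor": agent_capacity // 5,              # 20%
    }

print(cti_os_limits(500))  # reproduces the MCS-7845 column of Table 5-3
print(cti_os_limits(250))  # reproduces the MCS-7835 column
```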

Cisco Agent Desktop Component Sizing


For details on the components and architecture of the Cisco Agent Desktop, see Agent Desktop and
Supervisor Desktop, page 7-1.
Server capacities for the Cisco Agent Desktop CTI Option vary based on the total number of agents,
whether or not Switched Port Analyzer (SPAN) monitoring and recording is used, and the number of
simultaneous recordings.
This section presents sizing guidelines for the following installable Cisco Agent Desktop Server
components:
• Cisco Agent Desktop Base Services, page 5-13
• Cisco Agent Desktop VoIP Monitor Service, page 5-13
• Cisco Agent Desktop Recording and Playback Service, page 5-13


Cisco Agent Desktop Base Services


The Cisco Agent Desktop Base Services consist of a set of application servers that run as Microsoft
Windows 2000 services. They include Chat Service, Directory Services, Enterprise Service, IP Phone
Agent Service, LDAP Monitor Service, Licensing and Resource Manager Service, Recording and
Statistics Service, and Sync Service. In addition, there are application servers that may be placed on the
same or separate computers as the Base Servers. These additional applications include the VoIP Monitor
Service and the Recording and Playback Service.
A set of Cisco Agent Desktop Base Services plus the additional application servers correspond to a
logical call center (LCC). Table 5-4 lists the maximum number of agents that a single LCC can support
for various sizes of enterprises.

Table 5-4 Maximum Number of Agents Supported by a Logical Call Center (LCC)

Enterprise Size                                        Desktop Agents Only  IP Phone Agents Only  Mixed
Small                                                  100                  50                    33 of each
Medium                                                 300                  150                   100 of each
Large (multiple PG and CTI Servers)                    500                  250                   333 of each
Maximum LCC capacity with multiple PG and CTI Servers  1000                 500                   333 of each

Cisco Agent Desktop VoIP Monitor Service


The VoIP Monitor Service enables the silent monitoring and recording features. For Desktop
Monitoring, the VoIP Monitor Service has no impact on design guidance for Agent PG scalability. When
using Switched Port Analyzer (SPAN) monitoring, the VoIP Monitor Service may be co-located on the
Agent PG for up to 100 agents. When Remote Switched Port Analyzer (RSPAN) monitoring and
recording are required for more than 400 agents, the VoIP Monitor Service must be deployed on a
dedicated server (an MCS-7835 server or equivalent). Each dedicated VoIP Monitor Service can support
up to 400 agents.

Cisco Agent Desktop Recording and Playback Service


The Recording and Playback Service stores the recorded conversations and makes them available to the
Supervisor Log Viewer application.
A co-resident Recording and Playback Service can support up to 32 simultaneous recordings. A
dedicated Recording and Playback Service (which is available in the Premium offering) can support up
to 80 simultaneous recordings. The capacity of the Recording and Playback Service is not dependent on
the codec that is used.
Table 5-5 summarizes the raw Recording and Playback Service capacity.


Table 5-5 Capacity of Recording and Playback Service

Recording and Playback Service Type Maximum Simultaneous Recordings


Co-resident 32
Dedicated 80

Summary
Proper sizing of IPCC components requires analysis beyond the number of agents and busy hour call
attempts. Configurations with multiple skill groups per agent, significant call queuing, and other factors
contribute to the total capacity of any individual component. Careful planning and discovery in the
pre-sales process should uncover critical sizing variables, and these considerations should be applied to
the final design and hardware selection.
Correct sizing and design can ensure stable deployments for large systems up to 6,000 agents and
180,000 BHCA. For smaller deployments, cost savings can be achieved with careful planning and
co-resident ICM components (for example, Progger, Rogger, and Agent PG).
Additionally, designers should pay careful attention to the sizing variables that will impact sizing
capacities such as skill groups per Agent. While it is often difficult to determine these variables in the
pre-sales phase, it is critical to consider them during the initial design, especially when deploying
co-resident PGs and Proggers. While new versions will scale far higher, the Cisco Agent Desktop
Monitor Server is still limited in the number of simultaneous sessions that can be monitored by a single
server when monitoring and recording are required.

Chapter 6
Sizing Cisco CallManager Servers For IPCC

This chapter discusses the concepts, provisioning, and configuration of Cisco CallManager clusters
when used in an IPCC Enterprise environment. Cisco CallManager clusters provide a mechanism for
distributing call processing across a converged IP network infrastructure to support IP telephony,
facilitate redundancy, and provide feature transparency and scalability.
This chapter covers only the IPCC Enterprise operation of clusters within both single and multiple
campus environments and proposes reference designs for implementation. Before reading this chapter,
Cisco recommends that you study the details about the operations of Cisco CallManager clusters
presented in the Call Processing chapter of the Cisco IP Telephony Solution Reference Network Design
(SRND) guide, available at
http://www.cisco.com/go/srnd
The information in this chapter builds upon the concepts presented in the Cisco IP Telephony SRND.
Some duplication is necessary to clearly present the concepts relating to IPCC as an application supported by the
Cisco CallManager call processing architecture. However, the foundational concepts are not duplicated
here, and you should become familiar with them before continuing with this chapter.
This chapter documents general best practices and scalability considerations for sizing the Cisco
CallManager servers used with your IPCC Enterprise deployments. Within the context of this document,
scalability refers to Cisco CallManager server and/or cluster capacity when used in the IPCC Enterprise
environment.

Call Processing With IPCC Enterprise


Before applying the guidelines presented in this chapter, you should perform the following steps, which
will have an impact on the Cisco CallManager cluster scalability:
• Determine customer call center application requirements (IP IVR, ISN, outbound, multi-channel,
and so forth).
• Determine the types of call center resources and devices used in IPCC Enterprise (route points, CTI
ports, and so forth):
– Number of required IPCC Enterprise agents
– Number of required IP IVR CTI ports or ISN ports (or sessions)
– Number of CTI route points (ICM route points and IVR route points)
– Number of PSTN trunks
– Estimated busy hour call attempts (BHCA) for all agents and devices mentioned above
(Inbound or outbound?)


– Percent of conferenced and/or transferred calls


• Determine the required deployment model (single site, centralized, distributed, clustering over the
WAN, or remote branches within centralized or distributed deployments).
• Determine the placement of solution components in the network (gateways, agents, ISN, and so
forth).
• Determine the different types of call flows and call disposition, such as:
– Simple call flow (IVR self-service or direct agent transfer without any IVR call treatment)
Simple call flows are those that do not involve multiple call handling (for example, IVR
self-service, incoming calls from a gateway directly to a phone, internal calls, and so forth).
– Complex call flow (IVR call treatment or database lookup prior to agent transfer, call
redirection to a route point, CTI route point, CTI ports, agent-to-agent transfer and conference,
or consultation or conference from an agent to a skill group)
Complex call flows are those that involve multiple call redirects and call handling of the
original call (for example, incoming calls to central route points redirected to CTI route points
and then to IP IVR for call treatment, then transferred or redirected to another target such as an
agent). These multiple call processing segments of the original call consume more CPU
resources compared to simple call handling.

IPCC Clustering Guidelines


The following guidelines apply to all Cisco CallManager clusters with IPCC Enterprise.

Note A cluster may contain a mix of server platforms, but all servers in the cluster must run the same Cisco
CallManager software release and service pack. The publisher server should be of equal or higher
capability than the subscriber servers. (See Table 6-2.)

• Devices (including phones, music on hold, route points, gateway ports, CTI ports, JTAPI Users, and
CTI Manager) should never reside or be registered on the publisher. Any administrative work on
Cisco CallManager will impact call processing and CTI Manager activities if there are any devices
registered with the publisher.
• Do not use a publisher as a failover or backup call processing server unless you have fewer than 50
agent phones and the installation is not mission critical or is not a production environment. The
Cisco MCS-7825H-3000 is the minimum server required. Any deviations will require review by
Cisco Bid Assurance on a case-by-case basis.
• Any deployment with more than 50 agent phones requires a minimum of two subscriber servers and
a combined TFTP and publisher.
• If you require more than one primary subscriber to support your configuration, then distribute all
agents equally among the cluster nodes. This assumes BHCA is uniform across all agents (average
BHCA processed is about the same on all nodes).
• Similarly, distribute all gateway ports and IP IVR CTI ports equally among the cluster nodes.
• If you require more than one ICM JTAPI user (CTI Manager) and more than one primary subscriber,
then group and configure all devices monitored by the same ICM JTAPI User (third-party
application provider), such as ICM route points and agent devices, in the same server if possible.


• If you have a mixed cluster with IPCC and general office IP phones, group and configure each type
on a separate server if possible (unless you need only one subscriber server). For example, all IPCC
agents and their associated devices and resources (gateway ports, CTI ports, and so forth) would be
on one or more Cisco CallManager servers, and all general office IP phones and their associated
devices (such as gateway ports) would be on other Cisco CallManager servers, as long as cluster
capacity allows. In this case, the 1:1 redundancy scheme is strongly recommended. (See Call
Processing Redundancy with IPCC, page 6-9, for details.)
• Under normal circumstances, place all servers from the Cisco CallManager cluster within the same
LAN or MAN. Cisco does not recommend placing all members of a cluster on the same VLAN or
switch.
• If the cluster spans an IP WAN, you must follow the specific guidelines for clustering over the IP
WAN as described in both the section on Clustering Over the WAN, page 2-15 in this guide, and
the section on Clustering Over the IP WAN in the Cisco IP Telephony Solution Reference Network
Design (SRND) guide, available at
http://www.cisco.com/go/srnd
For additional Cisco CallManager clustering guidelines, refer to the Cisco IP Telephony Solution
Reference Network Design (SRND) guide at
http://www.cisco.com/go/srnd

IPCC Enterprise with Cisco CallManager Releases 3.1 and 3.2


The following guidelines apply to Cisco CallManager releases 3.1 and 3.2. For specific Cisco
CallManager and IPCC supported releases, refer to the Cisco CallManager Compatibility Matrix,
available at
http://www.cisco.com/univercd/cc/td/doc/product/voice/c_callmg/ccmcomp.htm
• Within a cluster, you may enable a maximum of 6 call processing servers (4 primary and 2 backup
servers) with the Cisco CallManager Service. Other servers may be used for more dedicated
functions such as Trivial File Transfer Protocol (TFTP), database publisher, music on hold, and so
forth.
• You can configure a maximum of 800 Computer Telephony Integration (CTI) connections or
associations per server, or a maximum of 3200 per cluster if they are equally balanced among the
four primary servers. This maximum would include IPCC agent phones, IP IVR CTI ports, CTI
route points, and other CTI devices.
• Each H.323 device can support up to 500 H.323 calls with Cisco CallManager Release 3.1 or 1000
calls with Cisco CallManager Release 3.2.
• The default trace setting for Cisco CallManager Release 3.1 or 3.2 is different than the setting for
later releases (Cisco CallManager Release 3.3 and later) and typically has a lesser impact on
disk I/O. When upgrading to Cisco CallManager Release 3.3 or later, ensure that the installed
MCS-7800 Series server is able to handle the maximum rated agent capacity. Servers that do not
have the capability to add the battery-backed write cache (BBWC) enabler kit typically are rated at
half the capacity of the equivalent server with the BBWC installed. (This does not mean that you
can double the agent capacity simply by installing the BBWC because the capacity might be limited
by other server resources such as processor speed and memory. BBWC helps reduce disk I/O
contention, thus allowing the CPU to process a higher transaction load.)


• The default trace file location for the Cisco CallManager and signal distribution layer (SDL) is on
the primary drive. This trace file should be redirected to the secondary F drive array, and the CTI
default trace file location should be directed to the C drive array. This configuration will have the
least impact on disk I/O resources.

IPCC Enterprise with Cisco CallManager Releases 3.3 and Later


The following guidelines apply to Cisco CallManager Release 3.3 and later:
• Within a cluster, you may enable a maximum of 8 servers with the Cisco CallManager Service,
including backup servers. Other (additional) servers may be used for more dedicated functions such
as TFTP, publisher, music on hold, and so forth.
• You can configure a maximum of 800 CTI connections or associations per standard server that does
not have battery-backed write cache (BBWC) installed (as defined in Table 6-2), or a maximum of
3200 per cluster, and a maximum of 2000 controlled devices per CTI application if they are equally
balanced among all servers (Cisco MCS-7835 server required).
• You can configure a maximum of 2500 CTI connections or associations per MCS-7845H with
battery-backed write cache (BBWC) or equivalent high-performance server (see Table 6-2), or a
maximum 10,000 per cluster, and a maximum of 2500 controlled devices per CTI application if they
are equally balanced among all servers. Again this maximum would include IPCC agent phones,
IP IVR CTI ports, CTI route points, and other third-party application CTI devices configured in
Cisco CallManager.
• Each Cisco CallManager cluster (four primary and four backup subscriber servers) can support up
to 2,000 IPCC agents and no more than 60,000 BHCA. The BHCA would be spread equally among
the eight call processing servers with 1:1 redundancy. (See Call Processing Redundancy with IPCC,
page 6-9, for redundancy schemes.) Each of the eight Cisco CallManager servers (MCS-7845H High
Performance Servers with BBWC installed) would support a maximum of 250 agents or 7,500
BHCA. In a failover scenario, the primary server would support a maximum of 500 agents and
15,000 BHCA. These capacities can vary, depending on your specific configuration (simple versus
complex call flows), as determined by the Cisco CallManager Capacity Tool. (See Cisco
CallManager Capacity Tool, page 6-5.)
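As an illustration of the capacity figures above, the following sketch checks whether an agent and BHCA load fits an eight-server cluster with 1:1 redundancy, where a surviving primary must absorb double its normal per-server load on failover. The helper function and its structure are illustrative, not part of the Cisco CallManager Capacity Tool, and real sizing must use that tool.

```python
# Failover limits from the text for MCS-7845H (with BBWC) servers:
# one primary absorbing its backup's load supports at most 500 agents
# and 15,000 BHCA.
MAX_FAILOVER_AGENTS = 500
MAX_FAILOVER_BHCA = 15000

def cluster_fits(agents: int, bhca: int, servers: int = 8) -> bool:
    """True if the load, spread equally over all call processing servers,
    stays within the per-server failover limits (double the normal load)."""
    per_server_agents = agents / servers
    per_server_bhca = bhca / servers
    return (2 * per_server_agents <= MAX_FAILOVER_AGENTS
            and 2 * per_server_bhca <= MAX_FAILOVER_BHCA)

print(cluster_fits(2000, 60000))  # → True (the stated cluster maximum)
print(cluster_fits(2400, 60000))  # → False (exceeds the agent limit)
```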

Cisco CallManager Platform Capacity Planning with IPCC


Many types of devices can register with a Cisco CallManager, including IP phones, IP IVR ports,
voicemail ports, CTI (TAPI/JTAPI) devices, gateways, and DSP resources such as transcoding and
conferencing. Each of these devices requires resources from the server platform with which it is
registered.
The required resources can include memory, processor, and I/O. Each device then consumes additional
server resources during transactions, which are normally in the form of calls. A device that handles only
6 calls per hour, such as a standard IP phone, consumes fewer resources than a device handling 30 calls
per hour, such as an IPCC agent phone, a gateway port, or an IP IVR port.
In prior Cisco CallManager software releases, Cisco has utilized various schemes to calculate the
capacity of a system using device weights, BHCA multipliers, and dial plan weights. These simple
schemes have been replaced by a capacity tool to allow for more accurate planning of the system. (See
Cisco CallManager Capacity Tool, page 6-5.)


Note If your system does not meet the guidelines in this document, or if you consider the system to be complex
(IP Telephony and IPCC mixed with other applications), contact your Cisco Systems Engineer (SE) for
proper sizing of the Cisco CallManager cluster.

Cisco CallManager Capacity Tool


The Cisco CallManager Capacity Tool (CCMCT) requires various pieces of information to provide a
calculation of the minimum size and type of servers required for a system. The information includes the
type and quantity of devices, such as IP phones, gateways, and media resources. For each device type,
the Capacity Tool also requires the average bust hour call rate and the average busy hour traffic
utilization. For example, if all IPCC phones make an average of 25 calls per hour and the average call
lasts 2 minutes, then the BHCA is 25 and the utilization is 0.83. (25 calls of 2 minutes each equals 50
minutes per hour on the phone, which is 50/60 = 0.83 of an hour.) Table 6-1 shows an example of the
input for the Capacity Tool.

Table 6-1 Sample Input for Cisco CallManager Capacity Tool

Average Busy Hour Average Busy Hour


Device or Port Call Rate (BHCA) Traffic Utilization
IP Telephony Input
IP phone 4 0.15
Unity connection port 20 0.8
CTI port – Type #1 (simple call, redirect) 8 0.3
CTI port – Type #2 (transfer, conference) 8 0.3
CTI route point 100
Third-party controlled line 8 0.3
Intercluster trunk gateways
Intercluster trunks 20 0.8
H323 client (phone) 4 0.15
H323 gateways
H323 gateway DS0s (T1 CAS, T1 PRI, E1 PRI, Analog) 20 0.8
MGCP gateways
MGCP gateway DS0s (T1 CAS, T1 PRI, E1 PRI, Analog) 20 0.8
MoH (Music on Hold) stream (coresident, maximum of
20 streams)
Transcoder 20 0.8
MTP resource (hardware, coresident software, or 20 0.8
standalone software)
Conference resource (hardware, coresident software, or 6 0.8
standalone software)
Dial plan
Directory numbers or lines


Route patterns
Translation patterns
IPCC Input
IPCC agents 30 0.8
ISN (prompt and collect or queueing)
ISN (self-service)
CTI ports or IP IVR (prompt and collect or queueing) 30 0.8
CTI ports or IP IVR (self-service) 30 0.8
CTI route points
H323 gateways
H323 gateway DS0s (T1 CAS, T1 PRI, E1 PRI, Analog) 20 0.8
MGCP gateways
MGCP gateway DS0s (T1 CAS, T1 PRI, E1 PRI, Analog) 20 0.8
% Agent-to-agent transfer 10%
% Agent conference 10%
IPCC Outbound
IPCC outbound predictive/preview agents 30 0.8
IPCC outbound direct preview agents 30 0.8
IPCC outbound dialer ports 60 0.8
IPCC outbound IVR ports 20 0.8
H.323 gateways
H.323 gateway DS0s (T1 CAS, T1 PRI, E1 PRI, analog) 20 0.8
MGCP gateways
MGCP gateway DS0s (T1 CAS, T1 PRI, E1 PRI, analog) 20 0.8

In addition to the device information, the Cisco CallManager Capacity Tool also requires information
regarding the dial plan, such as route patterns and translation patterns.
The IPCC input includes entries for agents (inbound and outbound), Internet Service Node (ISN) or
IP IVR ports, gateway ports, and the percentage of total calls that are transferred or conferenced.
When all the details have been entered, the Cisco CallManager Capacity Tool calculates how many
servers of the desired server type are required, as well as the number of clusters if the required capacity
exceeds a single cluster.
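The tool's final step can be thought of as a ceiling division over the total weighted load. The sketch below uses assumed placeholder numbers; the real Capacity Tool applies detailed per-device weights that are not public:

```python
import math

# Illustrative only: the real Capacity Tool uses detailed per-device weights.
# The numbers passed in below are placeholders, not tool outputs.

def servers_required(total_weighted_load: float, capacity_per_server: float) -> int:
    return math.ceil(total_weighted_load / capacity_per_server)

def clusters_required(call_processing_servers: int,
                      max_subscribers_per_cluster: int = 8) -> int:
    # CallManager 3.3 and later allows up to 8 subscribers per cluster
    # (see "Call Processing Redundancy with IPCC" later in this chapter).
    return math.ceil(call_processing_servers / max_subscribers_per_cluster)

servers = servers_required(total_weighted_load=1800, capacity_per_server=500)
print(servers, clusters_required(servers))  # 4 servers fit within 1 cluster
```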
At this time, the Cisco CallManager Capacity Tool is available to all Cisco employees and partners at
http://www.cisco.com/partner/WWChannels/technologies/resources/CallManager/


Supported Cisco CallManager Server Platforms for IPCC Enterprise
Table 6-2 lists the general types of servers you can use with IPCC Enterprise in a Cisco CallManager
cluster, along with their main characteristics.

Table 6-2   Types of Cisco CallManager Servers that Support IPCC

Server Type                          Characteristics                              IPCC Enterprise Recommendation (1)

Standard server:                     • Single processor                           Up to a maximum of 100 agents
MCS-7825H                            • Single power supply (not hot-swap)         (Not recommended for mission-critical
                                     • Non-RAID hard disk (not hot-swap)          call centers above 50 agents)

High-availability standard server:   • Single processor                           Up to a maximum of 250 agents
MCS-7835H with BBWC                  • Redundant power supplies (hot-swap)        (Maximum of 125 agents without
                                     • Redundant SCSI RAID hard disk array        BBWC installed)
                                       (hot-swap)

High-performance server:             • Dual processors                            Recommended for all mission-critical
MCS-7845H with BBWC                  • Redundant power supplies (hot-swap)        contact centers up to a maximum of
                                     • Redundant SCSI RAID hard disk arrays       500 agents
                                                                                  (Maximum of 250 agents without
                                                                                  BBWC installed)

1. Agent capacities are based on a maximum of 30 BHCA per agent in the busy hour.

The maximum number of IPCC Enterprise agents that a single Cisco CallManager server can support
depends on the server platform, as indicated in Table 6-3.

Table 6-3   Maximum Number of IPCC Enterprise Agents per Cisco CallManager (Release 3.3 or Later) Server Platform

Cisco CallManager MCS Server Platform and              Maximum IPCC           High-Availability    High-Performance
Equivalent Server Characteristics (1)                  Agents per Server (2)  Platform (3)         Server

• Cisco MCS-7845H-3000 (Dual Prestonia Xeon            500                    Yes                  Yes
  3.06 GHz or higher), 4 GB RAM (4)
• HP DL380-G3 3.06 GHz 2-CPU

• Cisco MCS-7845H-2400 (Dual Prestonia Xeon            500                    Yes                  Yes
  2400 MHz), 4 GB RAM (with the addition of
  battery-backed write cache, BBWC, installed
  separately) (5)
• HP DL380-G3 2400 MHz 2-CPU

• Cisco MCS-7845H-2400 (Dual Prestonia Xeon            250                    Yes                  Yes
  2400 MHz), 4 GB RAM (without BBWC)
• HP DL380-G3 2400 MHz 2-CPU

• Cisco MCS-7835H-3000 (Prestonia Xeon 3.06 GHz),      250                    Yes                  No
  2 GB RAM (with the addition of battery-backed
  write cache, BBWC, installed separately) (5)
• HP DL380-G3 3.06 GHz 1-CPU

• Cisco MCS-7835H-3000 (Prestonia Xeon 3.06 GHz),      125                    Yes                  No
  2 GB RAM (without BBWC)
• HP DL380-G3 3.06 GHz 1-CPU

• Cisco MCS-7825H-3000 (Pentium 4, 3.06 GHz),          100                    No                   No
  2 GB RAM (6)
• HP DL320-G2 3.06 GHz

1. For the latest information on server memory requirements, refer to Product Bulletin No. 2864, Physical Memory Recommendations for Cisco
   CallManager Version 4.0 and Later, available at http://www.cisco.com/en/US/products/sw/voicesw/ps556/prod_bulletin0900aecd80284099.html.
2. Agent capacities are based on a maximum of 30 BHCA per agent in the busy hour and a failover scenario.
3. A high-availability server supports redundancy for both the power supplies and the hard disks.
4. This server has the battery-backed write cache (BBWC) kit installed.
5. This server does not come with the battery-backed write cache (BBWC) kit installed. Without this kit, the capacity would be half the stated limit.
   The kit must be ordered and installed separately to achieve the maximum stated agent capacity.
6. The maximum number of IPCC agents supported on a single non-high-availability platform (such as the MCS-7825H) is 50 agents in a mission-critical
   call center. With a redundant configuration, this limit does not apply.

The following notes also apply to Table 6-3:


• Agent capacities are based on Cisco CallManager Release 3.3 and later, in failover mode.
• The maximum number of IPCC agents is 500 with Cisco CallManager Release 3.3 or later, or 250
IPCC agents with Cisco CallManager Release 3.2 or earlier.
• A single non-high-availability platform supports a maximum of 50 IPCC agents. With a redundant
server configuration, this limit does not apply.
• The Cisco MCS-7845I-3000 is not a supported MCS platform for Cisco CallManager. However, the
IBM server equivalent (IBM x345, 3.06 GHz dual CPU) is supported for IPCC deployments as a
software-only platform with OS 2000.2.6.
• The Cisco MCS-7815I-2000 server is a supported Cisco CallManager platform for Cisco IP
Telephony deployments only. It is not supported with IPCC Enterprise deployments, but lab or
demo setups can use this server.
• Newer MCS-7835H and MCS-7845H server platforms have the same capacities as shown in
Table 6-3.
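For quick validation in capacity-planning scripts, the platform limits in Table 6-3 can be expressed as a lookup. This sketch simply transcribes the table; the helper name is illustrative and not part of any Cisco tool:

```python
# Table 6-3 as a lookup: (platform, BBWC installed) -> max IPCC agents.
# BBWC = battery-backed write cache. Values transcribed from the table.

MAX_IPCC_AGENTS = {
    ("MCS-7845H-3000", True):  500,
    ("MCS-7845H-2400", True):  500,
    ("MCS-7845H-2400", False): 250,
    ("MCS-7835H-3000", True):  250,
    ("MCS-7835H-3000", False): 125,
    ("MCS-7825H-3000", False): 100,  # non-HA: only 50 agents if non-redundant
}

def check_capacity(platform: str, bbwc: bool, agents: int) -> bool:
    """True if the agent count fits the platform's Table 6-3 limit."""
    return agents <= MAX_IPCC_AGENTS[(platform, bbwc)]

print(check_capacity("MCS-7835H-3000", bbwc=False, agents=200))  # False: limit is 125
```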
For the latest information on supported platforms and specific hardware configurations, refer to the
online documentation at
http://www.cisco.com/en/US/products/hw/voiceapp/ps378/prod_brochure_list.html
The capacities outlined in this section provide a design guideline for ensuring an expected level of
performance for normal operating configurations. Higher levels of performance can be achieved by
disabling or reducing other functions that are not directly related to processing calls; conversely,
increasing the use of these functions reduces the call processing capacity of the system. Some of these
functions include tracing, call detail recording, highly complex dial plans and call flows, and other
services that are coresident on the server. Highly complex dial plans can include multiple line
appearances, many partitions, calling search spaces, route patterns, translations, route groups, hunt
groups, pickup groups, route lists, extensive use of Call Forward, coresident services, and other
coresident applications. All of these functions can consume additional memory resources within the
Cisco CallManager server. To improve performance, you can install additional certified memory in the
server, up to the maximum supported for the particular platform.
A Cisco CallManager cluster with a very large dial plan containing many gateways, route patterns,
translation patterns, and partitions can take an extended amount of time to initialize when the Cisco
CallManager Service is first started. If the system does not initialize within the default time, there are
service parameters that can be increased to allow additional time for the configuration to initialize. For
details on the service parameters, refer to the online help for Service Parameters in Cisco CallManager
Administration.

Call Processing Redundancy with IPCC


With all versions of Cisco CallManager and IPCC, you can choose from the following redundancy
configurations:
• 2:1 — For every two primary subscribers, there is one shared backup subscriber.
• 1:1 — For every primary subscriber, there is a backup subscriber.
The 1:1 redundancy scheme allows upgrades with only the failover periods impacting the cluster.
Cisco CallManager Release 3.3 and later supports up to eight subscribers (servers with the Cisco
CallManager service enabled), so you may have as many as four primary and four backup subscribers in
a cluster.
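A back-of-the-envelope count of subscribers under the 1:1 scheme, assuming the 500-agents-per-server limit of a high-performance server with BBWC (the helper name is illustrative):

```python
import math

# Sketch: how many subscribers a 1:1 cluster needs for a given agent count,
# assuming 500 agents per high-performance server with BBWC (Table 6-3).

def one_to_one_subscribers(agents: int, agents_per_server: int = 500) -> int:
    pairs = math.ceil(agents / agents_per_server)
    if pairs > 4:
        # 8-subscriber ceiling: 4 primary + 4 backup per cluster
        raise ValueError("exceeds 8 subscribers; a second cluster is required")
    return 2 * pairs  # one backup per primary

print(one_to_one_subscribers(2000))  # 8 subscribers (4 primary + 4 backup)
```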
The 1:1 redundancy scheme enables you to upgrade the cluster using the following method.

Step 1 Upgrade the publisher server.


Step 2 Upgrade dedicated TFTP and music on hold (MoH) servers.
Step 3 Upgrade all backup subscribers. This step impacts some users if 50/50 load balancing is
implemented: during this step, the Cisco CallManager service is stopped on each backup subscriber, and
its devices move to the primary subscriber.
Step 4 Fail over the primary subscribers to their backups, and stop the Cisco CallManager service on the
primaries. All users are on primaries and are moved to backup subscribers when the Cisco CallManager
service is stopped. CTI Manager is also stopped, causing the Peripheral Gateway (PG) to switch sides
and inducing a brief outage for agents on that particular node.
Step 5 Upgrade the primaries, and then re-enable the Cisco CallManager service.

With this upgrade method, there is no period (except for the failover period) when devices are registered
to subscriber servers that are running different versions of the Cisco CallManager software. This factor
can be important because the Intra-Cluster Communication Signaling (ICCS) protocol that
communicates between subscribers can detect a different software version and shut down
communications to that subscriber. This action could potentially partition a cluster for call processing,
but SQL and LDAP replication would not be affected.
The 2:1 redundancy scheme allows for fewer servers in a cluster, but it can potentially result in an outage
during upgrades. This is not a recommended scheme for IPCC, although it is supported if it is a customer
requirement and possible outage of call processing is not of concern to the customer.


The 2:1 redundancy scheme enables you to upgrade the cluster using the following method. If the Cisco
CallManager service does not run on the publisher database server, upgrade the servers in the following
order:

Step 1 Upgrade the publisher database server.


Step 2 Upgrade the Cisco TFTP server if it exists separately from the publisher database server.
Step 3 Upgrade servers, one server at a time, that have only Cisco CallManager-related services (music on hold,
Cisco IP Media Streaming Application, and so on) running on them. Make sure that you upgrade only
one server at a time. Make sure that the Cisco CallManager service does not run on these servers.
Step 4 Upgrade each backup server, one server at a time.

Note Cisco does not recommend that you oversubscribe the backup server(s) during the upgrade.
Cisco strongly recommends that you have no more than the maximum of 500 IPCC agents
registered to the backup server during the upgrade. Cisco strongly recommends that you perform
the upgrade during off-peak hours when low call volume occurs.

Step 5 Upgrade each primary server that has the Cisco CallManager service running on it. Remember to
upgrade one server at a time. During the upgrade of the second primary subscriber, there will be some
outage for users and agents subscribed on that server, until the server is upgraded. Similarly, when you
upgrade the fourth primary subscriber, there will be some outage for users and agents subscribed on that
server, until the server is upgraded.

Cluster Configurations for Redundancy


Figure 6-1 through Figure 6-5 illustrate typical cluster configurations to provide IPCC call processing
redundancy with Cisco CallManager.

Figure 6-1   Basic Redundancy Schemes

[Figure: side-by-side comparison of the two schemes. The 2:1 redundancy scheme (one shared backup
subscriber for every two primaries) offers cost-efficient redundancy but degraded service during
upgrades, and is not recommended. The 1:1 redundancy scheme (one backup per primary) offers high
availability during upgrades and simplified configuration.]


Figure 6-2   1:1 Redundancy Configuration Options

[Figure: five 1:1 cluster layouts. Option 1: a combined publisher/backup subscriber plus one primary
subscriber, maximum 50 agents. Options 2 through 5: a publisher and TFTP server(s) plus one, two,
three, or four primary/backup subscriber pairs, for maximums of 500, 1000, 1500, and 2000 agents,
respectively.]

Figure 6-3   2:1 Redundancy Configuration Options

[Figure: five 2:1 cluster layouts, from a combined publisher/backup subscriber plus one primary
subscriber (maximum 50 agents) up to configurations with a publisher and TFTP server(s) and backup
subscribers shared between primaries, for maximums of 500, 1000, 1500, and 2000 agents.]


Figure 6-4   1:1 IPCC Enterprise Redundancy with Cisco CallManager Release 3.3 or Later, with 50/50
Load Balancing (High-Performance Server with BBWC Installed)

[Figure: 50/50 load-balanced layouts for 500, 1000, and 2000 IPCC agents, each with a publisher and
TFTP server(s). Agents are assigned in blocks of 250 to primary/backup subscriber pairs; each
subscriber in a pair is primary for one block and backup for the other (for example, in the 500-agent
layout, one subscriber is primary for agents 1 to 250 and backup for agents 251 to 500, and its partner
is the reverse).]

Note MCS-7845H-2.4 Advanced server does not come with BBWC installed; BBWC must be ordered
separately.

Figure 6-5   1:1 Redundancy for Mixed Office and IPCC Phones with Cisco CallManager Release 3.3 or Later on
MCS-7845H-3000 High-Performance Server with 50/50 Load Balancing

[Figure: mixed-deployment layouts for 250 IPCC agents plus 3750 office phones, 500 agents plus 7500
phones, and 1000 agents plus 15000 phones, each with a publisher and TFTP server(s). Agents and
office phones are split 50/50 across primary/backup subscriber pairs, with separate pairs for agents and
for phones in the larger layouts.]


Load Balancing With IPCC


An additional benefit of using the 1:1 redundancy scheme is that it enables you to balance the devices
over the primary and backup server pairs. Normally (as in the 2:1 redundancy scheme) a backup server
has no devices registered unless its primary is unavailable.
With load balancing, you can move up to half of the device load from the primary to the secondary
subscriber by using the Cisco CallManager redundancy groups and device pool settings. In this way, you
can reduce by 50% the impact of any server becoming unavailable.
To plan for 50/50 load balancing, calculate the capacity of a cluster without load balancing and then
distribute the load across the primary and backup subscribers based on devices and call volume. To allow
for failure of the primary or the backup, the total load on the primary and secondary subscribers should
not exceed that of a single subscriber. For example, MCS-7845H-3000 servers have a total server limit
of 500 IPCC agents. In a 1:1 redundancy pair, you can split the load between the two subscribers,
configuring each subscriber with 250 agents. (See the configuration for 500 agents in Figure 6-2.) To
provide for failover conditions when only one server is active, make sure that all capacity limits are
observed so that IPCC agent phones, IP phones, CTI limits, and so on, for the redundant pair do not
exceed the limits allowed for a single server.
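The failover rule described above can be sketched as a simple check: under failover, one subscriber carries both halves, so the pair's combined load must fit within a single server's limit. The function name and default value below are illustrative:

```python
# 50/50 load-balancing sketch: a redundant pair's total agents must not
# exceed one server's limit, because failover puts the whole load on one box.

def balanced_split(total_agents: int, single_server_limit: int = 500):
    if total_agents > single_server_limit:
        raise ValueError("pair would exceed single-server failover capacity")
    per_subscriber = total_agents // 2
    return per_subscriber, total_agents - per_subscriber

print(balanced_split(500))  # (250, 250): each MCS-7845H-3000 carries 250 agents
```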
For additional information on general call processing topics such as secondary TFTP servers and
gatekeeper considerations, refer to the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd

Impact of IPCC Application on Cisco CallManager Performance and Scalability
Cisco CallManager system performance is influenced by many factors such as:
• Software release versions
• The type and quantity of devices registered, such as:
– CTI ports
– Gateway ports
– Agent phones
– Route points
– CTI Manager
• The load (BHCA) processed by these devices. As the call rate increases, more CPU resources are
consumed on the Cisco CallManager server.
• Average call duration — Longer average call duration means a lower busy-hour call completion
rate, which lowers CPU usage.
• Special Cisco CallManager configurations and services such as:
– MOH
– Tracing levels


• Server platform type:


– Standard
– High-performance
• Application call flow complexity (See the definitions of simple and complex call flows in the section
on Call Processing With IPCC Enterprise, page 6-1.)
– IVR self-service
– Call treatment
– Routing to agents
– Transfers and conferences
CPU consumption varies by type of call flow. For simple call flows, the CPU consumption is
moderate, but CPU consumption for complex call flows is much higher.
• Tests conducted with a complex call flow (call treatment then transfer to agents) using IP IVR with
H323 gateways show an increase in CPU usage compared to the same call flow using ISN (H.323
gateway). This difference is due to the fact that ISN does not require calls to be routed to Cisco
CallManager before call treatment; instead, Cisco CallManager is involved only when calls are
transferred to agents (simple call handling). The trade-off is that ISN gateways have increased
performance demands. (See Sizing ISN Components, page 4-20, for more information).
• Similarly, complex call flows using IP IVR with Media Gateway Control Protocol (MGCP)
gateways show an increase in CPU usage compared to the same call flow using ISN (H.323
gateway). This difference is due to the way ISN routes the calls (as described in the preceding
paragraph) and to the fact that the H.323 gateway protocol uses more CPU resources than MGCP
does.
• ISN configurations, simple call flow configurations, and a lower call arrival rate (BHCA) might be
able to support more than 2,000 agents per Cisco CallManager cluster. Please consult with your
Cisco Systems Engineer for proper sizing of your system requirements.
• Trace level enabled
Cisco CallManager CPU resource consumption varies, depending on the trace level enabled.
Changing the trace level from Default to Full on Cisco CallManager can increase CPU consumption
significantly under high loads. (Changing the tracing level from Default to No tracing can decrease
CPU consumption significantly at high loads, but this is not a recommended configuration and is
not supported by Cisco Technical Assistance Center.) CPU consumption due to Default traces will
vary based on load, Cisco CallManager release, applications installed, call flow complexity, and so
forth.
• Memory consumption and disk I/O resources (battery-backed write cache)
• Phone authentication and encryption
If you are using more than one primary Cisco CallManager server, it is important to balance all
resources across the servers as evenly as possible. This balancing of resources prevents overloading one
server while underutilizing the others.

Chapter 7
Agent Desktop and Supervisor Desktop

An agent desktop is a required component of an IPCC deployment. From the agent desktop, the agent
performs agent state control (login, logout, ready, not ready, and wrap-up) and call control (answer,
release, hold, retrieve, make call, transfer, and conference).
Within the Cisco Intelligent Contact Management (ICM) configuration, an IPCC agent desktop is not
statically associated with any specific agent or IP Phone extension. Agents and IP Phone extensions
(device targets) must be configured within the ICM configuration, and both are associated with a specific
Cisco CallManager cluster. When logging in from an agent desktop, the agent is presented with a dialog
box that prompts for agent ID, password (optional, depending upon agent configuration in the ICM), and
the IPCC phone extension to be used for this login session. It is at login time that the agent ID, IP Phone
extension (device target), and agent desktop IP address are all dynamically associated. The association
is released upon agent logout. This mechanism enables an agent to hot-desk from one agent desktop to
another. It also provides for laptop roaming so that an agent can take their laptop to any IP Phone and
log in from that IP Phone (assuming the IP Phone has been configured in the ICM and in Cisco
CallManager to be used in an IPCC deployment). Agents can also log in to other IP Phones using the
extension mobility feature.
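The dynamic association described above can be modeled with a minimal sketch (this is an illustration, not Cisco's implementation): the agent ID, IP Phone extension, and desktop IP address are bound at login and released at logout, which is what makes hot-desking and laptop roaming possible:

```python
# Illustrative model of the login-time association: agent ID -> (extension,
# desktop IP). The binding exists only for the duration of the login session.

active_sessions: dict[str, tuple[str, str]] = {}

def login(agent_id: str, extension: str, desktop_ip: str) -> None:
    # Agent ID, device target (extension), and desktop IP bound dynamically.
    active_sessions[agent_id] = (extension, desktop_ip)

def logout(agent_id: str) -> None:
    # Association released on logout, freeing the phone for another agent.
    active_sessions.pop(agent_id, None)

login("1001", "5123", "10.1.1.42")   # agent hot-desks to this phone/desktop
print(active_sessions["1001"])        # ('5123', '10.1.1.42')
logout("1001")
```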
All communication from the agent desktop passes through the CTI OS Server (see Figure 7-1). The CTI
OS Server can run on the same Peripheral Gateway (PG) server as the Cisco CallManager PG process
(typical scenario) or on a separate server. If the CTI OS Server runs on its own platform, then that server
is sometimes called a CTI gateway (CG) as opposed to a Peripheral Gateway (PG). The hardware and
third-party software requirements for a CG and PG are the same. Server sizing is discussed in the chapter
on Sizing IPCC Components and Servers, page 5-1.


Figure 7-1   Agent Desktop Communication with CTI OS Server

[Figure: IPCC agent desktops exchange CTI/call control data with the CTI OS server on the PG server.
Within the PG server, the CTI OS server connects to the CTI server, which communicates through OPC
with the CCM PIM and the IVR PIMs, and the PG Agent links the PG to the ICM central controller.
The CCM PIM connects via JTAPI to the Cisco CallManager cluster; the IVR PIMs connect via SCI to
IP IVR 1 and IP IVR 2, which in turn connect via JTAPI to the cluster. A voice gateway carries TDM
voice from the PSTN into the cluster as IP voice to the agents' IP phones.]

For each Cisco CallManager PG (and Cisco CallManager cluster), there is one CTI OS Server. The CTI
OS Server and the Cisco CallManager PG communicate with each other via the Open Peripheral
Controller (OPC) process. All communications from the CTI OS Server are passed to the CTI Server,
then via OPC to the Cisco CallManager PG process, then typically to either the ICM Central Controller
or the Cisco CallManager.
There may be one or more CTI OS Servers connecting to the CTI Server. The CTI OS Server interfaces
with the CTI OS desktop and toolkit as well as Cisco Agent Desktop (Release 6.0 and later). All agent
state change requests flow from the agent desktop through CTI OS to the CTI Server to the Cisco
CallManager PG to the ICM Central Controller. The ICM Central Controller monitors the agent state so
that it knows when it can and cannot route calls to that agent and can report on that agent's activities.
Call control (answer, release, hold, retrieve, make call, and so on) flows from the agent desktop through
the CTI OS Server to the CTI Server to the Cisco CallManager PG and then to the Cisco CallManager.
The Cisco CallManager then performs the requested call or device control. It is the role of the Cisco
CallManager PG to keep the IPCC agent desktop and the IP Phone in sync with one another.
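The two message paths just described can be written out as ordered hop lists (component names as in Figure 7-1; this is purely descriptive and not a product API):

```python
# Descriptive sketch of the two message flows: agent state changes terminate
# at the ICM Central Controller, call control terminates at CallManager.

AGENT_STATE_FLOW = [
    "agent desktop", "CTI OS Server", "CTI Server",
    "CallManager PG", "ICM Central Controller",
]
CALL_CONTROL_FLOW = [
    "agent desktop", "CTI OS Server", "CTI Server",
    "CallManager PG", "Cisco CallManager",
]

# The two paths share every hop except the final destination:
assert AGENT_STATE_FLOW[:-1] == CALL_CONTROL_FLOW[:-1]
print(" -> ".join(CALL_CONTROL_FLOW))
```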

Types of IPCC Agent Desktops


There are three types of IPCC agent and supervisor desktops available:
• Cisco Agent Desktop — A packaged agent desktop solution.
• CTI Object Server (CTI OS) — A toolkit for agent desktops that require customization or
integration with other applications on the desktop or with customer databases such as a Customer
Relationship Management (CRM) application.


• Prepackaged CRM integrations — These integrations are available through Cisco CRM Technology
Partners. These integrations are based on the CTI OS toolkit and are not discussed individually in
this document.
In addition to an agent desktop, a supervisor desktop is available with the Cisco Agent Desktop and
CTI OS options.
Cisco Agent Desktop is a packaged agent and supervisor desktop application. It has a system
administration interface that allows configuration of the desktop and workflow automation. Desktop
configuration includes: defining what buttons are visible; specifying call, voice, and data processing
functions for buttons; and specifying what telephony data will appear on the desktop. The workflow
automation enables data processing actions to be scheduled based on telephony events (for example,
popping data into third-party applications on answer and sending email on dropped events). The
workflow automation interfaces with applications written for Microsoft Windows browsers and terminal
emulators. Some customizations can be as simple as using keystroke macros for screen pops.
While CTI OS is a toolkit, it does provide pre-built, operational agent and supervisor desktop
executables, and source code for these executables is provided. Source code is also provided for a
number of sample applications included with the toolkit to allow for easy customization. The CTI OS
Toolkit provides the most flexibility: it allows a custom agent or supervisor desktop to be developed,
and it offers advanced tools for integrating the desktop with a database, CRM, or other applications.
Aside from the differences between configured versus customized applications, one major distinction
between the two desktop solutions is that Cisco Agent Desktop offers all of the following features:
• Ad-hoc recording (CTI OS users must rely on a third-party recording solution.)
• IP Phone Agents (IPPA) — This is an XML application that allows agents on Cisco 7940 and 7960
IP Phones to log in and perform basic ACD functions from their phones.
• SPAN port silent monitoring — A server-based and switch-based silent monitor solution that works
with IPPA as well as Cisco Agent Desktop agents. CTI OS does offer endpoint monitoring, but it
requires a PC to be running at the agent's location, which is not the case with IPPA.
Prepackaged CRM integrations are provided by the major CRM manufacturers. These packages are
based upon either CTI or CTI OS tools.
Cisco Agent Desktop, Supervisor Desktop, and CTI OS desktops cannot be mixed on the same Cisco
CallManager PG; the configuration of agents and supervisors must be kept separate. Cisco Supervisor
Desktop cannot be used to monitor a CTI OS agent desktop, nor can a CTI OS supervisor monitor a
Cisco Agent Desktop agent.
The following sections cover these two desktop options separately. Both rely upon communication with
the CTI Server, as described in the previous section.

Cisco Agent Desktop and Cisco Supervisor Desktop


Throughout this section, statements about Cisco Agent Desktop apply to both Cisco Agent Desktop and
Cisco Supervisor Desktop, except where specifically noted. The Cisco Supervisor Desktop integrates
with Cisco Agent Desktop and allows supervisory functions such as barge-in, intercept, and silent
monitoring.
The Cisco Agent Desktop and Supervisor Desktop Product Suite is a client-server application providing
packaged CTI functionality for Cisco ICM and CTI. Cisco Agent Desktop includes a set of base server
applications as well as a VoIP Monitor server application.


One of the base Cisco Agent Desktop services, the Enterprise server, is a monitor-only CTI application
that provides value-added services to the agent desktop. Similarly, the other Cisco Agent Desktop
services provide value-added features such as recording and chatting. Prior to Release 6.0, Cisco Agent
Desktop received its CTI input from the CTI server; with Release 6.0 and later (except for IPCC
Express), Cisco Agent Desktop receives its CTI input from the CTI OS Server.
The Cisco Agent Desktop servers may be co-resident on the Peripheral Gateway (PG). As the number
of agents increases, the Cisco Agent Desktop servers might require a dedicated server. For more
information on server requirements, refer to the chapter on Sizing IPCC Components and Servers, page
5-1.
Figure 7-2 illustrates the system components.

Figure 7-2   Cisco Agent Desktop

[Figure: an Ethernet LAN connects the ICM, IP IVR, Cisco CallManager, and a PG with CTI OS hosting
the Cisco Desktop Base server, Cisco Desktop VoIP Monitor server, and Cisco Desktop Recording
server (plus an optional second PG with CTI OS hosting the same services for redundancy). Also on the
LAN: an administrator PC (Administrative Workstation), a supervisor PC (Cisco Supervisor Desktop
and Cisco Agent Desktop), and agent PCs (Cisco Agent Desktop, with optional media termination). A
Cisco voice gateway connects the PSTN to the agents' IP phones, and a Cisco Catalyst switch provides
an optional SPAN port for silent monitoring.]

Cisco Agent Desktop Release 6.0(1) includes the following new features:
• CTI OS-based implementation
• No longer dependent on shares; configuration data is now stored in Directory Services
• Desktops are automatically updated when new versions are detected at startup
• System redundancy
Cisco Desktop Administrator includes the following new features:
• Configuration settings are set up and maintained through the Cisco Agent Desktop Configuration
Setup utility, which can be accessed through the Desktop Administrator (or as a standalone
program). These configuration settings are no longer set up during the installation process.
• HTTP Post/Get action enables interaction between Agent Desktop (Premium version only) and
web-based applications.


Cisco Agent Desktop includes the following new features:


• Improved Agent Desktop interface now includes call control, enterprise data, call activity, and an
integrated browser in one window.
• Cisco Outbound Option now includes the Direct Preview Dialing mode.
• Improved status bar provides more information about the user and system status.
• Accessibility improved with the addition of improved icons, screen reader-compatible tool tips for
all controls, screen reader-compatible shortcut key, and audible tones that sound when a
non-agent-initiated dialog appears (for example, chat windows or supervisor interventions).
Cisco Supervisor Desktop includes the following new features:
• Agent call data now includes the skill group.
• Provides access to agent logs.
• Individual agent statistics combined into a Team Agent Statistics report.
• Improved status bar provides more information about system status.
• Report Preferences allow you to choose which columns are displayed in reports.
Chat capability has been enhanced in the following ways:
• Agents are no longer limited to chatting with conference call participants; agents can chat with
supervisors and other team members.
• Chat Selection window displays agent phone hook states.
• Messages can be tagged as high priority for immediate notice.
Recording capability has been enhanced in the following ways:
• Recording is scalable, with the ability to record more calls simultaneously (32 calls in the Enhanced version or 80 in the Premium version)
• Multiple dedicated Recording & Playback servers
For more information, refer to the Cisco Agent Desktop and Supervisor Desktop product documentation,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm46doc/ipccdoc/cadall/cad60d/index.htm

Cisco Agent Desktop


The Cisco Agent Desktop software provides the core component of the Cisco Agent Desktop
application: the softphone, workflow automation, login, call and agent event logging, and agent
real-time statistics.
With Cisco Agent Desktop, you have the option of using either a Cisco IP Phone (7940 or 7960) or media
termination (softphone). The media termination softphone allows the agent to make, answer, transfer,
and conference calls. If your version of Cisco Agent Desktop includes media termination, agents do not
need a physical IP phone; they can use the Cisco Agent Desktop softphone by itself.
For more information on the Cisco Agent Desktop, refer to the Cisco Agent Desktop User’s Guide,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm46doc/ipccdoc/cadall/cad60d/index.htm


IP Phone Agent (IPPA)


Cisco IP Phone Agent (IPPA) provides the ability to use the Cisco IP Phone as the agent's device. Prior to Cisco Agent Desktop Release 6.0, IPPA could be used only as a backup to the desktop application. With Release 6.0, IPPA is supported as the agent's sole device. IPPA uses the XML display capability of the Cisco 7940 and 7960 IP Phones to provide a basic text interface that allows agents to log in and out, change state, and enter reason codes and wrap-up data. IPPA also provides the agent with caller data and queue data displays.
For more information on the Cisco IP Phone Agent, refer to the IP Phone Agent User’s Guide, available
at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm46doc/ipccdoc/cadall/cad60d/index.htm

Cisco Supervisor Desktop


Cisco Supervisor Desktop provides a graphical view of the agent team being managed by the supervisor.
An expandable navigation tree control, similar to Windows Explorer, is used to navigate to and manage
the team's resources.
Cisco Supervisor Desktop requires an instance of Cisco Agent Desktop running co-resident on the
supervisor's PC. This instance of Agent Desktop is the same as the instance of Agent Desktop on the
agent PCs.
The Supervisor Desktop installation includes installation of both the Supervisor Desktop software and
the instance of Agent Desktop software. During the Supervisor Desktop installation process, you are
prompted to choose the option of using either a hardware IP Phone (either the Cisco 7940 or 7960) or
media termination (softphone). The instance of Agent Desktop allows the supervisor to take calls and
enables barge-in, intercept, and retrieval of skill group statistics.
For more information on the Supervisor Desktop, refer to the IPCC Supervisor Desktop User’s Guide,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm46doc/ipccdoc/cadall/cad60d/index.htm

CTI Object Server (CTI OS) Toolkit


Cisco CTI Object Server (CTI OS) is a high-performance, scalable, fault-tolerant, server-based solution for deploying CTI applications. It is Cisco's latest version of the CTI implementation. CTI OS serves as
a single point of integration for third-party applications, including Customer Relationship Management
(CRM) systems, data mining, and workflow solutions. Configuration and behavior information is
managed at the server, simplifying customization, updates, and maintenance. Servers can be accessed
and managed remotely. Thin-client and browser-based applications that do not require Cisco software
on the desktop can be developed and deployed with CTI OS.
CTI OS incorporates the following major components:
• CTI OS Toolkit
• Client Interface Library
• CTI OS Agent Phone
• CTI OS Supervisor Phone


Architecturally, CTI OS Server is positioned between the CTI OS agent desktop and the CTI Server.
CTI OS Server provides a mechanism to maintain agent and call state information so that the agent
desktop can be stateless. This architecture provides the necessary support to develop a browser-based
agent desktop if desired.
The CTI OS system consists of three major components (see Figure 7-3):
• CTI OS Server
• CTI OS Agent Desktop
• CTI OS Supervisor Desktop (only on Cisco IPCC for now)

Figure 7-3 CTI OS Basic Architecture

[Figure: the PG/CTI platform hosts the CallManager PIM, the Cisco CTI Server, and the CTIOS server node; the PIM connects to Cisco CallManager via JTAPI, and the CTIOS server connects via TCP/IP to the CTIOS agent/supervisor desktop (ActiveX controls, COM CtiosClient, and C++ CIL) on the agent workstation; the agent's IP telephone carries the voice path through CallManager]

CTI OS Server connects to CTI Server via TCP/IP.


CTI OS typically runs on the same server as the CTI Server and Cisco CallManager PG processes. As
an IPCC site gets larger, the CTI OS server process is the first process you should split off from the
PG/CTI Server. Multiple CTI OS server processes can connect to a CTI Server. A single CTI OS server
can support a maximum of 500 simultaneous agent logins, and additional CTI OS servers can be added
to exceed this limitation. Server sizing for CTI OS is covered in the chapter on Sizing IPCC Components
and Servers, page 5-1.
CTI OS is typically installed in duplex mode, with two CTI OS servers running in parallel for
redundancy. The CTI OS desktop application will randomly connect to either server and automatically
fail over to the other server if the connection to the original CTI OS server fails. CTI OS can also run in
simplex mode with all clients connecting to one server, but Cisco does not recommend this
configuration.
Endpoint Silent Monitoring was introduced in CTI OS Release 5.1. A supervisor can choose to silently
monitor an agent on his/her team. Silent monitoring means that voice packets sent to and received by the
agent's IP hard phone are captured from the network and sent to the supervisor desktop. At the supervisor
desktop, these voice packets are decoded and played on the supervisor's system sound card.
For more information, refer to the Release Notes for CTI OS Release 6.0, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6cti/ctios60/index.htm


CTI OS also provides the following features:


• CTI OS JavaCIL API — SDK for Java-based desktops for agents and supervisors
• CTI OS Supervisor support for Agent Availability Status for IPCC across multi-media domains —
Agent availability in multi-media channels (email, web) displayed on the supervisor's desktop
• Siebel Support — IPCC Enterprise Adapter certified for use with Siebel 7.05 and 7.53
For more information, refer to the Cisco CTI Object Server product documentation, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6cti/ctios60/index.htm

Additional Information about Cisco Agent Desktop and


Supervisor Desktop
The following additional information related to Cisco Agent Desktop and Cisco Supervisor Desktop is
available at the listed URLs:
• CTI Compatibility Matrix
Provides tables outlining ICM Peripheral Gateway (PG) and Object Server (OS) support for versions
of Cisco Agent Desktop, CTI OS Server, CTI OS Client, Data Collaboration Server (DCS),
Siebel 6, and Siebel 7.
http://www.cisco.com/en/US/products/sw/custcosw/ps14/prod_technical_reference_list.html
• Voice-Over IP Monitoring Best Practices Deployment Guide for CAD 6.0/6.1
This document provides information about the abilities and requirements of voice over IP (VoIP)
monitoring for Cisco Agent Desktop (CAD) Releases 6.0 and 6.1. This information is intended to
help you deploy VoIP monitoring effectively.
http://www.cisco.com/application/pdf/en/us/guest/products/ps427/c1225/ccmigration_09186a008038a48d.pdf
• Integrating Cisco Agent Desktop into a Citrix Thin-Client Environment
This document helps guide a Citrix administrator through the installation of Cisco Agent Desktop
Release 6.0 applications in a Citrix thin-client environment.
http://www.cisco.com/application/pdf/en/us/partner/products/ps427/c1244/cdccont_0900aecd800e9db4.pdf
• Cisco Agent Desktop Service Information
This document provides release-specific information such as product limitations, service connection
types and port numbers, configuration files, registry entries, event/error logs, error messages, and
troubleshooting.
http://www.cisco.com/en/US/products/sw/custcosw/ps427/prod_technical_reference_list.html
• Cisco ICM Software CTI Server Message Reference Guide (Protocol Version 9)
This document describes the CTI Server message interface between Cisco ICM software and
application programs.
http://www.cisco.com/application/pdf/en/us/guest/products/ps14/c1667/ccmigration_09186a0080225251.pdf


• Cisco ICM Software CTI OS Developer's Guide


This document provides a brief overview of the Cisco CTI Object Server (CTI OS), introduces
programmers to developing CTI-enabled applications with CTI OS, and describes the syntax and
usage for CTI OS methods and events.
http://www.cisco.com/application/pdf/en/us/guest/products/ps14/c1667/ccmigration_09186a0080228190.pdf

Chapter 8
Bandwidth Provisioning and QoS Considerations

This chapter presents an overview of the IPCC Enterprise network architecture, deployment
characteristics of the network, and provisioning requirements of the IPCC network. Essential network
architecture concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow
categorization, IP-based prioritization and segmentation, and bandwidth and latency requirements.
Provisioning guidelines are presented for network traffic flows between remote components over the
WAN, including recommendations on how to apply proper Quality of Service (QoS) to WAN traffic
flows. For a more detailed description of the IPCC architecture and various component internetworking,
see Architecture Overview, page 1-1.
Cisco IP Contact Center (IPCC) has traditionally been deployed using private, point-to-point leased-line
network connections for both its private (duplexed controller, side-to-side) as well as public (Peripheral
Gateway to Central Controller) WAN network structure. Optimal network performance characteristics
(and route diversity for the fault tolerant failover mechanisms) are provided to the IPCC application only
through dedicated private facilities, redundant IP routers, and appropriate priority queuing.
Enterprises deploying networks that share multiple traffic classes, of course, prefer to maintain their
existing infrastructure rather than revert to an incremental, dedicated network. Convergent networks
offer both cost and operational efficiency, and such support is a key aspect of Cisco Powered Networks.
Beginning with IPCC Enterprise Release 5.0, application layer Quality of Service (QoS) packet marking
on the IPCC public path is supported from within the IPCC application, thus simplifying WAN
deployment in a converged network environment when that network is enabled for QoS. QoS
deployment on the public network enables remote Peripheral Gateways (PGs) to share a converged
network and, at the same time, guarantees the stringent ICM/IPCC traffic latency, bandwidth, and
traffic-related prioritization requirements inherent in the real-time requirements of the product. This
chapter presents recommendations for configuring QoS for the traffic flows over the WAN. The public
network that connects the remote PGs to the Central Controller is the main focus.
Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services
(DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve
the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state
information of thousands of reservations has to be maintained at every router along the path. DiffServ,
in contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied
to the traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution,
DiffServ is more widely used and accepted. IPCC applications are not aware of RSVP and, therefore,
IPCC does not support IntServ. The QoS considerations in this chapter are based on DiffServ.
Adequate bandwidth provisioning is a critical component in the success of IPCC deployments.
Bandwidth guidelines and examples are provided in this chapter to help with provisioning the required
bandwidth.


IPCC Network Architecture Overview


IPCC is a distributed, resilient, and fault-tolerant network application that relies heavily on a network
infrastructure with sufficient performance to meet the real-time data transfer requirements of the
product. A properly designed IPCC network is characterized by proper bandwidth, low latency, and a
prioritization scheme favoring specific UDP and TCP application traffic. These design requirements are
necessary to ensure both the fault-tolerant message synchronization of specific duplexed Cisco
Intelligent Contact Management (ICM) nodes (Central Controller and Peripheral Gateway) as well as
the delivery of time-sensitive system status data (agent states, call statistics, trunk information, and so
forth) across the system. Expeditious delivery of PG data to the Central Controller is necessary for
accurate call center state updates and fully accurate real-time reporting data.
In an IP Telephony environment, WAN and LAN traffic can be grouped into the following categories:
• Voice and video traffic
Voice calls (voice carrier stream) consist of Real-Time Transport Protocol (RTP) packets that
contain the actual voice samples between various endpoints such as PSTN gateway ports, IP IVR
Q-points (ports), and IP phones.
• Call control traffic
Call control consists of packets belonging to one of several protocols (H.323, MGCP, SCCP, or
TAPI/JTAPI), according to the endpoints involved in the call. Call control functions include those
used to set up, maintain, tear down, or redirect calls. For IPCC, control traffic includes routing and
service control messages required to route voice calls to peripheral targets (such as agents, skill
groups, or services) and other media termination resources (such as IP IVR ports) as well as the
real-time updates of peripheral resource status.
• Data traffic
Data traffic can include normal traffic such as email, web activity, and CTI database application
traffic sent to the agent desktops, such as screen pops and other priority data. IPCC priority data
includes data associated with non-real-time system states, such as events involved in reporting and
configuration updates.
This chapter focuses primarily on the types of data flows and bandwidth used between a remote
Peripheral Gateway (PG) and the ICM Central Controller (CC), on the network path between sides A
and B of a PG or of the Central Controller, and on the CTI flows between the desktop application and
CTI OS and/or Cisco Agent Desktop servers. Guidelines and examples are presented to help estimate
required bandwidth and, where applicable, provision QoS for these network segments.
The flows discussed encapsulate the latter two of the above three traffic groups. Because media (voice
and video) streams are maintained primarily between Cisco CallManager and its endpoints, voice and
video provisioning is not addressed here.
For bandwidth estimates for the voice RTP stream generated by the calls to IPCC agents and the
associated call control traffic generated by the various protocols, refer to the Cisco IP Telephony
Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
Data traffic consisting of various HTTP, email, and other non-IPCC mission critical traffic will vary
according to the specific integration and deployment model used, and this type of traffic is not addressed
in this chapter. For information on proper network design for data traffic, refer to the Network
Infrastructure and Quality of Service (QoS) documentation available at
http://www.cisco.com/go/srnd


Network Segments
The fault-tolerant architecture employed by IPCC requires two independent communication networks.
The private network (or dedicated path) carries traffic necessary to maintain and restore synchronization
between the systems and to allow clients of the Message Delivery Subsystem (MDS) to communicate.
The public network carries traffic between each side of the synchronized system and foreign systems.
The public network is also used as an alternate network by the fault-tolerance software to distinguish
between node failures and network failures.

Note The terms public network and visible network are used interchangeably throughout this document.

A third network, the signaling access network, may be deployed in ICM systems that also interface
directly with the carrier network (PSTN) and that deploy the Hosted ICM/IPCC architecture. The
signaling access network is not addressed in this chapter.
Figure 8-1 illustrates the fundamental network segments for an IPCC Enterprise system with two PGs
(with sides A and B co-located) and two geographically separated CallRouter servers.

Figure 8-1 Example of Public and Private Network Segments for an IPCC Enterprise System

[Figure: geographically separated CallRouter A and CallRouter B connected to each other over the CC/PG private network; co-located PG pairs (PG 1A/1B and PG 2A/2B) connect to both CallRouter sides over the public network and to their duplexed peers over private links]

The following notes apply to Figure 8-1:


• The private network carries ICM traffic between duplexed sides of the CallRouter or a PG pair. This
traffic consists primarily of synchronized data and control messages, and it also conveys the state
transfer necessary to re-synchronize duplexed sides when recovering from an isolated state. When
a router process and its logger process are deployed on separate nodes, most communication
between them is also over the private network.
• When deployed over a WAN, the private link is critical to the overall responsiveness of the Cisco
ICM, and it must meet aggressive latency requirements. The private link must provide sufficient
bandwidth to handle simultaneous synchronizer and state transfer traffic, and it must have enough
bandwidth left over for the case when additional data will be transferred as part of a recovery
operation. IP routers in the private network typically use priority queuing (based on the ICM private
high/non-high IP addresses and, for UDP heartbeats, port numbers) to ensure that high-priority ICM
traffic does not experience excessive queuing delay.
• The public network carries traffic between the Central Controller and call centers (PGs and AWs).
The public network can also serve as a Central Controller alternate path, used to determine which
side of the Central Controller should retain control in the event that the two sides become isolated
from one another. The public network is never used to carry synchronization control traffic.


• Remote call centers connect to each Central Controller side via the public network. Each WAN link
to a call center must have adequate bandwidth to support the PGs and AWs at the call center. The
IP routers in the public network use IP-based priority queuing or QoS to ensure that ICM traffic
classes are processed within acceptable tolerances for both latency and jitter.
• Call centers (PGs and AWs) local to one side of the Central Controller connect to the local Central
Controller side via the public Ethernet, and to the remote Central Controller side over public WAN
links. This arrangement requires that the public WAN network must provide connectivity between
side A and side B. Bridges may optionally be deployed to isolate PGs from the AW LAN segment
to enhance protection against LAN outages.
• To achieve the required fault tolerance, the private WAN link must be fully independent from the
public WAN links (separate IP routers, network segments or paths, and so forth). Independent WAN
links ensure that any single point of failure is truly isolated between public and private networks.
Additionally, public network WAN segments traversing a routed network must be deployed so that
PG-to-CallRouter route diversity is maintained throughout the network. Be sure to avoid routes that
result in common path selection (and, thus, a common point of failure) for the multiple
PG-to-CallRouter sessions (see Figure 8-1).

UDP Heartbeat and TCP Keep-Alive


The primary purpose of the UDP heartbeat design is to detect if a circuit has failed. Detection can be
made from either end of the connection, based on the direction of heartbeat loss. Both ends of a
connection send heartbeats at periodic intervals (typically every 100 or 400 milliseconds) to the opposite
end, and each end looks for analogous heartbeats from the other. If either end misses 5 heartbeats in a
row (that is, if a heartbeat is not received within a period that is 5 times the period between heartbeats),
then the side detecting this condition assumes that something is wrong and the application closes the
socket connection. At this point, a TCP Reset message is typically generated from the closing side. Loss of heartbeats can have several causes: a network failure, failure of the process sending the heartbeats, shutdown of the machine on which the sending process resides, improper prioritization of the UDP packets, and so forth.
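The failure-detection rule described above can be sketched in a few lines (a simplified illustration of the timing logic only, not ICM code; the 400-ms interval is the public-path default mentioned in this section):

```python
import time

HEARTBEAT_INTERVAL = 0.4   # seconds between heartbeats (public-path default)
MISS_THRESHOLD = 5         # missed heartbeats before the circuit is declared failed

class HeartbeatMonitor:
    """Declares a circuit failed if no heartbeat arrives within
    MISS_THRESHOLD * HEARTBEAT_INTERVAL seconds of the last one."""

    def __init__(self, now=None):
        self.last_heartbeat = now if now is not None else time.monotonic()

    def heartbeat_received(self, now=None):
        self.last_heartbeat = now if now is not None else time.monotonic()

    def circuit_failed(self, now=None):
        now = now if now is not None else time.monotonic()
        return (now - self.last_heartbeat) > MISS_THRESHOLD * HEARTBEAT_INTERVAL

# With 400-ms heartbeats, failure is detected after 2 seconds of silence.
monitor = HeartbeatMonitor(now=0.0)
print(monitor.circuit_failed(now=1.9))  # False: still within the 2-second window
print(monitor.circuit_failed(now=2.1))  # True: more than 5 intervals missed
```

The same logic with a 100-ms interval yields the 500-ms detection time quoted for the central sites.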
There are several parameters associated with heartbeats. In general, you should leave these parameters
set to their system default values. Some of these values are specified when a connection is established,
while others can be specified by setting values in the Microsoft Windows 2000 registry. The two values
of most interest are:
• The amount of time between heartbeats
• The number of missed heartbeats (currently hard-coded as 5) that the system uses to determine
whether a circuit has apparently failed
The default value for the heartbeat interval is 100 milliseconds between the central sites, meaning that
one site can detect the failure of the circuit or the other site within 500 ms. Prior to ICM Release 5.0, the
default heartbeat interval between a central site and a peripheral gateway was 400 ms, meaning that the
circuit failure threshold was 2 seconds in this case.
In ICM Releases 5.0 and 6.0, as a part of the ICM QoS implementation, the UDP heartbeat is replaced
by a TCP keep-alive message in the public network connecting a Central Controller to a Peripheral
Gateway. (An exception is that, when an ICM Release 5.0 or 6.0 Central Controller talks to a PG that is
prior to Release 5.0, the communication automatically reverts to the UDP mechanism.) Note that the
UDP heartbeat remains unchanged in the private network connecting duplexed sites.
The TCP keep-alive feature, provided by the TCP stack, detects connection inactivity and causes the server or client side to terminate the connection. It operates by sending probe packets (keep-alive packets) across a connection after the connection has been idle for a certain period; the connection is considered down if a keep-alive response from the other side is not heard. Microsoft Windows 2000 allows you to specify keep-alive parameters on a per-connection basis. For ICM public connections, the keep-alive timeout is set to 5 × 400 ms, meaning that a failure can be detected after 2 seconds, as was the case with the UDP heartbeat prior to Release 5.0.
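Per-connection keep-alive tuning of this kind can be illustrated with standard socket options (shown here with Linux option names purely as an example; this is not ICM code, and the document itself describes the Windows 2000 per-connection mechanism):

```python
import socket

# Enable per-connection TCP keep-alive and, where the platform exposes them,
# tune the probe parameters so a dead peer is declared after 5 missed probes.
# Linux expresses the idle time and probe interval in whole seconds, so the
# 400-ms ICM value cannot be reproduced exactly; 1-second values are used
# here only for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 1)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)    # unanswered probes before failure
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
sock.close()
```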
The reasons for moving to TCP keep-alive are as follows:
• The use of UDP heartbeats creates deployment complexities in a firewall environment. The dynamic
port allocation for heartbeat communications makes it necessary to open a large range of port
numbers, thus defeating the original purpose of the firewall device.
• In a converged network, algorithms used by routers to handle network congestion conditions have
different effects on TCP and UDP. As a result, delays and congestion experienced by UDP heartbeat
traffic can have, in some cases, little correspondence to the TCP connections.

IP-Based Prioritization and QoS


Simply stated, traffic prioritization is needed because it is possible for large amounts of low-priority
traffic to get in front of high-priority traffic, thereby delaying delivery of high-priority packets to the
receiving end. In a slow network flow, the amount of time a single large (for example, 1500-byte) packet
consumes on the network (and delays subsequent packets) can exceed 100 ms. This delay would cause
the apparent loss of one or more heartbeats. To avoid this situation, a smaller Maximum Transmission
Unit (MTU) size is used by the application for low-priority traffic, thereby allowing a high-priority
packet to get on the wire sooner. (MTU size for a circuit is calculated from within the application as a
function of the circuit bandwidth, as configured at PG setup.)
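The serialization-delay arithmetic behind this MTU adjustment can be checked directly (a simple illustration; the 64-kbps link speed and 256-byte MTU are example values, not product settings):

```python
def serialization_delay_ms(packet_bytes: int, link_kbps: float) -> float:
    """Time a single packet occupies the wire, in milliseconds."""
    return (packet_bytes * 8) / link_kbps  # bits / (kbits per second) = ms

# A 1500-byte packet on a 64-kbps link delays everything queued behind it by
# ~187 ms, well over the 100-ms heartbeat interval; a smaller 256-byte MTU
# keeps the delay near 32 ms.
print(serialization_delay_ms(1500, 64))  # 187.5
print(serialization_delay_ms(256, 64))   # 32.0
```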
A network that is not prioritized correctly almost always leads to call time-outs and problems from loss
of heartbeat as the application load increases or (worse) as shared traffic is placed on the network. A
secondary effect often seen is application buffer pool exhaustion on the sending side, due to extreme
latency conditions.
ICM applications use three priorities – high, medium, and low. However, prior to QoS, the network
effectively recognized only two priorities identified by source and destination IP address (high-priority
traffic was sent to a separate IP destination address) and, in the case of UDP heartbeats, by specific UDP
port range in the network. The approach with IP-based prioritization is to configure IP routers with
priority queuing in a way that gives preference to TCP packets with a high-priority IP address and to
UDP heartbeats over the other traffic.
A QoS-enabled network applies prioritized processing (queuing, scheduling, and policing) to packets
based on QoS markings as opposed to IP addresses. ICM Release 6.0 provides marking capability of
both Layer-3 DSCP and Layer-2 802.1p (using the Microsoft Windows Packet Scheduler) for public
network traffic. Traffic marking implies that configuring dual IP addresses on the public Network
Interface Controller (NIC) is no longer necessary if the public network is aware of QoS markings.
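As a generic illustration of Layer-3 DSCP marking at the application layer (a sketch only: it sets the TOS byte directly on a UDP socket, which is not how ICM performs its marking, and the AF31 codepoint is an assumed example value, not a stated ICM default):

```python
import socket

AF31 = 26  # example DSCP codepoint (assumed for illustration only)

# DSCP occupies the upper 6 bits of the IP TOS byte, so shift left by 2
# to leave the 2 ECN bits clear.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF31 << 2)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 104
sock.close()
```

Routers in a QoS-enabled network then queue and schedule the packet according to this marking rather than its source or destination IP address.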


Traffic Flow
This section briefly describes the traffic flows for the public and private networks.

Public Network Traffic Flow


The active PG continuously updates the Central Controller call routers with state information related to
agents, calls, queues, and so forth, at the respective call center sites. This type of PG-to-Central
Controller traffic is real-time traffic. The PGs also send up historical data each half hour on the half hour.
The historical data is low-priority, but it must complete its journey to the central site within the half hour
(to get ready for the next half hour of data).
When a PG starts, its configuration data is supplied from the central site so that it can know which
agents, trunks, and so forth it has to monitor. This configuration download can be a significant network
bandwidth transient.
In summary, traffic flows from PG to Central Controller can be classified into the following distinct
flows:
• High-priority traffic — Includes routing and Device Management Protocol (DMP) control traffic. It
is sent in TCP with the public high-priority IP address.
• Heartbeat traffic — UDP messages with the public high-priority IP address and in the port range of
39500 to 39999. Heartbeats are transmitted at 400-ms intervals bidirectionally between the PG and
the Central Controller. The UDP heartbeat traffic does not exist unless the Central Controller talks
to a PG that is prior to Release 5.0.
• Medium-priority traffic — Includes real-time traffic and configuration requests from the PG to the
Central Controller. The medium-priority traffic is sent in TCP with the public high-priority IP
address.
• Low-priority traffic — Includes historical data traffic, configuration traffic from the Central Controller, and call close notifications. The low-priority traffic is sent in TCP with the public non-high-priority IP address.
Administrative Workstations (AWs) are typically deployed at ACD sites, and they share the physical
WAN/LAN circuits that the PGs use. When this is the case, network activity for the AW must be factored
into the network bandwidth calculations. This document does not address bandwidth sizing for AW
traffic.

Private Network Traffic Flow


Traffic destined for the critical Message Delivery Service (MDS) client (Router or OPC) is copied to the
other side over the private link.
The private traffic can be summarized as follows:
• High-priority traffic — Includes routing, MDS control traffic, and other traffic from MDS client
processes such as the PIM CTI Server, Logger, and so forth. It is sent in TCP with the private
high-priority IP address.
• Heartbeat traffic — UDP messages with the private high-priority IP address and in the port range of
39500 to 39999. Heartbeats are transmitted at 100-ms intervals bidirectionally between the duplexed
sides.


• Medium-priority and low-priority traffic — For the Central Controller, this traffic includes shared
data sourced from routing clients as well as (non-route control) call router messages, including call
router state transfer (independent session). For the OPC (PG), this traffic includes shared non-route
control peripheral and reporting traffic. This class of traffic is sent in TCP sessions designated as
medium-priority and low-priority, respectively, with the private non-high priority IP address.
• State transfer traffic — State synchronization messages for the Router, OPC, and other synchronized
processes. It is sent in TCP with a private non-high-priority IP address.

Bandwidth and Latency Requirements


The amount of traffic sent between the Central Controllers (call routers) and Peripheral Gateways is
largely a function of the call load at that site, although transient boundary conditions (for example,
startup configuration load) and specific configuration sizes also affect the amount of traffic. A rule of
thumb that works well for ICM software releases prior to 5.0 in steady-state operation is that 1,000 bytes
(8 kb) of data are typically sent from a PG to the Central Controller for each call that arrives at a
peripheral. Therefore, a peripheral handling 10 calls per second sends approximately 10,000 bytes
(80 kb) of data per second to the Central Controller. The majority of this data is sent on the low-priority
path. The ratio of low to high path bandwidth varies with the characteristics
of the deployment (most significantly, the degree to which post-routing is performed), but generally it
is roughly 10 to 30 percent. Each post-route request generates between 200 and 300 additional bytes of
data on the high-priority path. Translation routes incur per-call data flowing in the opposite direction
(CallRouter to PG), and the size of this data is fully dependent upon the amount of call context presented
to the desktop.
A site that has an ACD as well as a VRU has two peripherals, and the bandwidth requirement
calculations should take both peripherals into account. As an example, a site that has 4 peripherals, each
taking 10 calls per second, should generally be configured to have 320 kbps of bandwidth. The 1,000
bytes per call is a rule of thumb, but the actual behavior should be monitored once the system is
operational to ensure that enough bandwidth exists. (ICM meters data transmission statistics at both the
CallRouter and PG sides of each path.)
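As an illustration, the rule of thumb and example above can be expressed as a short calculation. This is only a sketch of the pre-5.0 planning estimate; the function and constant names are illustrative, and actual traffic should still be monitored once the system is operational.

```python
# Rough PG-to-Central-Controller bandwidth estimate using the pre-5.0
# rule of thumb of ~1,000 bytes sent per call arriving at a peripheral.

BYTES_PER_CALL = 1000  # planning estimate, not a measured value

def pg_to_cc_bandwidth_kbps(peripherals, calls_per_second_each):
    """Steady-state site bandwidth in kbps toward the Central Controller."""
    bytes_per_second = peripherals * calls_per_second_each * BYTES_PER_CALL
    return bytes_per_second * 8 / 1000  # convert bytes/s to kilobits/s

# The example from the text: 4 peripherals, each handling 10 calls per second.
print(pg_to_cc_bandwidth_kbps(4, 10))  # 320.0 (kbps)
```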
Again, the rule of thumb and example described here apply to ICM releases prior to 5.0, and they are
stated here for reference purpose only. Two bandwidth calculators are supplied for ICM releases 5.0 and
6.0, and they can project bandwidth requirements far more accurately. See Bandwidth Sizing, page 8-10,
for more details.
As with bandwidth, specific latency requirements must be guaranteed in order for the ICM to function
as designed. The side-to-side private network of duplexed CallRouter and PG nodes has a maximum
one-way latency of 100 ms (50 ms preferred). The PG-to-CallRouter path has a maximum one-way
latency of 200 ms in order to perform as designed. Meeting these latency requirements is
particularly important in an environment using ICM post-routing and/or translation routes.
As discussed previously, ICM bandwidth and latency design is fully dependent upon an underlying IP
prioritization scheme. Without proper prioritization in place, WAN connections will fail. The Cisco ICM
support team has custom tools (for example, Client/Server) that can be used to demonstrate proper
prioritization and to perform some level of bandwidth utilization modeling for deployment certification.
Depending upon the final network design, an IP queuing strategy will be required in a shared network
environment to achieve ICM traffic prioritization concurrent with other non-ICM traffic flows. This
queuing strategy is fully dependent upon traffic profiles and bandwidth availability. As discussed
earlier, success in a shared network cannot be guaranteed unless the stringent bandwidth, latency, and
prioritization requirements of the product are met.


Network Provisioning
This section covers:
• Quality of Service, page 8-8
• Bandwidth Sizing, page 8-10
• Bandwidth Requirements for CTI OS Agent Desktop, page 8-11
• Bandwidth Requirements for Cisco Agent Desktop Release 6.0, page 8-13

Quality of Service
This section covers:
• QoS Planning, page 8-8
• Public Network Marking Requirements, page 8-8
• Configuring QoS on IP Devices, page 8-9
• Performance Monitoring, page 8-10

QoS Planning
In planning QoS, a question often arises about whether to mark traffic in the application or at the network
edge. Marking traffic in the application avoids the need for access lists to classify traffic in IP routers
and switches, and it might be the only option if traffic flows cannot be differentiated by IP address, port,
or other TCP/IP header fields. As mentioned earlier, ICM currently supports DSCP markings on the
public network connection between the Central Controller and the PG. Additionally, when deployed
with Microsoft Windows Packet Scheduler, ICM offers shaping and 802.1p. The shaping functionality
mitigates the bursty nature of ICM transmissions by smoothing transmission peaks over a given time
period, thereby smoothing network usage. The 802.1p capability, a LAN QoS handling mechanism,
allows high-priority packets to enter the network ahead of low-priority packets in a congested Layer-2
network segment.
Traffic can be marked or remarked on edge routers and switches if it is not marked at its source, or if
QoS trust is disabled. Trust is typically disabled to prevent non-priority users in the network from
falsely setting the DSCP or 802.1p values of their packets to inflated levels so that they receive priority
service. For classification criteria definitions on edge routers and switches, see Table 8-1.

Public Network Marking Requirements


The ICM QoS markings are set in compliance with Cisco IP Telephony recommendations but can be
overwritten if necessary. Table 8-1 shows the default markings of public network traffic, latency
requirement, IP address, and port associated with each priority flow.
For details about Cisco IP Telephony packet classifications, refer to the Cisco IP Telephony Solution
Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd


Note Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to
DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31).
Therefore, in the interim, Cisco recommends that you reserve both AF31 and CS3 for call signaling.

Table 8-1 Public Network Traffic Markings (Default) and Latency Requirements

Priority   IP Address and Port                   Latency       DSCP / 802.1p Using   DSCP / 802.1p Bypassing
                                                 Requirement   Packet Scheduler      Packet Scheduler
High       High-priority public IP address       200 ms        AF31 / 3              AF31 / 3
           and high-priority connection port
Medium     High-priority public IP address       1,000 ms      AF31 / 3              AF21 / 2
           and medium-priority connection port
Low        Non-high-priority public IP address   5 seconds     AF11 / 1              AF11 / 1
           and low-priority connection port
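For reference, the PHB names in Table 8-1 map to numeric DSCP values by a fixed rule: AFxy corresponds to DSCP 8x + 2y, and CSx to DSCP 8x. The sketch below is illustrative code (not part of any Cisco tool) that derives the codepoints mentioned in this section.

```python
# Numeric DSCP values behind the PHB names used in Table 8-1 and the note
# above: AFxy encodes class x and drop precedence y; CSx is a class selector.

def af(cls, drop):
    return 8 * cls + 2 * drop

def cs(cls):
    return 8 * cls

markings = {
    "AF31": af(3, 1),  # 26 -- high/medium-priority ICM traffic
    "AF21": af(2, 1),  # 18 -- medium priority when bypassing Packet Scheduler
    "AF11": af(1, 1),  # 10 -- low-priority ICM traffic
    "CS3":  cs(3),     # 24 -- newer call-signaling marking
}
for name, dscp in markings.items():
    # The DSCP occupies the six most significant bits of the IP ToS byte.
    print(f"{name}: DSCP {dscp}, ToS byte 0x{dscp << 2:02X}")
```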

Configuring QoS on IP Devices


This section presents some representative QoS configuration examples. For details about Cisco campus
network design, switch selection, and QoS configuration commands, refer to the Cisco Enterprise
Campus documentation available at
http://www.cisco.com/en/US/netsol/ns340/ns394/ns431/networking_solutions_packages_list.html

Note The terms public network and visible network are used interchangeably throughout this document.

Configuring 802.1q Trunks on IP Switches


If 802.1p is an intended feature and 802.1p tagging is enabled on the visible network NIC, the
switch port into which the ICM server plugs must be configured as an 802.1q trunk, as illustrated in the
following configuration example:
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan [data/native VLAN #]
switchport voice vlan [voice VLAN #]
switchport priority extend trust
spanning-tree portfast

Configuring QoS trust


Assuming ICM DSCP markings are trusted, the following commands enable trust on an IP switch port:
mls qos
interface mod/port
mls qos trust dscp


Configuring QoS Class to Classify Traffic


If the ICM traffic comes with two marking levels, AF31 for high and AF11 for non-high, the following
class maps can be used to identify the two levels:
class-map match-all ICM_Visible_High
match ip dscp af31
class-map match-all ICM_Visible_NonHigh
match ip dscp af11

Configuring QoS Policy to Act on Classified Traffic


The following policy map places ICM high-priority traffic into the priority queue, which guarantees
(and polices to) 500 kbps. The non-high-priority traffic is allocated a minimum bandwidth of 250 kbps.
policy-map Queuing_T1
class ICM_Visible_High
priority 500
class ICM_Visible_NonHigh
bandwidth 250

Apply QoS Policy to Interface


The following commands apply the QoS policy to an interface in the outbound direction:
interface mod/port
service-policy output Queuing_T1

Performance Monitoring
Once the QoS-enabled processes are up and running, the Microsoft Windows Performance Monitor
(PerfMon) can be used to track the performance counters associated with the underlying links. For
details on using PerfMon for this purpose, refer to the Cisco ICM Enterprise Edition Administration
Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/coreicm6/config60/index.htm

Bandwidth Sizing
This section briefly describes bandwidth sizing for the public (visible) and private networks.

IPCC Private Network Bandwidth


Because IPCC typically dedicates a network segment to the private path flows (both Central Controller
and PG), a specific bandwidth calculation for that segment is usually not required, except when
clustering IPCC over the WAN. Cisco therefore does not provide a bandwidth calculator for this
purpose. A rule of thumb is to provide a minimum of a T1 link for the Central Controller private path
and a minimum of a T1 link for the PG private path.


IPCC Public Network Bandwidth


Special tools are available to help calculate the bandwidth needed for the following public network links:

ICM Central Controller to Cisco CallManager PG


A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between
the ICM Central Controller and Cisco CallManager. This tool is called the ACD/CallManager
Peripheral Gateway to ICM Central Controller Bandwidth Calculator, and it is available (with proper
login authentication) through the Cisco Steps to Success Portal at
http://tools.cisco.com/s2slv2/viewProcessFlow.do?method=browseStepsPage&modulename=browse&stepKeyId=55|EXT-AS-107287|EXT-AS-107288|EXT-AS-107301&isPreview=null&prevTechID=null&techName=IP%20Communications

ICM Central Controller to IP IVR or ISN PG


A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between
the ICM Central Controller and the IP IVR PG. This tool is called the VRU Peripheral Gateway to ICM
Central Controller Bandwidth Calculator, and it is available (with proper login authentication) through
the Cisco Steps to Success Portal at
http://tools.cisco.com/s2slv2/viewProcessFlow.do?method=browseStepsPage&modulename=browse&stepKeyId=55|EXT-AS-107287|EXT-AS-107288|EXT-AS-107301&isPreview=null&prevTechID=null&techName=IP%20Communications
At this time, no tool exists that specifically addresses communications between the ICM Central
Controller and the ISN PG. Testing has shown, however, that the tool for calculating bandwidth needed
between the ICM Central Controller and the IP IVR PG will also produce accurate measurements for
ISN if you perform the following substitution in one field:
For the field labeled Average number of RUN VRU script nodes, substitute the number of ICM script
nodes that interact with ISN.

Bandwidth Requirements for CTI OS Agent Desktop


This section addresses the traffic and bandwidth requirements between CTI OS Agent Desktop and the
CTI OS server. These requirements are important in provisioning the network bandwidth and QoS
required between the agents and the CTI OS server, especially when the agents are remote over a WAN
link. Even if the agents are local over Layer 2, it is important to account for the bursty traffic that occurs
periodically because this traffic presents a challenge to bandwidth and QoS allocation schemes and can
impact other mission-critical traffic traversing the network.

CTI OS Client/Server Traffic Flows and Bandwidth Requirements


CTI OS (Releases 4.6.2, 5.x, and 6.x) sends agent skill group statistics automatically every 10 seconds
to all agents. This traffic presents a challenge to bandwidth and QoS allocation schemes in the case of
centralized call processing with remote IPCC agents over a WAN link.
The statistics are carried in the same TCP connection as agent screen pops and control data.
Additionally, transmission is synchronized across all agents logged into the same CTI OS server. This
transmission results in an order-of-magnitude traffic spike every 10 seconds, affecting the same traffic
queue as the agent control traffic.


The network bandwidth requirements increase linearly as a function of agent skill group membership.
The 10-second skill group statistics are the most significant sizing criterion for network capacity, while
the effect of system call control traffic is a relatively small component of the overall network load.
CTI OS provides a bandwidth calculator that examines bandwidth requirements for communications
between the CTI OS Server and the CTI OS Desktop. It calculates Total Bandwidth, Agent Bandwidth,
and Supervisor Bandwidth requirements. This calculator does not take into account the RTP and
multimedia messages; it calculates the bandwidth based on the control flow between the CTI OS Server
and CTI OS Client only. If one site has multiple CTI OS Servers and each server has dedicated agents,
then the bandwidth calculation must be done separately for each CTI OS Server, and the results added
together to derive the total bandwidth of the whole site. The CTI OS Bandwidth Calculator is available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/bandcalc/index.htm

Best Practices and Options for CTI OS Server and CTI OS Agent Desktop
To mitigate the bandwidth demands, use any combination of the following options:

Configure Fewer Statistics


CTI OS allows the system administrator to specify, in the registry, the statistics items that are sent to all
CTI OS clients. The choice of statistics affects the size of each statistics packet and, therefore, the
network traffic. Configuring fewer statistics will decrease the traffic sent to the agents. The statistics
cannot be specified on a per-agent basis at this time. For more information on agent statistics, refer to
the CTI OS System Manager's Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6cti/ctios60/

Install Another CTI OS Server at the Remote Branch


The bandwidth required between the CTI OS server at the central site and the CTI OS server at the
remote site in this scenario is a fraction of the bandwidth that would be required if each remote agent
had to access the one central CTI OS server every time. This bandwidth (between the CTI OS server at
the central site and the CTI OS server at the remote site) can be calculated as follows:
(3,000 bytes per call) ∗ (8 bits per byte) ∗ (Calls per second) = 24 kbps ∗ (Calls per second)
For example, if the call center (all agents, not just remote ones) handles 3600 BHCA (which equates to
1 call per second), then the WAN link bandwidth required to any remote branch, regardless of the
number of remote agents, would be only 24 kbps. This traffic flow should be prioritized and marked as
AF21 or AF11. Any other traffic traversing the link should be added to the bandwidth calculations as
well and should be marked with proper classification.
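The server-to-server formula above reduces to a simple function of busy-hour call attempts. The sketch below assumes the 3,000-bytes-per-call figure from this section; function names are illustrative.

```python
# WAN bandwidth between a central CTI OS server and a remote CTI OS server,
# based on the ~3,000 bytes of server-to-server traffic per call cited above.

def ctios_server_link_kbps(bhca):
    """Average link bandwidth in kbps for a busy-hour call attempt (BHCA) load."""
    calls_per_second = bhca / 3600
    return 3000 * 8 / 1000 * calls_per_second  # 24 kbps per call per second

print(ctios_server_link_kbps(3600))  # 24.0 (kbps), the example in the text
```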

Turn Off Statistics on a Per-Agent Basis


You can turn off statistics on a per-agent basis by using different connection profiles. For example, if
remote agents use a connection profile with statistics turned off, these client connections would have no
statistics traffic at all between the CTI OS Server and the Agent or Supervisor Desktop. This option
could eliminate the need for a separate CTI OS Server in remote locations.
A remote supervisor or selected agents can still receive statistics by using a different connection profile
with statistics enabled, if more limited statistics traffic is acceptable for the remote site.
In the case where remote agents have their skill group statistics turned off but the supervisor would like
to see the agent skill group statistics, the supervisor could use a different connection profile with
statistics turned on. In this case, the volume of traffic sent to the supervisor would be considerably less.
For each skill group and agent (or supervisor), the packet size for a skill-group statistics message is
fixed. So an agent in two skill groups would get two packets, and a supervisor observing five skill groups
would get five packets. If we assume 10 agents at the remote site and one supervisor, all with the same
two skill groups configured (in IPCC, the supervisor sees all the statistics for the skill groups to which
any agent in his agent team belongs), then this approach would reduce skill-group statistics traffic by
90% if only the supervisor has statistics turned on to observe the two skill groups but agents have
statistics turned off.
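Because each (recipient, skill group) pair produces one fixed-size statistics packet per refresh interval, the traffic reduction in the example above can be counted directly. This is an illustrative sketch of the arithmetic, not a Cisco tool.

```python
# Skill-group statistics packets per 10-second refresh interval: one packet
# per skill group for each desktop that has statistics enabled.

def stats_packets(agents, agent_skills, supervisors, supervisor_skills):
    return agents * agent_skills + supervisors * supervisor_skills

everyone_on = stats_packets(10, 2, 1, 2)     # all desktops receive statistics
supervisor_only = stats_packets(0, 2, 1, 2)  # agents' statistics turned off
print(everyone_on, supervisor_only)          # 22 packets vs. 2 packets
print(1 - supervisor_only / everyone_on)     # ~0.91, the ~90% reduction cited
```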
Also, at the main location, if agents want to have their skill-group statistics turned on, they could do so
without impacting the traffic to the remote location if the supervisor uses a different connection profile.
Again, in this case no additional CTI OS servers would be required.
In the case where there are multiple remote locations, assuming only supervisors need to see the
statistics, it would be sufficient to have only one connection profile for all remote supervisors.

Turn Off All Skill Group Statistics in CTI OS


If skill group statistics are not required, turn them all off. Doing so removes all statistics traffic from
the connections between the CTI OS Server and the Agent or Supervisor Desktop.

Bandwidth Requirements for Cisco Agent Desktop Release 6.0


This section describes the bandwidth requirements for the Cisco Agent Desktop and Supervisor Desktop
applications and the network on which they run. All call scenarios and data presented in this section were
tested using the Cisco Agent Desktop software phone (softphone). The reported bandwidth usage
represents the total number of bytes sent for the specific scenario. It includes bandwidth for call control
and any CTI events returned from the CTI service. By default, all communication between Cisco
Desktop applications and the CTI OS server occurs through server port 42028.

Call Control Bandwidth Usage


This section lists bandwidth usage data for the following types of call control communications between
Cisco Agent Desktop and the CTI OS Server:
• Heartbeats and Skill Statistics, page 8-13
• Agent State Change, page 8-14
• Typical Call Scenario, page 8-15

Heartbeats and Skill Statistics

Table 8-2 shows the bandwidth usage between Cisco Agent Desktop and the CTI OS and Cisco Agent
Desktop servers for heartbeats and skill statistics. This type of data is passed to and from logged-in
agents at set intervals, regardless of what the agent is doing. The refresh interval for these skill group
statistics was the default setting of 10 seconds. This refresh interval can be configured in CTI OS. Skill
group statistics were also configured in CTI OS as described in the Cisco Agent Desktop Installation
Guide, available at
http://www.cisco.com


Table 8-2 Bandwidth Usage for Heartbeats and Skill Statistics (Bytes Per Second)

                                   To Cisco Agent Desktop     From Cisco Agent Desktop
Server                             1 Skill     5 Skills       1 Skill     5 Skills
CTI OS                             49          234            7           28
Cisco Agent Desktop Base           2           2              2           2
Cisco Agent Desktop Recording      0           0              0           0
Cisco Agent Desktop VoIP Monitor   0           0              0           0
Total                              51          236            9           30

Bandwidth from CTI OS to Cisco Agent Desktop:


2.1 Bps + (Number of skills ∗ 46.4 Bps)
Bandwidth from Cisco Agent Desktop to CTI OS:
1.1 Bps + (Number of skills ∗ 5.4 Bps)

Example
If there are 25 remote agents with 10 skills per agent, the number of bytes per second (Bps) sent from
the CTI OS server to those desktops across the WAN can be calculated as follows:
25 agents ∗ (2.1 Bps + (10 skills ∗ 46.4 Bps)) = 11,653 Bps
11,653 Bps ∗ 8 bits per byte = 93,220 average bits per second = 93.22 kilobits per second (kbps)
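The formulas above can be combined into a short helper for aggregate WAN sizing. This sketch simply restates the CTI OS-to-desktop formula from this section; function names are illustrative.

```python
# Aggregate heartbeat/skill-statistics load from CTI OS to remote desktops,
# using the per-agent formula above: 2.1 Bps + 46.4 Bps per skill.

def ctios_to_desktops_bps(agents, skills_per_agent):
    """Average bytes per second from the CTI OS server to all agent desktops."""
    return agents * (2.1 + skills_per_agent * 46.4)

bps = ctios_to_desktops_bps(25, 10)  # the 25-agent, 10-skill example
print(bps)                           # ~11,652.5 Bps (the text rounds to 11,653)
print(bps * 8 / 1000)                # ~93.22 kbps across the WAN
```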

Agent State Change

Table 8-3 lists the total bytes of data sent when an agent changes state from Ready to Not Ready and
enters a reason code.

Table 8-3 Bytes of Data Used for Agent State Change

                                   To Cisco Agent Desktop     From Cisco Agent Desktop
Server                             1 Skill     5 Skills       1 Skill     5 Skills
CTI OS                             2043        6883           523         739
Cisco Agent Desktop Base           268         268            638         638
Cisco Agent Desktop Recording      0           0              0           0
Cisco Agent Desktop VoIP Monitor   0           0              0           0
Total                              2311        7151           1161        1377

Example
If there are 25 remote agents with 5 skills per agent, each of whom changes agent state one time, the
total number of bytes sent from the CTI OS server to Cisco Agent Desktop is:
25 ∗ 6883 = 172,075 bytes


Typical Call Scenario

Table 8-4 lists the total bytes of data required for a typical call scenario. For this call scenario, Cisco
Agent Desktop is used to perform the following functions:
• Transition an agent from the work ready state to the ready state.
• Answer an incoming ACD call using the softphone controls.
• Put the agent in a work ready state.
• Hang up the call using the softphone controls.
• Select wrap-up data.
This scenario includes presenting Expanded Call Context (ECC) variables to the agent. Each ECC
variable is 20 bytes in length, assuming a worst-case scenario.

Table 8-4 Bytes of Data Used for a Typical Call Scenario

                                   To Cisco Agent Desktop                From Cisco Agent Desktop
                                   1 Skill          5 Skills             1 Skill          5 Skills
Service                            1 ECC   5 ECCs   1 ECC   5 ECCs       1 ECC   5 ECCs   1 ECC   5 ECCs
CTI OS                             19683   20199    30804   31263        2371    2371     2749    2749
Cisco Agent Desktop Base           4274    5882     4674    5942         6716    6832     6726    6894
Cisco Agent Desktop Recording      0       0        0       0            0       0        0       0
Cisco Agent Desktop VoIP Monitor   0       0        0       0            0       0        0       0
Total                              23957   26081    35478   37205        9087    9203     9475    9643

Example
Assume there are 25 remote agents with 5 skills and 5 ECC variables, who each answer 20 calls in the
busy hour. Also assume a full-duplex network, and use the larger of the To/From bandwidth numbers,
which is 37,205 bytes in this case.
37,205 bytes per call ∗ 25 agents ∗ 20 calls per hour = 18,602,500 bytes per hour.
(18,602,500 bytes per hour) / (3600 seconds per hour) = 5,167 bytes per second (Bps)
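The busy-hour example above generalizes to any agent count and call rate; the sketch below reproduces it using the worst-direction figure from Table 8-4 (37,205 bytes per call for 5 skills and 5 ECC variables). Names are illustrative.

```python
# Average call-control bandwidth between CTI OS and remote desktops during
# the busy hour, using the per-call byte counts from Table 8-4.

def call_control_bps(agents, calls_per_agent_per_hour, bytes_per_call):
    """Average bytes per second over the busy hour."""
    bytes_per_hour = bytes_per_call * agents * calls_per_agent_per_hour
    return bytes_per_hour / 3600

bps = call_control_bps(25, 20, 37205)  # the example scenario above
print(round(bps))                      # ~5167 bytes per second
```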

Note Access to LDAP is not included in the calculation because both Cisco Agent Desktop and Cisco
Supervisor Desktop read their profiles only once, at startup, and then cache them. The numbers in this
example are based not on calls in progress but on calls attempted or completed. The amount of
bandwidth used is per call and does not depend on the length of the call (a 1-minute call and a 10-minute
call typically use the same amount of bandwidth, excluding voice traffic). The example does not take
into account the additional traffic generated if calls are transferred, held, or conferenced.

Be sure to mark RTP packets for monitoring, recording, and playback, in addition to other required RTP
and signaling marking. For details on traffic marking, refer to the Cisco IP Telephony Solution
Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd


Silent Monitoring Bandwidth Usage


Starting with Cisco Agent Desktop Release 4.6, desktop monitoring was introduced as a new feature.
Instead of using a centralized VoIP Monitor service, each Cisco Agent Desktop installation includes a
miniature VoIP service called the Desktop Monitor service. This service is responsible for all silent
monitoring and recording requests for the agent logged in on that desktop.
The bandwidth requirements for the Desktop Monitor service are identical to those of the VoIP Monitor
service from the standpoint of monitor requests, but the number of requests sent to the Desktop Monitor
service is much lower.
It is possible to have multiple silent monitoring requests for the same agent extension from different
Cisco Agent Desktop supervisors. In that case, each monitor request requires the bandwidth of an
additional call for the desktop. Unlike the VoIP Monitor service, the maximum number of recording
requests that can be sent to the Desktop Monitor is one.
The maximum number of simultaneous monitoring and recording requests is 21 (one monitoring request
from each of the 20 allowed supervisors per team, plus one recording request). In practice, there are
usually no more than 3 to 5 simultaneous monitoring/recording requests at any one time.
For the purposes of this discussion, 5 simultaneous monitoring/recording sessions are used to calculate
the average bandwidth requirements for a single Cisco Agent Desktop installation to support desktop
monitoring.

Silent Monitoring IP Traffic Flow

Figure 8-2 shows a main office and a remote office. The main office contains the various Cisco Desktop
services and the switch shared with the remote office. Both the main office and the remote office have
Cisco Agent Desktop agents and supervisors. In this diagram, all agents and supervisors belong to the
same logical contact center (LCC) and are on the same team.

Figure 8-2 Contact Center Diagram

[Figure 8-2 shows a main office LAN hosting the CallManager, ICM, and IP IVR services, with
Supervisor A and Agent A on IP phones, connected through routers across a WAN to a remote office
LAN with Supervisor B and Agent B.]


In the main office, agents and supervisors use IP phones. In the remote office, agents and supervisors
use media termination softphones.

Bandwidth Requirements for Monitor Services to Cisco Supervisor Desktop

The amount of traffic between the monitor services and the monitoring supervisor is equal to the
bandwidth of one IP phone call (two RTP streams of data). (Monitor services refers to both the VoIP
Monitor service and the Desktop Monitor service.)
When calculating bandwidth, you must use the size of the RTP packet plus the additional overhead of
the networking protocols used to transport the RTP data through the network.
G.711 packets carrying 20 ms of speech data require 64 kbps of network bandwidth. (See Table 8-5.)
These packets are encapsulated by four layers of networking protocols (RTP, UDP, IP, and Ethernet).
Each of these protocols adds its own header information to the G.711 data. As a result, the G.711 data,
once packed into an Ethernet frame, requires 87.2 kbps of bandwidth per data stream as it travels over
the network. An IP phone call consists of two streams, one from A to B and one from B to A. For an IP
phone call using the G.711 codec, both streams require 87.2 kbps.

Table 8-5 Bandwidth Requirements for Two Streams of Data

CODEC                               Average kbps Per          Maximum kbps Per
                                    Monitoring Supervisor     Monitoring Supervisor (1)
G.711                               174.4                     174.4
G.711 with silence suppression      61                        174.4
G.729                               62.4                      62.4
G.729 with silence suppression      21.8                      62.4
1. Maximum instantaneous bandwidth. When silence suppression is used on a physical channel that has fixed capacity, you must
consider this metric because, when a voice signal is present, all of the maximum bandwidth is needed.
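The 87.2-kbps figure can be reconstructed from the packet sizes involved. The sketch below assumes an 18-byte Ethernet overhead (14-byte header plus 4-byte FCS); preamble, interframe gap, or 802.1Q tagging would add slightly more on the wire.

```python
# Derivation of the 87.2 kbps per-stream figure for G.711 with 20-ms packets.

PAYLOAD = 160                  # bytes of G.711 speech per packet (64 kbps * 20 ms)
RTP, UDP, IP, ETHERNET = 12, 8, 20, 18  # header bytes added by each layer
PACKETS_PER_SECOND = 50        # one packet every 20 ms

frame_bytes = PAYLOAD + RTP + UDP + IP + ETHERNET  # 218 bytes per frame
stream_kbps = frame_bytes * PACKETS_PER_SECOND * 8 / 1000
print(stream_kbps)        # 87.2 kbps per RTP stream
print(2 * stream_kbps)    # 174.4 kbps per monitored call (two streams)
```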

For full-duplex connections, the bandwidth speed applies to both incoming and outgoing traffic. (For
instance, for a 100-Mbps connection, there is 100 Mbps of upload bandwidth and 100 Mbps of download
bandwidth.) Therefore, an IP phone call consumes the bandwidth equivalent of a single stream of data.
In this scenario, a G.711 IP phone call with no silence suppression requires 87.2 kbps of the available
bandwidth.
Monitor services send out two streams for each monitored call, both going from the service to the
requestor. This means that, for each monitor session, the bandwidth requirement is for two streams
(174.4 kbps with the G.711 codec).
If a VoIP Monitor service is used to monitor an agent's extension, this bandwidth is required between
the VoIP Monitor service and the supervisor's computer. In Figure 8-2, if supervisor A monitors
agent A, this bandwidth is required on the main office LAN. If supervisor A monitors agent B at the
remote office, another VoIP Monitor service is needed in the remote office (not shown in Figure 8-2).
The bandwidth requirement also applies to the WAN link.
If desktop monitoring is used, the bandwidth requirements are between the agent's desktop and the
supervisor's desktop. If supervisor A monitors agent A, this bandwidth is required on the main office
LAN. If supervisor A monitors agent B in the remote office, the bandwidth requirement also applies to
the WAN link.


Bandwidth Requirements for Monitor Service to Recording and Statistics Service

The Recording and Statistics service is used to record agent conversations. See Table 8-5 for the
bandwidth requirements between the Recording and Statistics service and the monitor service.

Bandwidth Requirements for Recording Service to Cisco Supervisor Desktop

Cisco Agent Desktop Release 6.0 introduced a new Recording Service. This service sends RTP streams
to supervisors for recording playback. The bandwidth used for the RTP streams is identical to silent
monitoring. See Table 8-5 for details.

Bandwidth Requirements for Desktop Monitor

If a VoIP Monitor service is used to monitor or record a call, the bandwidth requirement on the service's
network connection is two streams of voice data.
If a Desktop Monitor service is used, the additional load of the IP phone call is added to the bandwidth
requirement because the IP phone call comes to the same agent where the Desktop Monitor service is
located.
In either case, the bandwidth requirement is the bandwidth between the monitor service and the
requestor:
• VoIP Monitor service to supervisor
• Agent desktop to supervisor
• VoIP service to Recording and Statistics service
• Agent desktop to Recording and Statistics service
Table 8-6 and Table 8-7 display the percentage of total bandwidth available that is required for
simultaneous monitoring sessions handled by a single Desktop Monitor service.
The following notes also apply to the bandwidth requirements for the Desktop Monitor service shown
in Table 8-6 and Table 8-7:
• The bandwidth values are calculated based on the best speed of the indicated connections. A
connection's true speed can differ from the maximum stated due to various other factors.
• The bandwidth requirements are based on upload speed. Download speed affects only the incoming
stream for the IP phone call.
• The data represents the codecs without silence suppression. With silence suppression, the amount
of bandwidth used may be lower.
• The data shown does not address the quality of the speech of the monitored call. If the bandwidth
requirements approach the total bandwidth available and other applications must share access to the
network, latency (packet delay) of the voice packets can affect the quality of the monitored speech.
However, latency does not affect the quality of recorded speech.
• The data represents only the bandwidth required for monitoring and recording. It does not include
the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of
this document.
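The percentages in Table 8-6 and Table 8-7 follow directly from the standard per-stream RTP rates at 50 packets per second (20 ms packets) with IP/UDP/RTP and Ethernet framing overhead included: roughly 87.2 kbps per G.711 stream and 31.2 kbps per G.729 stream. The sketch below is an illustrative check of those tables, not Cisco sizing code; the function names and the assumption that the agent's own call adds one upload stream and each monitoring session adds two are ours.

```python
# Illustrative sketch (assumptions noted above, not Cisco tooling).
PACKETS_PER_SEC = 50
OVERHEAD_BYTES = 40 + 18  # IP/UDP/RTP headers + Ethernet framing, per packet

def stream_kbps(payload_bytes):
    """Bandwidth of one RTP voice stream, in kbps."""
    return (payload_bytes + OVERHEAD_BYTES) * 8 * PACKETS_PER_SEC / 1000

G711_KBPS = stream_kbps(160)  # 87.2 kbps per stream (160-byte payload)
G729_KBPS = stream_kbps(20)   # 31.2 kbps per stream (20-byte payload)

def upload_percent(sessions, link_kbps, per_stream=G711_KBPS, call_streams=1):
    """Percent of upload bandwidth used by the agent's own call
    (call_streams upload streams) plus two monitored streams per
    simultaneous monitoring session. Use call_streams=0 to model a
    dedicated VoIP Monitor server, as in Table 8-8 and Table 8-9."""
    used_kbps = (call_streams + 2 * sessions) * per_stream
    return round(100 * used_kbps / link_kbps, 1)
```

For example, `upload_percent(1, 10_000)` reproduces the 2.6% shown in Table 8-6 for one G.711 session on a 10-Mbps link, and `upload_percent(50, 10_000, per_stream=G729_KBPS, call_streams=0)` reproduces the 31.2% shown for 50 G.729 sessions in Table 8-9.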

Cisco IP Contact Center Enterprise Edition SRND


8-18 OL-7279-04
Chapter 8 Bandwidth Provisioning and QoS Considerations
Network Provisioning

Table 8-6 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.711 Codec
and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps  10 Mbps  1.544 Mbps  640 kbps  256 kbps  128 kbps  64 kbps  56 kbps
Call only 0.1 0.9 5.6 13.6 34.1 68.1 NS1 NS
1 0.3 2.6 16.8 40.9 NS NS NS NS
2 0.4 4.4 28.1 68.1 NS NS NS NS
3 0.6 6.1 39.3 95.4 NS NS NS NS
4 0.8 7.8 50.5 NS NS NS NS NS
5 1.0 9.6 61.7 NS NS NS NS NS
6 1.1 11.3 72.9 NS NS NS NS NS
7 1.3 13.1 84.2 NS NS NS NS NS
8 1.5 14.8 95.4 NS NS NS NS NS
9 1.7 16.6 NS NS NS NS NS NS
10 1.8 18.3 NS NS NS NS NS NS
1. NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.

Table 8-7 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.729 Codec
and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps  10 Mbps  1.544 Mbps  640 kbps  256 kbps  128 kbps  64 kbps  56 kbps
Call only 0.0 0.3 2.0 4.9 12.2 24.4 48.8 55.7
1 0.1 0.9 6.0 14.6 36.6 73.1 NS1 NS
2 0.2 1.6 10.0 24.4 60.9 NS NS NS
3 0.2 2.2 14.1 34.1 85.3 NS NS NS
4 0.3 2.8 18.1 43.9 NS NS NS NS
5 0.3 3.4 22.1 53.6 NS NS NS NS
6 0.4 4.1 26.1 63.4 NS NS NS NS
7 0.5 4.7 30.1 73.1 NS NS NS NS
8 0.5 5.3 34.1 82.9 NS NS NS NS
9 0.6 5.9 38.1 92.6 NS NS NS NS
10 0.7 6.6 42.2 NS NS NS NS NS
1. NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.


Bandwidth Requirements for VoIP Monitor Service

The following notes apply to the bandwidth requirements for the VoIP Monitor service, as listed in
Table 8-8 and Table 8-9:
• Because the VoIP Monitor service is designed to handle a larger load, the number of monitoring
sessions is higher than for the Desktop Monitor service.
• The bandwidth requirements are based on upload speed. Download speed affects only the incoming
stream for the IP phone call.
• Some of the slower connection speeds are not shown in Table 8-8 and Table 8-9 because they are
not supported for a VoIP Monitor service.
• The values in Table 8-8 and Table 8-9 are calculated based on the best speed of the indicated
connections. A connection's true speed can differ from the maximum stated due to various other
factors.
• The data represents the codecs without silence suppression. With silence suppression, the amount
of bandwidth used may be lower.
• The data shown does not address the quality of the speech of the monitored call. If the bandwidth
requirements approach the total bandwidth available and other applications must share access to the
network, latency (packet delay) of the voice packets can affect the quality of the monitored speech.
However, latency does not affect the quality of recorded speech.
• The data represents only the bandwidth required for monitoring and recording. It does not include
the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of
this document.

Table 8-8 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring
Sessions with G.711 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps  10 Mbps  1.544 Mbps
1 0.2 1.7 11.2
5 0.9 8.7 56.1
10 1.7 17.4 NS1
15 2.6 26.2 NS
20 3.5 34.9 NS
25 4.4 43.6 NS
30 5.2 52.3 NS
35 6.1 61.0 NS
40 7.0 69.8 NS
45 7.8 78.5 NS
50 8.7 87.2 NS
1. NS = not supported. The bandwidth of the connection is not large enough to support the number of
simultaneous monitoring sessions.


Table 8-9 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring
Sessions with G.729 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps  10 Mbps  1.544 Mbps
1 0.1 0.6 4.0
5 0.3 3.1 20.1
10 0.6 6.2 40.2
15 0.9 9.4 60.2
20 1.2 12.5 80.3
25 1.6 15.6 NS1
30 1.9 18.7 NS
35 2.2 21.8 NS
40 2.5 25.0 NS
45 2.8 28.1 NS
50 3.1 31.2 NS
1. NS = not supported. The bandwidth of the connection is not large enough to support the number of
simultaneous monitoring sessions.

Bandwidth Requirements for Cisco Supervisor Desktop to Cisco Desktop Base Services

In addition to the bandwidth requirements discussed in the preceding sections, there is traffic from Cisco
Supervisor Desktop to the Cisco Agent Desktop Base Services.
For each agent on the supervisor's team, approximately 2 kilobytes (kB) of traffic per call are exchanged
between Cisco Supervisor Desktop and the Chat service, as shown in Table 8-10.

Table 8-10 Cisco Supervisor Desktop Bandwidth for a Typical Agent Call

Service                             To Cisco Supervisor Desktop    From Cisco Supervisor Desktop
CTI OS                              0                              0
Cisco Agent Desktop Base            1650                           550
Cisco Agent Desktop Recording       0                              0
Cisco Agent Desktop VoIP Monitor    0                              0
Total                               1650                           550

The same typical call scenario was used to capture bandwidth measurements for both Cisco Agent
Desktop and Cisco Supervisor Desktop. See Typical Call Scenario, page 8-15, for more details.
If there are 10 agents on the supervisor's team and each agent takes 20 calls an hour, the traffic is:
10 agents ∗ 20 calls per hour = 200 calls per hour
200 calls ∗ 1650 bytes per call = 330,000 bytes per hour
(330,000 bytes per hour) / (3600 seconds per hour) ≈ 92 bytes per second (Bps)
92 Bps ∗ 8 bits per byte ≈ 733 bps
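The arithmetic above generalizes to any team size. The following sketch (illustrative only; the function name is ours, and the 1650 bytes per call is the measured value from Table 8-10) computes the average Chat-service load toward Cisco Supervisor Desktop:

```python
def chat_traffic_bps(agents, calls_per_hour_per_agent, bytes_per_call=1650):
    """Average bandwidth (bits per second) from the Chat service to
    Cisco Supervisor Desktop for one supervisor's team."""
    bytes_per_hour = agents * calls_per_hour_per_agent * bytes_per_call
    return bytes_per_hour / 3600 * 8  # bytes/hour -> bytes/sec -> bits/sec

# 10 agents at 20 calls per hour each: about 733 bps
```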


There is additional traffic sent if the supervisor is viewing reports or if a silent monitor session is started
or stopped.

Agent Detail Report


This report is refreshed automatically every 30 seconds. Table 8-11 lists the bandwidth usage per report
request.

Table 8-11 Bandwidth Usage for Agent Detail Report (Average Bytes per Report)

Agent Detail Report
Service                             To Cisco Supervisor Desktop    From Cisco Supervisor Desktop
CTI OS                              0                              0
Cisco Agent Desktop Base            220                            200
Cisco Agent Desktop Recording       0                              0
Cisco Agent Desktop VoIP Monitor    0                              0
Total                               220                            200

Bandwidth for a supervisor viewing the Agent Detail Report is:


220 bytes per request ∗ 2 requests per minute = 440 bytes per minute
(440 bytes per minute) / (60 seconds per minute) ≈ 7 bytes per second (Bps)

Team Agent Statistics Report


This function is a one-time transfer occurring when the supervisor opens the report (no automatic
refresh). The supervisor can refresh the report manually. Table 8-12 lists the bandwidth usage per report
request.

Table 8-12 Bandwidth Usage for Team Agent Statistics Report (Average Bytes per Report)

Team Agent Statistics Report
Service                             To Cisco Supervisor Desktop    From Cisco Supervisor Desktop
CTI OS                              0                              0
Cisco Agent Desktop Base            250 per agent                  200
Cisco Agent Desktop Recording       0                              0
Cisco Agent Desktop VoIP Monitor    0                              0
Total                               250 per agent                  200


Team Skill Statistics Report


This report is refreshed automatically every 30 seconds. Table 8-13 lists the bandwidth usage per report
request.

Table 8-13 Bandwidth Usage for Team Skill Statistics Report (Average Bytes per Report)

Team Skill Statistics Report
Service                             To Cisco Supervisor Desktop    From Cisco Supervisor Desktop
CTI OS                              0                              0
Cisco Agent Desktop Base            250 per skill                  200
Cisco Agent Desktop Recording       0                              0
Cisco Agent Desktop VoIP Monitor    0                              0
Total                               250 per skill                  200

Bandwidth for a supervisor viewing the Team Skill Statistics Report with 10 skills in the team is:
250 bytes per skill ∗ 10 skills ∗ 2 requests per minute = 5000 bytes per minute
(5000 bytes per minute) / (60 seconds per minute) ≈ 83 bytes per second (Bps)
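The report calculations in this section all follow the same pattern. A small hypothetical helper (the name is ours, not from the product) makes it explicit:

```python
def report_bytes_per_sec(bytes_per_request, requests_per_minute=2):
    """Average bytes per second (Bps) for a report that refreshes
    automatically every 30 seconds (that is, 2 requests per minute)."""
    return bytes_per_request * requests_per_minute / 60

# Agent Detail Report:               report_bytes_per_sec(220)      -> about 7 Bps
# Team Skill Statistics, 10 skills:  report_bytes_per_sec(250 * 10) -> about 83 Bps
```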

Start and Stop Silent Monitoring Requests


Requests to start or stop silent monitor sessions result in a one-time bandwidth usage per request, as
listed in Table 8-14.

Table 8-14 Bandwidth Usage to Start or Stop Silent Monitoring (Average Bytes per Request)

                                    Start Monitoring                Stop Monitoring
Service                             To Cisco      From Cisco        To Cisco      From Cisco
                                    Supervisor    Supervisor        Supervisor    Supervisor
                                    Desktop       Desktop           Desktop       Desktop
CTI OS                              0             0                 0             0
Cisco Agent Desktop Base            775           450               0             0
Cisco Agent Desktop Recording       0             0                 0             0
Cisco Agent Desktop VoIP Monitor    275           500               150           325
Total                               1050          950               150           325


Service Placement Recommendations


Table 8-15 summarizes recommendations for service placements that help minimize bandwidth to the
desktop. These recommendations apply to deployments with centralized call processing and remote
agents.

Table 8-15 Service Placement Recommendations

Service                              Location                                   Reason
Cisco Agent Desktop Base Services    Central                                    Traffic to/from centralized IPCC components outweighs the traffic to/from desktops.
VoIP Monitor Service                 Remote, near agents                        Span of agent traffic to service. This is a requirement, not a recommendation, for silent monitoring and recording.
Recording Service                    Close to agents and supervisor if in one   No CTI. Lots of traffic to/from desktops and from the VoIP Monitor Service(s).
                                     location; central if not
Cisco Supervisor Desktop             With the agents                            Close to the VoIP Monitor service.

For multiple remote locations, each remote location must have a VoIP Monitor service. Multiple VoIP
Monitor services are supported in a single logical contact center. The Recording and Statistics service
can be moved to the central location if the WAN connections are able to handle the traffic. If not, each
site should have its own logical contact center and installation of the Cisco Desktop software.

Quality of Service (QoS) Considerations


When considering which traffic flows are mission-critical and need to be put in a priority queue, rank
them in the following order of importance:
1. Customer experience
2. Agent experience
3. Supervisor experience
4. Administrator experience
With this ranking for the service-to-service flows, traffic between the Enterprise Service and the CTI
service (call events) is the most critical. Based on the service placement recommendations in Table 8-15,
both services should reside in the central location. However, QoS considerations must still be applied.
This traffic should be classified as AF31, similar to voice call control and signaling traffic. The traffic
from Cisco Agent Desktop to and from the CTI service (call events, call control) should also be
prioritized and classified as AF31.
For IP Phone Agent, communications between the IP Phone Agent service and the CTI service are also
important because they affect how quickly agents can change their agent state. This traffic should also
be classified as AF31.

Note Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to
DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31).
Therefore, in the interim, Cisco recommends that you reserve both AF31 and CS3 for call signaling.


The traffic from Cisco Agent Desktop to and from the Chat service (agent information, call status) is
less critical and should be classified as AF21 or AF11.
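The PHB names used in this section correspond to fixed DSCP values: RFC 2597 defines AFxy as DSCP = 8x + 2y, and RFC 2474 defines CSx as DSCP = 8x. A quick sketch for verifying marking configurations (the helper names are ours):

```python
def af_dscp(cls, drop_precedence):
    """DSCP value for Assured Forwarding class AF<cls><drop_precedence>
    per RFC 2597."""
    return 8 * cls + 2 * drop_precedence

def cs_dscp(cls):
    """DSCP value for Class Selector CS<cls> per RFC 2474."""
    return 8 * cls

# Markings recommended in this section:
MARKINGS = {
    "AF31": af_dscp(3, 1),  # 26: call signaling (current marking)
    "CS3": cs_dscp(3),      # 24: call signaling (newer marking)
    "AF21": af_dscp(2, 1),  # 18: chat and agent-status traffic
    "AF11": af_dscp(1, 1),  # 10: chat and agent-status traffic, lower class
}
```

This is why the note above recommends reserving both DSCP 26 (AF31) and DSCP 24 (CS3) for call signaling during the transition.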

Cisco Desktop Component Port Usage


For details on network usage, refer to the Cisco Contact Center Product Port Utilization Guide,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/port_uti/index.htm
The Desktop application tags only the RTP packets that are sent from the Desktop Monitor software,
VoIP Monitor Service, or Recording Service for silent monitoring, recording, and recording playback.

Integrating Cisco Agent Desktop Release 6.0 into a Citrix Thin-Client Environment
For guidance on installing Cisco Agent Desktop Release 6.0 applications in a Citrix thin-client
environment, refer to the documentation for Integrating CAD 6.0 into a Citrix Thin-Client Environment,
available at
http://www.cisco.com/application/pdf/en/us/partner/products/ps427/c1244/cdccont_0900aecd800e9db4.pdf

Chapter 9
Securing IPCC

This chapter describes the importance of securing the IPCC solution and points to the various security
resources available. It includes the following sections:
• Introduction to Security, page 9-1
• Security Best Practices, page 9-2
• Patch Management, page 9-3
• Antivirus, page 9-4
• Cisco Security Agent, page 9-5
• Firewalls and IPSec, page 9-6
• Security Features in Cisco CallManager Release 4.0, page 9-8

Introduction to Security
Achieving IPCC system security requires an effective security policy that accurately defines access,
connection requirements, and systems management within your contact center. Once you have a good
security policy, you can use many state-of-the-art Cisco technologies and products to protect your data
center resources from internal and external threats and to ensure data privacy, integrity, and system
availability.
Cisco has developed a set of documents with detailed design and implementation guidance for various
Cisco networking solutions in order to assist enterprise customers in building an efficient, secure,
reliable, and scalable network. These Solution Reference Network Design (SRND) guides, which can be
found at http://www.cisco.com/go/srnd, provide proven best practices to build out a network
infrastructure based on the Cisco Architecture for Voice, Video, and Integrated Data (AVVID). Among
them are the following relevant documents relating to Security and IP Telephony that should be used in
order to successfully deploy an IPCC network. Updates and additions are posted periodically, so
frequent site visits are recommended.
• IP Telephony SRND for Cisco CallManager 3.3
• IP Telephony SRND for Cisco CallManager 4.0
• Data Center Networking: Securing Server Farms SRND
• Data Center Networking: Integrating Security, Load Balancing, and SSL Services
An adequately secure IPCC configuration requires a multi-layered approach to protecting systems from
targeted attacks and the propagation of viruses. A first approach is to ensure that the servers hosting the
Cisco contact center applications are physically secure. They must be located in data centers to which


only authorized personnel have access. The next level of protection is to ensure the servers are running
antivirus applications with the latest virus definition files and are kept up-to-date with Microsoft and
other third-party security patches. The servers may be hardened according to the guidelines provided in
the security best-practices guides applicable to your release of the application.
Another level of security is the network segmentation of the servers. None of the IPCC servers are meant
to be deployed as internet-facing systems or bastion hosts (with the only exception of the Web
Collaboration option, if deployed). While desktop-based applications such as the CTI OS, Cisco Agent
Desktop, or Cisco Supervisor Desktop tend to be deployed in open corporate VLANs, servers making
up the IPCC solution should be placed in the data center behind a secure network. In cases where the
servers are geographically distributed, proper care should be taken to ensure the network links are
secure.

Security Best Practices

Default (Standard) Windows 2000 Server Operating System Installation


The IPCC solution consists of a number of server applications which are managed differently. The
security best practices for the following servers vary slightly from the other applications in the IPCC
solution:
• ICM Routers
• ICM Loggers
• ICM Peripheral Gateways
• ICM Administrative Workstations (Historical Data Server and WebView)
• CTI-based servers (CTI, CTI OS, and Cisco Agent Desktop servers)
The security best practices for these servers are consolidated in a document that describes security
hardening configuration guidelines for the Microsoft Windows 2000 Server environment. This
document, the Security Best Practices for Cisco Intelligent Contact Management Software, is available
at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/prod_technical_reference_list.html
The recommendations contained in the Security Best Practices guide are based in part on hardening
guidelines published by Microsoft, such as those found in the Windows 2000 Security Hardening Guide,
as well as other third-party vendors' hardening recommendations. The purpose of the Security Best
Practices guide is to further interpret and customize those guidelines as they specifically apply to the
contact center server products. Where exceptions or specific recommendations are made, the Security
Best Practices guide strives to present the underlying rationale for the deviation.
The Security Best Practices guide assumes that the reader is an experienced network administrator
familiar with securing Windows 2000 Server. It further assumes that the reader is fully familiar with the
applications that compose the ICM and IPCC solutions, as well as with the installation and
administration of those systems. It is the additional intent of these best practices to provide a
consolidated view of securing the various third-party applications and operating system upon which the
Cisco IP Contact Center applications depend.


Cisco-Provided Windows 2000 Server Installation (CIPT OS)


The IP IVR, Internet Service Node (ISN), and Cisco CallManager servers all support a hardened
operating system called the Cisco IP Telephony Operating System. The hardening specifications for this
operating system can be found in the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd

Patch Management

Default (Standard) Windows 2000 Server Operating System Installation


Security Patches
The security updates qualification process for Contact Center products is documented at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/prod_bulletins_list.html
Upon the release of a Critical or Important security update from Microsoft, Cisco assesses the impact on
the ICM-based applications and releases a field notice with this assessment, typically within 24 hours.
For the security updates categorized as Impacting, Cisco continues to test its products to further
determine if there are any potential conflicts after the initial field notice. A field notice update is released
when those tests are completed.
Customers can set up a profile to be alerted of field notices announcing security updates by going to the
following link:
http://www.cisco.com/cgi-bin/Support/FieldNoticeTool/field-notice
Customers should follow Microsoft's guidelines regarding when and how they should apply these
updates.
Cisco recommends that Contact Center customers separately assess all security patches released by
Microsoft and install those deemed appropriate for their environments. Cisco will continue to provide a
service of separately assessing and, where necessary, validating higher severity security patches that
may be relevant to the Contact Center software products.

Automated Patch Management


ICM-based servers support integration with Microsoft's Software Update Services (SUS), whereby
customers control which patches are deployed to those servers and when. The servers can be
configured for Automatic Windows Updates, but Cisco recommends that they point to local Software
Update Services (SUS) or Windows Update Services (WUS) servers.

Cisco-Provided Windows 2000 Server Installation (CIPT OS)


Security Patches
The Cisco CallManager Security Patch Process is available at
http://www.cisco.com/application/pdf/en/us/guest/products/ps556/c1167/ccmigration_09186a0080157c73.pdf


A document providing information for tracking Cisco-supported operating system files, SQL Server, and
security files is available at
http://www.cisco.com/univercd/cc/td/doc/product/voice/c_callmg/osbios.htm
This document also provides Cisco recommendations for applying software updates (Cisco
CallManager, IP IVR, and ISN only).
The Security Patch and Hotfix Policy for Cisco CallManager specifies that any applicable patch deemed
Severity 1 or Critical must be tested and posted to http://www.cisco.com within 24 hours as Hotfixes.
All applicable patches are consolidated and posted once per month as incremental Service Releases.
A notification tool (email service) for providing automatic notification of new fixes, OS updates, and
patches for Cisco CallManager and associated products is available at
http://www.cisco.com/cgi-bin/Software/Newsbuilder/Builder/VOICE.cgi

Automated Patch Management


The Cisco IP Telephony Operating System configuration and patch process does not currently allow for
an automated patch management process.

Antivirus
Applications Supported
A number of third-party antivirus applications are supported for the IPCC system. For a list of
applications and versions supported on your particular release of the IPCC software, refer to the ICM
platform hardware specifications and related software compatibility data listed in the Cisco Intelligent
Contact Management (ICM) Bill of Materials and the Cisco CallManager product documentation
(available at http://www.cisco.com).

Note Deploy only the supported applications for your environment, otherwise a software conflic