
Chapter 3: Capacity Planning

Microsoft Lync Server 2010


Published: March 2012

This document is provided as-is. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes. Copyright 2012 Microsoft Corporation. All rights reserved.

Contents
Capacity Planning
Lync Server 2010 User Models
Capacity Planning Requirements and Recommendations
    Capacity Planning Using the User Models
        Estimating Voice Usage and Traffic
        Deployment Guidelines for Mediation Server
    Scenario-Based Capacity Planning
Media Traffic Network Usage
Database Activity for Capacity Planning
Collaboration and Application Sharing Capacity Planning
Address Book Capacity Planning
Capacity Planning for Response Group
Capacity Planning for Call Park
Capacity Planning for Group Chat Server
Capacity Planning for Mobility

Capacity Planning
The topics in this section help you understand how to plan and deploy Microsoft Lync Server 2010 communications software so that you can adequately plan for the number of users in your organization and plan for the server load that their activities generate. Two tools are available as free downloads to help you with capacity planning: the Lync Server 2010 Capacity Calculator at http://go.microsoft.com/fwlink/?LinkId=212657 and the Lync Server 2010 Stress and Performance Tool at http://go.microsoft.com/fwlink/?LinkId=212599.

In This Section
Lync Server 2010 User Models
Capacity Planning Requirements and Recommendations
Media Traffic Network Usage
Database Activity for Capacity Planning
Collaboration and Application Sharing Capacity Planning
Address Book Capacity Planning
Capacity Planning for Response Group
Capacity Planning for Call Park
Capacity Planning for Group Chat Server
Capacity Planning for Mobility

Lync Server 2010 User Models


The user models described here provide the basis for the capacity planning measurements and recommendations described later in this section.

Lync Server 2010 User Models


The following table describes the user model for registration, contacts, instant messaging (IM), and presence for Microsoft Lync Server 2010 communications software.

Environment and Registration User Model

Percentage of Active Directory users

We assume that 70% of all Active Directory users in the organization are enabled for Lync Server 2010, and that 80% of those enabled users are logged on to Lync Server each day (80% concurrency). These concurrent users are the basis for the numbers in the rest of this section.

Active Directory distribution groups

We assume that the number of Active Directory distribution groups in the organization is equal to three times the number of all users in Active Directory. The distribution groups have the following sizes:
64% have 2-30 users
13% have 31-50 users
10% have 51-100 users
13% have 101-500 users

Voice over IP (VoIP) users

50% of Lync Server users are enabled for unified communications (UC); that is, their phone numbers are owned by Lync Server 2010.

Registered client distribution

65% of clients are Microsoft Lync 2010 clients, including Microsoft Lync 2010, Microsoft Lync 2010 Phone Edition, and Microsoft Lync 2010 Mobile.
30% of clients are Office Communicator 2007 R2 or Office Communicator 2007 clients, including Communicator Web Access, Communicator Phone Edition, and Communicator Mobile.
5% of clients use Microsoft Lync Web App.
Client multipoint of presence (MPOP) is 1:1.5, meaning that 50% of users have two clients signed in at the same time.

Remote user distribution

70% of users connect internally. 30% of users connect through an Edge Server, and a Director (recommended).

Contact distribution

The maximum number of contacts a user has is 1,000. Less than 1% of users have 1,000 contacts. Less than 25% of users have 100 or more contacts.
Users with public cloud connectivity average 80 contacts. Of these users' contacts:
50% are within the organization. 10% of those users are remote users, connecting from outside the firewall.
40% are public cloud users (such as users of AOL, Yahoo!, or MSN).
10% are from federated partners.
Users without public cloud connectivity average 50 contacts. Of these users' contacts:
80% are within the organization. 10% of those users are remote users, connecting from outside the firewall.
20% are from federated partners.

Session time

The average user logon session lasts 12 hours. All users log on within 60 minutes of the start of the session.

IM and Presence User Model

Peer-to-peer IM sessions

Each user averages six peer-to-peer IM sessions per day, with 10 instant messages per session.

Presence polling

Overall, we assume presence polling at an average of 40 polls per user per hour, with a maximum of eight per user per minute. For each user, assume an average of:
One poll per day of the presence of users in the user's organization tab (but not the Contacts list). The average number of noncontacts in the user's organization tab is 15 users.
Two contact card viewing operations per day.
One presence poll every time the user clicks another user to start a conversation, estimated at once per hour.
Six user searches per hour.
When the user opens or previews an email in Outlook, a poll of the presence of users in the To: and Cc: fields of the email, estimated at five emails per hour and four users per email.

Presence publication

Presence publications average 4 per user per hour, with a maximum of six per user per hour.

Presence subscriptions

When one user adds another as a contact, the first user subscribes to five categories of information about the second user. Updates of these categories of information are automatically sent to the first user. The user model assumes a default of 1,000 category subscriptions per user, which means that a user could be a contact of as many as 200 other users. 1,000 category subscriptions is also the default maximum; if needed, you can raise the maximum allowed to 3,000. For details about changing this default, see Planning for IM and Presence in the Planning documentation.

The following table describes the user model for address book use.

Address Book Usage User Model
Address Book Web Query only (all queries are performed by the Address Book Web Query service)

Four prefix queries per user per day.
60 exact search queries per user per day. 40% of those are batched, with an average of 20 contacts per query; the other 60% of the queries are for a single contact.
25 photo queries per user per day. 24 are for a single photo; the other is a batch query with an average of 20 contacts.
One total organization search query per user per day.

Mixed mode, in which both address book files and web queries are used (the default mode)

Only two types of queries go to the network: photo queries and total organization search queries.
25 photo queries per user per day. 24 are for a single photo; the other is a batch query with an average of 20 contacts.
One total organization search query per user per day.

The following table describes the conferencing model.

Conferencing Model
Scheduled meetings versus "Meet now" meetings

60% scheduled, 40% unscheduled.

Conferencing client distribution

For scheduled meetings:
75% of conferencing users use Lync 2010.
20% of conferencing users use Microsoft Lync Web App.
5% of conferencing users use Microsoft Lync 2010 Attendee.
For unscheduled meetings:
65% of conferencing users use Lync 2010.
30% of conferencing users use earlier clients, including Office Communicator 2007 R2, Office Communicator 2007, and Microsoft Office Communicator Web Access (2007 release).
5% of conferencing users use Microsoft Lync Web App.

Meeting concurrency

5% of users will be in conferences during working hours. Thus, in an 80,000-user pool, as many as 4,000 users might be in conferences at any one time.

Meeting audio distribution

40% mixed VoIP audio and dial-in conferencing, with a 3:1 ratio of VoIP users to dial-in users.
35% VoIP audio only.
15% dial-in conferencing audio only.
10% no audio (IM-only conferences, with an average of five messages sent per user).

Media mix for web conferences

75% of conferences are web conferences, with audio plus some other collaboration modalities. For these conferences, the other collaboration methods are as follows:

Note: These numbers add up to more than 100% because one conference can have multiple collaboration methods.

50% add application sharing.
50% add instant messaging (average of two IMs per user).
20% add data collaboration, including PowerPoint or whiteboard. In these, an average of two PowerPoint files are presented per conference, with an average PowerPoint file size of 5 MB. (After a PowerPoint file is uploaded, Lync Server converts it to another format with an average file size of 25 MB.) Average of 20 annotations per whiteboard.
20% add video. 70% of video sessions have Common Intermediate Format (CIF) resolution and 30% have VGA.

Meeting participant distribution

50% internal, authenticated users.
25% remote access, authenticated users.
15% anonymous users.
10% federated users.

Meeting join distribution

Users are assumed to join a meeting at the following rates:
0-2 minutes after meeting start: 28%
3-6 minutes: 15%
7-10 minutes: 6%
11-27 minutes: 10%
28-29 minutes: 4%
30-33 minutes: 13%
34-36 minutes: 4%
37-40 minutes: 2%
41-54 minutes: 5%
55-57 minutes: 4%
58-59 minutes: 6%

Lync Server 2010 has a maximum supported meeting size of 250 users. Each pool can host one 250-user meeting at a time. While this large meeting is occurring, the pool can also host other, smaller conferences. For details, see the Conferencing Maximums section in Capacity Planning Requirements and Recommendations.

The following table shows the distribution of typical meeting sizes and durations.

Conference Size and Duration Model
Number of attendees (in addition to presenter) / % of total conferences / Average duration in minutes

1        19%     17
2        20%     36
3        21%     42
4        14%     46
5        9%      49
6        5%      53
7        4%      55
8        2%      56
9-15     4%      65
16-35    1%      93
36-100   <1%     100
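For rough planning, the size and duration distribution above can be reduced to an expected per-conference duration. This sketch treats the "<1%" share for 36-100 attendees as 1%, which is an assumption; with that, the shares sum to 100%.

```python
# Expected conference duration from the size/duration model above.
# The "<1%" share for the 36-100 attendee row is approximated as 1%.
distribution = [  # (share of total conferences, average duration in minutes)
    (0.19, 17), (0.20, 36), (0.21, 42), (0.14, 46), (0.09, 49),
    (0.05, 53), (0.04, 55), (0.02, 56), (0.04, 65), (0.01, 93),
    (0.01, 100),
]

total_share = sum(share for share, _ in distribution)  # 1.00 with the assumption
expected_minutes = sum(share * mins for share, mins in distribution) / total_share

print(round(expected_minutes, 1))  # roughly 40.6 minutes per conference
```

A figure like this is useful when converting the conference-count estimates later in this chapter into concurrent-load estimates.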

The following table provides details about the user model for conferences involving dial-in users.

Dial-In Conferencing User Model

Authenticated/anonymous

70% authenticated, 30% anonymous (and prompted for a recorded name).

Transfer

10% of dial-in callers are transferred to another pool.

Call duration and music on hold

Average call duration without music on hold: 50 seconds. 50% of dial-in users hear music on hold, for an average of 5 minutes.

Dual-tone multifrequency (DTMF)

15% of conferences that are dial-in only have phone leaders. 10% of mixed conferences that include dial-in users also have phone leaders. 20% of phone leaders use two DTMF commands per conference. When the phone leader removes the mute everyone command, 50% of users then unmute themselves by using DTMF.

The following table provides details about the user model for conference lobbies.

Conference Lobby User Model

Number of users in lobby

5% of dial-in users go through the lobby, and 25% of other users go through the lobby.

Admitting from lobby

80% are admitted by a presenter before client timeout. 10% are rejected by a presenter before client timeout. 10% wait in the lobby until client timeout. Average wait time in the lobby is 5 minutes.

The following table describes the user model for other peer-to-peer sessions.

Peer-to-Peer Sessions User Model

Application sharing

Each user participates in five peer-to-peer application sharing sessions per month, for an average of 0.25 sessions per day. The average session lasts 16 minutes.

File transfer

Each user participates in one peer-to-peer file transfer session per month (as part of an IM session), for an average of 0.05 sessions per day. The average file size transferred is 1 MB.

Busy Hour
For peer-to-peer sessions, peak load is calculated by using busy hour call attempts (BHCA). This voice industry term assumes that 50% of all calls for the day are completed in 20% of the time, that is, in 1.6 hours of an eight-hour day. BHCA is calculated by using the following formula:

BHCA = (total calls * 0.5) / 1.6

Performance testing simulated the busy hour by running VoIP and other peer-to-peer sessions at a busy hour load for at least 1.6 hours per day.

Conferencing peak load assumes that 75% of all conferences for an eight-hour day happen in four peak hours. Those peak hours have 1.5 times the average conferencing load.
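The busy-hour formulas above can be sketched as a quick estimate. The daily call volume in the example is hypothetical, not a figure from this documentation.

```python
# Peak-load estimates from the busy hour formulas above.

def busy_hour_call_attempts(total_daily_calls: float) -> float:
    """BHCA = (total calls * 0.5) / 1.6: half the day's calls are assumed
    to complete in 20% of an 8-hour day, which is 1.6 hours."""
    return (total_daily_calls * 0.5) / 1.6

def peak_hour_conference_load(avg_hourly_conference_load: float) -> float:
    """75% of a day's conferences in 4 of 8 hours works out to 1.5x the
    average hourly conferencing load (0.75 / 0.5 = 1.5)."""
    return avg_hourly_conference_load * 1.5

# Example: a hypothetical 40,000 peer-to-peer calls per day across a pool.
print(busy_hour_call_attempts(40_000))  # 12500.0 call attempts in the busy hour
```

The BHCA figure, rather than the daily total, is what the per-server call capacities later in this chapter should be compared against.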

Capacity Planning Requirements and Recommendations


The following sections provide guidance for your capacity planning in two ways:

Capacity Planning Using the User Models is a general summary of server requirements that assumes that the usage of Microsoft Lync Server 2010 in your organization is similar to the usage outlined in the previous section, Lync Server 2010 User Models. If your organization follows typical user models, you can use this section to quickly see the number of servers you will need without doing any calculations.

Scenario-Based Capacity Planning provides granular capacity numbers for each workload in a variety of situations. If you can project your Lync Server 2010 usage in detail, you can calculate the number of servers you will need and their expected load. This section is especially useful if your usage will differ from the user models in Lync Server 2010 User Models.

Capacity Planning Using the User Models


This section provides guidance for how many servers you will need at a site for the number of users at that site, given usage that is similar to the usage described in Lync Server 2010 User Models. The following table summarizes these recommendations.
Server role and maximum number of users supported:

One Standard Edition server: 5,000 users.
Front End pool with eight Front End Servers and one Back End Server: 80,000 unique users, plus 50% multiple points of presence (MPOP), for a total of 120,000 endpoints.
One A/V Conferencing Server: 20,000 users.
One Edge Server: 15,000 remote users.
One Director: 15,000 remote users.
One Monitoring Server: 250,000 users if not collocated with Archiving Server; 100,000 if collocated.
One Archiving Server: 500,000 users if not collocated with Monitoring Server; 100,000 if collocated.
One Mediation Server: see the Mediation Server section later in this topic.

Front End Server

In a Front End pool, you should have one Front End Server for every 10,000 users homed in the pool, plus an additional Front End Server to maintain performance when one server is unavailable. The maximum number of users in one Front End pool is 80,000 (a constraint of the back-end database), and the maximum number of Front End Servers in a pool is 10. If you have more than 80,000 users at a site, you can deploy more than one Front End pool. When you account for the number of users in a Front End pool, include the users homed on Survivable Branch Appliances and Survivable Branch Servers at branch offices that are associated with this Front End pool.

The additional Front End Server helps maintain performance because when an active server is unavailable, its connections are transferred automatically to the other servers in the pool. For example, if you have 30,000 users and three Front End Servers, and one server becomes unavailable, the connections of 10,000 users must be transferred to the other two servers, for an average of 5,000 transfers per server; the remaining two servers will then each host 15,000 users, which is more than recommended. If you instead start with four Front End Servers for your 30,000 users and one becomes unavailable, a total of 7,500 users are moved to the three other servers, for an average of 2,500 transfers per server and 10,000 users per server. This is a much more manageable load.

When you have more servers in the pool, the need for this additional server is less important. For example, if you have 80,000 users in the pool, eight servers are generally sufficient; if one server is unavailable, about 10,000 users are moved to the seven other servers, for an average of about 1,400 per server.
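The Front End Server sizing rule above can be sketched as follows. The pool sizes in the examples are illustrative, and the "spare matters less at eight servers" behavior is a simplification of the guidance in the surrounding text.

```python
import math

POOL_MAX_USERS = 80_000    # back-end database constraint
POOL_MAX_SERVERS = 10

def front_end_servers(users_in_pool: int) -> int:
    """One Front End Server per 10,000 users, plus one spare for failover
    headroom; the spare matters less once the pool reaches eight servers."""
    if users_in_pool > POOL_MAX_USERS:
        raise ValueError("more than 80,000 users: deploy additional pools")
    base = math.ceil(users_in_pool / 10_000)
    servers = base if base >= 8 else base + 1
    return min(servers, POOL_MAX_SERVERS)

print(front_end_servers(30_000))  # 4: a failure moves ~2,500 users to each survivor
print(front_end_servers(80_000))  # 8: a failure moves ~1,400 users to each survivor
```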
For a Front End pool with 80,000 users, eight Front End Servers are sufficient for performance, including failover performance, in typical deployments that follow the Lync Server 2010 User Models. You may want to deploy more than eight Front End Servers in these scenarios:
The hardware for your Front End Servers does not meet the recommendations in Server Hardware Platforms.
Your organization's usage differs significantly from the user models, such as significantly more conferencing traffic.
You are collocating the A/V Conferencing Server with the Front End Server in a Front End pool of more than 10,000 users.
The following table shows the average bandwidth for IM and presence, given the user model as defined in Lync Server 2010 User Models.


Average bandwidth per user: 1.3 Kbps. Bandwidth requirement per Front End Server with 10,000 users: 13 Mbps.

Conferencing Maximums

Given the user model assumption that 5% of users in a pool may be in a conference at any one time, a pool of 80,000 users could have about 4,000 users in conferences at one time. These conferences are expected to be a mix of media (for example, some IM-only, some IM with audio, some audio/video) and participant counts. There is no hard limit for the actual number of conferences allowed, and actual usage determines the actual performance. For example, if your organization has many more mixed-mode conferences than the user model assumes, you might need to deploy more Front End Servers or A/V Conferencing Servers than this document recommends. For details about the assumptions in the user model, see Lync Server 2010 User Models. Additionally, if you know the average number of conferences you will be hosting, you can use the tables in Scenario-Based Capacity Planning to estimate your server needs.

The maximum supported conference size hosted by Lync Server is 250 participants. While this large conference is going on, the pool can still support other conferences as well: the pool still supports a total of 5% of pool users in concurrent conferences. For example, in a pool of eight Front End Servers and 80,000 users, while the 250-user conference is happening, Lync Server supports 3,750 other users participating in smaller conferences.

Note: 250 is the maximum for shared pool deployments, based on Microsoft testing. For information about supporting meetings with more than 250 participants, see "Microsoft Lync Server 2010 Support for Large Meetings" at http://go.microsoft.com/fwlink/?LinkId=242073.

Regardless of the number of users homed on the Front End pool or Standard Edition server, Lync Server supports a minimum of 125 other users participating in smaller conferences while a 250-user conference is happening.
For example, on a Standard Edition server with 5,000 users, a 250-user conference can be concurrent with other smaller conferences totaling 125 users.

A/V Conferencing Server

If you have fewer than 10,000 users at a site and typical A/V usage, you can collocate the A/V Conferencing Server role with your Front End Servers. If you have more than 10,000 users, we recommend that you deploy the A/V Conferencing Server separately from the Front End pool. Assuming usage similar to what is described in Lync Server 2010 User Models, you should deploy one A/V Conferencing Server for each 20,000 users at a site. At a minimum, we recommend two A/V Conferencing Servers for high availability. When you account for the number of users for the A/V Conferencing Servers, include the users homed on Survivable Branch Appliances and Survivable Branch Servers at branch offices that are associated with a Front End pool at this site.
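The conferencing concurrency math above (5% of pool users in conferences at once, a 250-user large meeting carving capacity out of that 5%, and a floor of 125 users for other conferences) can be sketched as:

```python
# Concurrent conferencing capacity per the user model above.

def concurrent_conference_users(pool_users: int) -> int:
    """5% of the pool may be in conferences at any one time."""
    return int(pool_users * 0.05)

def capacity_alongside_250_user_meeting(pool_users: int) -> int:
    """Other conference participants supported while a 250-user meeting
    is running; never less than the documented floor of 125."""
    return max(concurrent_conference_users(pool_users) - 250, 125)

print(capacity_alongside_250_user_meeting(80_000))  # 3750
print(capacity_alongside_250_user_meeting(5_000))   # 125 (Standard Edition example)
```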



Another guide for A/V Conferencing Server capacity planning is that each A/V Conferencing Server can support about 1,000 concurrent A/V conference users. With the user model assumption of 5% of an organization's users in concurrent conferences, this leads to the recommendation of one A/V Conferencing Server per 20,000 users at the site. If your organization has a higher or lower percentage of users participating in A/V conferences, you can adjust the number of servers accordingly.

Edge Server

You should deploy one Edge Server for every 15,000 users who will access a site remotely. At a minimum, we recommend two Edge Servers for high availability. When you account for the number of users for the Edge Servers, include the users homed on Survivable Branch Appliances and Survivable Branch Servers at branch offices that are associated with a Front End pool at this site.

Note: To improve the performance of the A/V Conferencing Edge service on your Edge Servers, enable receive-side scaling (RSS) on the network adapters on your Edge Servers. RSS enables incoming packets to be handled in parallel by multiple processors on the server. For details, see http://go.microsoft.com/fwlink/?LinkId=206013. For information about how to enable RSS, see your network adapter documentation.

Director

For performance, you should deploy one Director for every 15,000 users who will access a site remotely. At a minimum, we recommend two Directors for high availability. When you account for the number of users for the Directors, include the users homed on Survivable Branch Appliances and Survivable Branch Servers at branch offices that are associated with a Front End pool at this site.
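The per-role ratios above (one A/V Conferencing Server per 20,000 users at a site, one Edge Server and one Director per 15,000 remote users, each with a two-server minimum for high availability) share the same shape and can be collected into one sketch. The site and remote user counts below are hypothetical.

```python
import math

def servers_needed(users: int, users_per_server: int, ha_minimum: int = 2) -> int:
    """Round up to whole servers and keep at least two for high availability."""
    return max(math.ceil(users / users_per_server), ha_minimum)

site_users, remote_users = 50_000, 18_000    # hypothetical example site
print(servers_needed(site_users, 20_000))    # A/V Conferencing Servers: 3
print(servers_needed(remote_users, 15_000))  # Edge Servers: 2
print(servers_needed(remote_users, 15_000))  # Directors: 2
```

Remember that the user counts fed into these ratios include users homed on Survivable Branch Appliances and Survivable Branch Servers associated with the site.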
Mediation Server

How many Mediation Servers to deploy depends on many factors, including the hardware used for the Mediation Server, the number of VoIP users you have, the number of gateway peers that each Mediation Server pool controls, the busy hour traffic through those gateways, and the percentage of calls that bypass the Mediation Server. The following tables provide guidelines for how many concurrent calls a Mediation Server can handle. For details about Mediation Server scalability, see Estimating Voice Usage and Traffic and Deployment Guidelines for Mediation Server.

Stand-alone Mediation Server Capacity: 90% Internal Users, 10% External Users
Server hardware: Dual processor, quad-core, 2.26 GHz hyperthreaded CPU with hyper-threading disabled, with 32 GB memory and four 1 GB network adapter cards.
Maximum number of calls: 950. Maximum number of T1 lines: 40. Maximum number of E1 lines: 30.

Server hardware: Dual processor, quad-core, 2.26 GHz hyperthreaded CPU, with 32 GB memory and four 1 GB network adapter cards.
Maximum number of calls: 1200. Maximum number of T1 lines: 50. Maximum number of E1 lines: 37.

Note: Although servers with 32 GB of memory were used for performance testing, servers with 16 GB of memory are supported for a stand-alone Mediation Server and are sufficient to provide the performance shown in this table.

Stand-alone Mediation Server Capacity: 70% Internal Users, 30% External Users

Server hardware: Dual processor, quad-core, 2.26 GHz hyperthreaded CPU with hyper-threading disabled, with 32 GB memory and four 1 GB network adapter cards.
Maximum number of calls: 800. Maximum number of T1 lines: 33. Maximum number of E1 lines: 25.

Server hardware: Dual processor, quad-core, 2.26 GHz hyperthreaded CPU, with 32 GB memory and four 1 GB network adapter cards.
Maximum number of calls: 1075. Maximum number of T1 lines: 45. Maximum number of E1 lines: 34.

Note: Although servers with 32 GB of memory were used for performance testing, servers with 16 GB of memory are supported for a stand-alone Mediation Server and are sufficient to provide the performance shown in this table.

Mediation Server Capacity (Mediation Server Collocated with Front End Server)

Server hardware: Dual processor, quad-core, 2.3 GHz hyperthreaded CPU, with 16 GB memory and two 1 GB network adapter cards.
Maximum number of calls: 226.

Monitoring Server and Archiving Server

One Monitoring Server can support up to 250,000 users when the Monitoring Server and the Monitoring store are located on the same server and not collocated with other server roles. One Archiving Server can support up to 500,000 users when the Archiving Server and the Archiving store are located on the same server and not collocated with other server roles. If Monitoring Server and Archiving Server are collocated together, each can support up to 100,000 users. If the number of users allows, one Monitoring Server and one Archiving Server at a single site can serve the users at all your central sites and branch offices.

Standard Edition Server

One Standard Edition server can support up to 5,000 users.

In This Section

Estimating Voice Usage and Traffic
Deployment Guidelines for Mediation Server

Estimating Voice Usage and Traffic

The Microsoft Lync Server 2010 Planning Tool uses the following metrics to estimate user traffic at each site and the number of ports that are required to support that traffic:
For light traffic (one PSTN call per user per hour), figure 15 users per port.
For medium traffic (two PSTN calls per user per hour), figure 10 users per port.
For heavy traffic (three or more PSTN calls per user per hour), figure 5 users per port.

The number of ports in turn determines the number of Mediation Servers and gateways that will be required. The PSTN gateways that most organizations consider deploying range in size from 2 to as many as 960 ports. (There are even larger gateways, but these are used mainly by telephony service providers.) For example, an organization with 10,000 users and medium traffic would require 1,000 ports. The number of gateways required is the number needed to supply that many ports, as determined by the capacity of the gateways you choose.

Deployment Guidelines for Mediation Server

This topic describes some planning guidelines for Mediation Server deployment. After reviewing these guidelines, we recommend that you use the Planning Tool to create and view possible alternative topologies, which can serve as models for the final tailored topology you decide to deploy. For details about how to access and use the Planning Tool, see Using the Lync Server 2010 Planning Tool to Plan for Enterprise Voice.

Collocated or Stand-alone Mediation Servers?



Mediation Server is by default collocated on the Standard Edition server, or on the Front End Servers in a Front End pool, at central sites. The number of PSTN calls that can be handled and the number of machines required in the pool depend on the following:
The number of gateway peers that the Mediation Server pool controls
The busy hour traffic through those gateways
The percentage of calls that bypass the Mediation Server

When planning, ensure that after you account for the processing requirements of non-media-bypass PSTN calls and of the A/V Conferencing Server (if it is collocated, and therefore running on Front End Servers in the same pool), enough processing capacity remains to handle signaling interactions for the number of busy hour calls that must be supported. (Allow at least 30% of CPU for this.) If there is not enough CPU, you must deploy a stand-alone pool of Mediation Servers, and PSTN gateways, IP-PBXs, and SBCs will need to be split into subsets that are controlled by the collocated Mediation Servers in one pool and by the stand-alone Mediation Servers in the second, stand-alone pool.

If you deployed PSTN gateways, IP-PBXs, or SBCs that do not support the following capabilities for interacting with a pool of Mediation Servers, they will need to be associated with a stand-alone pool consisting of a single Mediation Server:
Perform network layer DNS load balancing across Mediation Servers in a pool (or otherwise route traffic uniformly to all Mediation Servers in a pool)
Accept traffic from any Mediation Server in a pool

Note: We also recommend collocating Mediation Servers with a Front End pool when an IP-PBX does not support media bypass but the Front End pool that is hosting the Mediation Server can handle voice transcoding for the calls to which media bypass does not apply. You can use the Microsoft Lync Server 2010 Planning Tool to evaluate whether the Front End pool where you want to collocate the Mediation Server can handle the load. If your environment cannot meet these requirements, you must deploy a stand-alone Mediation Server pool.

Central Site and Branch Site Considerations

Mediation Servers at the central site can be used to route calls for IP-PBXs or PSTN gateways at branch sites. If you deploy SIP trunks, however, you must deploy a Mediation Server at the site where each trunk terminates. Having a Mediation Server at the central site route calls for an IP-PBX or PSTN gateway at a branch site does not require the use of media bypass. However, if you can enable media bypass, doing so will reduce media path latency and, consequently, improve media quality, because the media path is no longer required to follow the signaling path. Media bypass will also decrease the processing load on the pool.

Note: Media bypass will not interoperate with every PSTN gateway, IP-PBX, and SBC. Microsoft has tested a set of PSTN gateways with certified partners and has done some testing with Cisco IP-PBXs. Certification for SBCs is underway. Media bypass is supported only with products and versions listed on Unified Communications Open Interoperability Program - Lync Server at http://go.microsoft.com/fwlink/?LinkId=214406.



If branch site resiliency is required, a Survivable Branch Appliance, or a combination of a Front End Server, a Mediation Server, and a gateway, must be deployed at the branch site. (The assumption with branch site resiliency is that presence and conferencing are not resilient at the site.) For guidance on branch site planning for voice, see Planning for Branch-Site Voice Resiliency.

For interactions with an IP-PBX, if the IP-PBX does not correctly support early media interactions with multiple early dialogs and RFC 3960 interactions, there can be clipping of the first "Hello" for incoming calls from the IP-PBX to Lync Server 2010 endpoints. This behavior can be more severe if a Mediation Server at a central site is routing calls for an IP-PBX where the route terminates at a branch site, because more time is needed for signaling to complete. If you experience this behavior, deploying a Mediation Server at the branch site is the only way to reduce clipping of the first "Hello".

Finally, if your central site has a TDM PBX, or if your IP-PBX does not eliminate the need for a PSTN gateway, you must deploy a gateway on the call route connecting the Mediation Server and the PBX.

Scenario-Based Capacity Planning


To ensure optimal performance of Microsoft Lync Server 2010, you must provision and deploy an adequate level of hardware resources; however, to maximize the return on your hardware investments, you don't want to provision more hardware than you need. This section provides guidance for hardware allocation based on your analysis of your organization's needs. You use your organization's number of users, user profiles, and deployed workloads to determine the necessary CPU clock speed, server memory, and network bandwidth to and from your servers. The results are applicable to both physical and virtual topologies. The information in this section will be especially helpful if your user model or server hardware differs from what is described in Lync Server 2010 User Models.

The following sections detail the resources needed for each modality of Lync Server 2010, followed by an example scenario that illustrates how to use these numbers.

All the performance costs in the following tables assume as a baseline that each server has dual quad-core processors with a clock speed per core of 2.33 GHz. This yields 2,333 megacycles per processor core, or 18,664 megacycles per server. If your servers have different processors, you can adjust the figures accordingly. For details, see Adjusting for Your Processors later in this topic.

Understanding the Results Tables and Formulas

Each of the following sections contains a table showing the results of Microsoft performance testing, with the following figures:

CPU requirements in megacycles shows the megacycles needed for this workload for this total number of users in the pool, or total number of concurrent calls or sessions in the pool. For example, in the Conferences, Web Conferencing table, the number 1,004 in the 400 row means 400 concurrent data conferencing users in the pool will require a total of 1,004 megacycles combined from the servers in the pool.

CPU requirements as a percentage of a Front End Server shows the megacycles needed for this task as a percentage CPU load, as if the entire load were being handled by one server of the same specifications as those used in Microsoft testing.

Memory requirements shows the total memory needed for this workload for this total number of concurrent users or calls in the pool. For example, in the Conferences, Web Conferencing table, the number 1.5 GB in the 400 row means 400 concurrent data conferencing users in the pool will require a total of 1.5 GB of memory combined from the servers in the pool. This number does not include the memory needed for base system requirements.

Network bandwidth shows the total necessary network bandwidth to and from your servers, for this total number of concurrent users or calls of this type in the pool. This is only for traffic to and from the server; peer-to-peer bandwidth is not accounted for.

Following each table are formulas that you can use to calculate the resources you need, given your number of users, user model, and workload. These formulas were created using trend analysis of the performance test results. Not every data point from the testing falls exactly on the trend line, so if you use a formula with the number of users or calls from one of the rows of the test results table, the calculated performance numbers may not match the table results precisely.

Determining the CPU needs for your pool includes the following calculations:

For each workload, you first compute the test server CPU% cost. This is the CPU cost expressed as a percentage of the entire CPU capability of one of the servers used in Microsoft performance testing. These servers have 8 cores with 2.33 GHz per core. If you are using servers with the same SPECint rate, you can simply use the test server CPU% cost to determine how many servers you will need.

Add the test server CPU% costs for each workload to find the total test server CPU% cost that you need from your entire pool. Then deploy enough servers in the pool to ensure that each server can run at 70% capacity or less. For example, if you determine a total test server CPU% cost of 260%, you would want to deploy four servers in the pool.

If you are using servers with a different capability, use the megacycles formula for each workload, which translates the test server CPU% cost to megacycles. For an example of using these calculations, see Example of Calculating Needed Resources later in this section.

Testing Assumptions

The testing, results, and formulas in this section assume the following:

All servers are in one site.

Mediation Server is collocated with Front End Server. If you deploy a stand-alone Mediation Server, you cannot subtract the entire Enterprise Voice load from the Front End Server, because some signaling traffic is still processed by the Front End Server.

For issues and usage not called out in this section, it is assumed that your organization follows the usage in Lync Server 2010 User Models. For example, multiple points of presence (MPOP) is assumed to be 1:1.5.
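The sizing steps above can be sketched in a few lines of Python (the function name and list input are illustrative, not part of the product):

```python
import math

def servers_needed(workload_cpu_percents, headroom=0.70):
    """Number of pool servers so that each runs at or below the headroom
    fraction of CPU, given per-workload test server CPU% costs."""
    total_load = sum(workload_cpu_percents) / 100  # e.g. 260% -> 2.6 servers' worth
    return math.ceil(total_load / headroom)

# The example from the text: a total test server CPU% cost of 260%
# calls for four servers (2.6 / 0.7 = 3.71, rounded up).
print(servers_needed([260]))  # 4
```

The same function accepts per-workload costs directly, for example servers_needed([30, 14, 42, 114]) for four workloads.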


IM and Presence

This table shows the test results for the IM and presence workload. The numbers here include distribution list expansion and presence photograph retrieval.

Number of users | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server* | Memory requirements** | Network bandwidth
5,000 | 1,043 | 5.6% | 1.1 GB | 7.5 Mbps
10,000 | 1,736 | 9.3% | 1.6 GB | 14.8 Mbps
15,000 | 2,556 | 13.7% | 2.18 GB | 22.6 Mbps
20,000 | 3,528 | 18.9% | 2.33 GB | 38.3 Mbps
25,000 | 4,423 | 23.7% | 2.43 GB | 52.8 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.
** Add 7 GB per server for base system memory requirements.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of users) * 0.001
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = 7 GB base + ((number of users) * 0.0000678)
Required network in Mbps = ((number of users ^ 2) * 0.0000000637) + (0.000369 * number of users) + 4.15

Address Book Web Query

This table shows the test results for the resource usage of address book web query.
Number of users | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server* | Memory requirements | Network bandwidth
5,000 | 646 | 3.46% | 0.265 GB | 3.3 Mbps
10,000 | 974 | 5.22% | 0.268 GB | 6.4 Mbps
15,000 | 1,312 | 7.03% | 0.263 GB | 9.5 Mbps
20,000 | 1,631 | 8.74% | 0.263 GB | 13.8 Mbps
25,000 | 1,984 | 10.63% | 0.265 GB | 18.3 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = ((number of users) * 0.0004) + 2.0


Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = 0.300
Required network in Mbps = 0.00075 * number of users
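A minimal sketch of these address book formulas in Python (the function name is ours; as noted earlier, trend-line outputs will not match every table row exactly):

```python
def address_book_web_query(num_users):
    """Apply the address book web query trend formulas."""
    cpu_percent = (num_users * 0.0004) + 2.0
    megacycles = (cpu_percent / 100) * 2333 * 8
    memory_gb = 0.300
    network_mbps = 0.00075 * num_users
    return cpu_percent, megacycles, memory_gb, network_mbps

cpu, cycles, mem, net = address_book_web_query(20000)
print(round(cpu, 2))  # 10.0
print(round(net, 2))  # 15.0
```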

Group IM Conferences

This table shows the test results for the resource usage of group IM conferences.

Number of concurrent group IM users* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server** | Memory requirements | Network bandwidth
100 | 401 | 2.15% | 0.08 GB | 1.22 Mbps
200 | 358 | 1.92% | 0.15 GB | 1.90 Mbps
300 | 416 | 2.33% | 0.23 GB | 2.42 Mbps
400 | 467 | 2.5% | 0.30 GB | 3.00 Mbps
500 | 538 | 2.88% | 0.40 GB | 3.38 Mbps

* Be sure to account for other conference users. The Lync Server 2010 user model assumes that 50% of audio conferences include group IM. When you project the number of group IM users, include an appropriate percentage of conference users, based on the usage in your organization.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent group IM users * 0.001) + 2.0
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = 0.0008 * number of concurrent group IM users
Required network in Mbps = (number of concurrent group IM users * 0.0054) + 0.76
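The group IM formulas above, as a small Python sketch (names are illustrative):

```python
def group_im_resources(concurrent_group_im_users):
    """Apply the group IM conference trend formulas."""
    cpu_percent = (concurrent_group_im_users * 0.001) + 2.0
    megacycles = (cpu_percent / 100) * 2333 * 8
    memory_gb = 0.0008 * concurrent_group_im_users
    network_mbps = (concurrent_group_im_users * 0.0054) + 0.76
    return cpu_percent, megacycles, memory_gb, network_mbps

cpu, cycles, mem, net = group_im_resources(500)
print(round(cpu, 2))  # 2.5
print(round(mem, 3))  # 0.4
```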

Conferences, Web Conferencing

This table shows the resource usage for the web conferencing portion of your conferences.

Number of concurrent web conferencing users* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server** | Memory requirements | Network bandwidth
100 | 444 | 2.38% | 0.4 GB | 24.55 Mbps
200 | 659 | 3.53% | 0.4 GB | 49.23 Mbps
300 | 845 | 4.53% | 1.5 GB | 66.58 Mbps
400 | 1,004 | 5.38% | 1.5 GB | 81.34 Mbps
500 | 1,191 | 6.38% | 2.2 GB | 90.06 Mbps

* The number of web conferencing users can usually be determined as a percentage of the conferencing users. The Lync Server 2010 user model assumes that 20% of conferences include web conferencing (such as the Microsoft PowerPoint presentation graphics program or whiteboarding). You can adjust this percentage to match your own organization's users, to determine what number of concurrent data conferencing users you will have.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent data conferencing users * 0.01) + 1.5
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = 0.0047 * (number of concurrent data conferencing users)
Required network in Mbps = (number of concurrent data conferencing users * 0.163) + 13.4

PSTN Conferences

This table shows the resource usage for the PSTN users participating in your conferences through the Conferencing Attendant application.
Number of concurrent PSTN conferencing users | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server* | Memory requirements | Network bandwidth
50 | 373 | 2.0% | 0.47 GB | 1.0 Mbps
100 | 560 | 3.0% | 0.59 GB | 2.1 Mbps
150 | 560 | 3.0% | 0.71 GB | 3.2 Mbps
200 | 933 | 5.00% | 0.83 GB | 4.4 Mbps
250 | 1,680 | 9.00% | 1.01 GB | 5.6 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent PSTN conference callers * 0.033) + (number of concurrent PSTN conference callers * 0.0918)
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = ((2.64 * number of concurrent PSTN conference callers) + 326) / 1000
Required network in Mbps = (0.023 * number of concurrent PSTN conference callers) - 0.19

Note that when calculating the CPU cost, you use the number of concurrent PSTN callers twice, to account for each caller being both a conference participant and a UC-PSTN caller.

Conferences, Application Sharing

This table shows the resource usage for the application sharing portion of your conferences.
Number of concurrent application sharing users* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server** | Memory requirements | Network bandwidth
100 | 1,680 | 9.0% | 2.7 GB | 82.0 Mbps
200 | 3,098 | 16.60% | 2.8 GB | 130.8 Mbps
300 | 4,324 | 23.17% | 2.9 GB | 152.2 Mbps
400 | 5,192 | 27.82% | 3.3 GB | 184.13 Mbps

* The number of application sharing conferencing users can usually be determined as a percentage of the audio conferencing users. The Lync Server 2010 user model assumes that 50% of audio conferences include application sharing. You can adjust this percentage to match your own organization's users, to determine what number of concurrent application sharing users you will have.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent application sharing conferencing users * 0.071) + 2.5
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = (0.0019 * number of concurrent application sharing conferencing users) + 2.45
Required network in Mbps = (number of concurrent application sharing conferencing users * 0.33) + 54

Conferences, Audio Conferencing

Audio conferencing is handled by the Audio/Video (A/V) Conferencing service, so this load does not affect the Front End Servers unless you have the A/V Conferencing service collocated with the Front End Server. For optimal performance, we recommend that you deploy A/V Conferencing Server separately from Front End Server. Collocating A/V Conferencing Server with Front End Server is supported if you have fewer than 10,000 users.

This table shows the resource usage for the audio portion of your conferences. It assumes the model of 85% of audio conferences including four users, 10% including six users, and 5% including ten users. Note that in all the conferencing tables, the number of concurrent users refers to the overall number of users participating in all current conferences. The limit for users in a single conference is 250.

Note: 250 is the maximum for shared pool deployments, based on Microsoft testing. For information about supporting meetings with more than 250 participants, see "Microsoft Lync Server 2010 Support for Large Meetings" at http://go.microsoft.com/fwlink/?LinkId=242073.
Number of concurrent audio conferencing users | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server or A/V Conferencing Server* | Memory requirements** | Network bandwidth
200 | 2,463 | 13.2% | 0.42 GB | 29.33 Mbps
400 | 4,759 | 25.5% | 0.73 GB | 58.02 Mbps
600 | 6,906 | 37.0% | 1.0 GB | 86.98 Mbps
800 | 8,884 | 47.6% | 1.29 GB | 115.74 Mbps
1000 | 11,814 | 63.3% | 1.6 GB | 144.84 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.
** On A/V Conferencing Server, add 7 GB per server for base system memory requirements.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent audio conferencing users) * 0.062
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = (number of concurrent audio conferencing users * 0.00146) + 0.132
Required network in Mbps = (number of concurrent audio conferencing users * 0.14435) + 0.36

Conferences, Video Conferencing

This table shows the resource usage for the video portion of your conferences. It assumes the model of 70% of video conferences using CIF and 30% using VGA. Video conferencing is handled by the A/V Conferencing service, so this load does not affect the Front End Servers unless you have the A/V Conferencing service collocated with Front End Server.

Number of concurrent video conferencing users* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server or A/V Conferencing Server** | Memory requirements | Network bandwidth
40 | 672 | 3.6% | 0.03 GB | 18.19 Mbps
80 | 1,288 | 6.9% | 0.03 GB | 29.86 Mbps
120 | 1,792 | 9.6% | 0.03 GB | 50.39 Mbps
160 | 2,277 | 12.2% | 0.03 GB | 63.04 Mbps
200 | 3,023 | 16.2% | 0.03 GB | 80.00 Mbps

* The number of video conferencing users can usually be determined as a percentage of the audio conferencing users. The Lync Server 2010 user model assumes that 20% of audio conferences include video conferencing. You can adjust this percentage to match your own organization's users, to determine what number of concurrent video conferencing users you will have.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent video conferencing users) * 0.07625
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = 0.03
Required network in Mbps = (number of concurrent video conferencing users * 0.3925) + 1.25

Enterprise Voice, UC-UC Calls

This table shows the resource usage for unified communications to unified communications (UC-UC) calls using Enterprise Voice.
Number of concurrent calls* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server** | Memory requirements | Network bandwidth
200 | 499 | 2.67% | 1.36 GB | 2.80 Mbps
400 | 721 | 3.86% | 1.6 GB | 5.04 Mbps
600 | 974 | 5.22% | 1.75 GB | 7.57 Mbps
800 | 1,212 | 6.5% | 1.95 GB | 9.85 Mbps
1000 | 1,458 | 7.8% | 2.11 GB | 12.16 Mbps

* The Lync Server 2010 user model assumes four calls per user per hour and an average call duration of three minutes. The four calls per hour in the user model include both UC-UC calls and UC-PSTN calls. You can get the most accurate numbers if you know what percentage of your users' Enterprise Voice calls are UC-UC and what percentage are UC-PSTN. The user model assumes that 60% of calls are UC-PSTN, and 40% are UC-UC. In the following calculations you can use your actual numbers for calls per hour and duration, if they differ.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Number of concurrent calls = number of users * average UC-UC calls per user per hour * duration in minutes / 60
Test server CPU% cost = number of concurrent calls * 0.007
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = (0.00093 * number of concurrent calls) + 1.19
Required network in Mbps = (number of concurrent calls * 0.01175) + 0.43
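The UC-UC formulas, including the concurrent-call conversion, can be sketched as follows (the defaults encode the user model assumption of 4 calls per user per hour with 40% UC-UC; the function name is ours):

```python
def uc_uc_resources(num_users, uc_uc_calls_per_user_per_hour=1.6, call_minutes=3):
    """Apply the UC-UC call trend formulas. The default rate is the
    user model's 4 calls/user/hour with 40% UC-UC (4 * 0.4 = 1.6)."""
    concurrent_calls = num_users * uc_uc_calls_per_user_per_hour * call_minutes / 60
    cpu_percent = concurrent_calls * 0.007
    megacycles = (cpu_percent / 100) * 2333 * 8
    memory_gb = (0.00093 * concurrent_calls) + 1.19
    network_mbps = (concurrent_calls * 0.01175) + 0.43
    return concurrent_calls, cpu_percent, megacycles, memory_gb, network_mbps

calls, cpu, cycles, mem, net = uc_uc_resources(10000)
print(calls)          # 800.0
print(round(cpu, 2))  # 5.6
```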

Enterprise Voice, UC-PSTN Calls

This table shows the resource usage for unified communications to PSTN (UC-PSTN) calls using Enterprise Voice.

Number of concurrent calls* | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server** | Memory requirements | Network bandwidth
200 | 1,456 | 7.8% | 0.28 GB | 19.56 Mbps
400 | 3,789 | 20.3% | 0.43 GB | 38.65 Mbps
600 | 5,226 | 28% | 0.6 GB | 57.52 Mbps
800 | 6,924 | 37.1% | 0.77 GB | 76.52 Mbps
1000 | 8,455 | 45.3% | 0.89 GB | 95.39 Mbps

* The Lync Server 2010 user model assumes four calls per user per hour and an average call duration of three minutes. The four calls per hour in the user model include both UC-UC calls and UC-PSTN calls. You can get the most accurate numbers if you know what percentage of your users' Enterprise Voice calls are UC-UC and what percentage are UC-PSTN. The user model assumes that 60% of calls are UC-PSTN, and 40% are UC-UC. In the following calculations you can use your actual numbers for calls per hour and duration, if they differ.
** Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Number of concurrent calls = number of users * average UC-PSTN calls per user per hour * duration in minutes / 60


Test server CPU% cost = (number of concurrent calls * 0.007) + (0.0918 * (1 - % of calls that use media bypass) * number of concurrent calls)
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = (0.00156 * number of concurrent calls) + 0.126
Required network in Mbps = (number of concurrent calls * 0.19) * (1 - % of calls that use media bypass)

Response Group Service

This table shows the resource usage of the Lync Server Response Group service. Note that Response Group does not support more than 1,200 agents per pool.
Number of concurrent Response Group calls | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server* | Memory requirements | Network bandwidth
50 | 1,680 | 9% | 1.2 GB | 0.245 Mbps
60 | 1,680 | 9% | 1.3 GB | 0.315 Mbps
70 | 1,866 | 10% | 1.3 GB | 0.355 Mbps
80 | 2,053 | 11% | 1.3 GB | 0.40 Mbps
90 | 2,240 | 12% | 1.4 GB | 0.46 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent Response Group calls * 0.0192) + 7.48
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = (0.0008 * number of concurrent Response Group calls) + 1.18
Required network in Mbps = number of concurrent Response Group calls * 0.005
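A small sketch of the Response Group formulas (names are ours):

```python
def response_group_resources(concurrent_calls):
    """Apply the Response Group service trend formulas."""
    cpu_percent = (concurrent_calls * 0.0192) + 7.48
    megacycles = (cpu_percent / 100) * 2333 * 8
    memory_gb = (0.0008 * concurrent_calls) + 1.18
    network_mbps = concurrent_calls * 0.005
    return cpu_percent, megacycles, memory_gb, network_mbps

cpu, cycles, mem, net = response_group_resources(90)
print(round(cpu, 3))  # 9.208
print(round(mem, 3))  # 1.252
```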

Call Park Service

This table shows the resource usage of the Lync Server Call Park service.

Number of concurrent calls | CPU requirements in megacycles | CPU requirements as a percentage of a Front End Server* | Memory requirements | Network bandwidth
1 | 186 | 1% | 0.130 GB | 0.100 Mbps
25 | 186 | 1% | 0.165 GB | 0.280 Mbps
50 | 373 | 2% | 0.200 GB | 0.550 Mbps
75 | 560 | 3% | 0.235 GB | 0.780 Mbps
100 | 746 | 4% | 0.270 GB | 1.00 Mbps

* Based on the Lync Server baseline processor with eight cores, each with 2,333 megacycles.

From these test results, we use trend analysis to yield the following guidelines:

Test server CPU% cost = (number of concurrent calls * 0.04) + 0.055
Megacycles needed = (Test server CPU% cost/100) * 2333 * 8
Required memory in GB = ((1.4 * number of concurrent calls) + 129) / 1000
Required network in Mbps = (number of concurrent calls * 0.00956) + 0.055
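A sketch of the Call Park CPU and network formulas above (the memory formula is omitted here; names are ours):

```python
def call_park_resources(concurrent_parked_calls):
    """Apply the Call Park CPU and network trend formulas."""
    cpu_percent = (concurrent_parked_calls * 0.04) + 0.055
    megacycles = (cpu_percent / 100) * 2333 * 8
    network_mbps = (concurrent_parked_calls * 0.00956) + 0.055
    return cpu_percent, megacycles, network_mbps

cpu, cycles, net = call_park_resources(100)
print(round(cpu, 3))  # 4.055
print(round(net, 3))  # 1.011
```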

Adjusting for Your Processors

All the CPU % performance costs in this section assume a baseline of each server having dual quad-core processors at 2.33 GHz. This yields 2,333 megacycles per processor core, or 18,664 megacycles per server. If your servers have different processors, you can adjust the figures to your hardware. The SPECint processor benchmark for the processors used in these tests is 186 total for the eight cores, or 23.25 per core. To calculate the equivalent processor cycles for your servers, do the following:

1. Use a web browser to open www.spec.org.
2. Pass the mouse cursor over Results, then CPU2006, and then click CPU2006 Results.
3. In Available Configurations, select SPECint2006 Rates and click Go.
4. In the Simple Request area, select search criteria that will help you find your processor, and then click Execute Simple Fetch.
5. Find the server and processor you have deployed, and look at the number in the Result column.
6. Divide this value by the number of cores in the server to get the per-core value. For example, if the Result number is 240 on an eight-core server, the per-core value is 30.
7. Use the following formula to determine the per-core megacycles for the server: (your processor's per-core value) * 2,333 / 23.25
8. Multiply the result by the number of cores in the server to get the total number of megacycles per server. This compares to the 18,664 megacycles for the baseline server used to produce the numbers in the previous sections of this topic.
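Steps 6 through 8 reduce to a short calculation; this sketch (names ours) reproduces both the baseline server and an eight-core server with a SPECint_rate result of 258:

```python
BASELINE_PER_CORE_SPECINT = 23.25     # 186 SPECint_rate across 8 cores
BASELINE_PER_CORE_MEGACYCLES = 2333

def server_megacycles(specint_rate_result, num_cores):
    """Convert a SPECint_rate2006 result into equivalent megacycles."""
    per_core_value = specint_rate_result / num_cores
    per_core_megacycles = (per_core_value * BASELINE_PER_CORE_MEGACYCLES
                           / BASELINE_PER_CORE_SPECINT)
    return per_core_megacycles * num_cores

# Baseline server: 186 across 8 cores -> 18,664 megacycles
print(int(server_megacycles(186, 8)))  # 18664
# An example server with a result of 258 across 8 cores
print(int(server_megacycles(258, 8)))  # 25888
```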


Example of Calculating Needed Resources

The following example shows how you can calculate your resource needs if your organization's use of Lync Server 2010 differs from that in the Lync Server 2010 User Models. In this example, the organization's use is significantly higher than in the user model:

30,000 users, 100% enabled for Enterprise Voice (instead of 50% of users being voice-enabled, as in the user model). Mediation Server is collocated with Front End Server, and 80% of UC-PSTN calls use media bypass.

An average of 7.5% of users are concurrently in conferences (instead of the 5% in the user model), giving us 2,250 concurrent users in conferences. Other conferencing usage follows the Lync Server 2010 user model.

Enterprise Voice usage is heavier than in the user model, with a busy hour average of five calls per hour lasting an average of three minutes (the user model is four calls per hour at busy hour). Following the user model ratio, three of those five calls will be UC-PSTN, and two will be UC-UC.

We calculate our CPU needs for the Front End Server as follows:

Front End Server workload | Test server CPU% cost | Megacycles needed
IM and presence | 30,000 users * 0.001 = 30 | (30/100) * 2,333 * 8 = 5,599
Address book web query | (30,000 users * 0.0004) + 2 = 14 | (14/100) * 2,333 * 8 = 2,613
Group IM (50% of conferences include group IM) | (1,125 users * 0.001) + 2 = 3.125 | (3.125/100) * 2,333 * 8 = 583
Web conferencing (75% of conferences include Enterprise Voice, and 20% of those include web conferencing) | (337 users * 0.01) + 1.5 = 4.87 | (4.87/100) * 2,333 * 8 = 909
PSTN conferencing (15% of conference attendees dial in from PSTN phones) | (338 users * 0.033) + (338 users * 0.0918) = 42.18 | (42.18/100) * 2,333 * 8 = 7,872
Application sharing (75% of conferences include Enterprise Voice, and 50% of those include application sharing) | (843 users * 0.071) + 2.5 = 62.353 | (62.353/100) * 2,333 * 8 = 11,638
Enterprise Voice, UC-UC calls (30,000 users * 2 calls * 3 minutes / 60 = 3,000 concurrent calls) | 3,000 calls * 0.007 = 21 | (21/100) * 2,333 * 8 = 3,919
Enterprise Voice, UC-PSTN calls (30,000 users * 3 calls * 3 minutes / 60 = 4,500 concurrent calls) | (4,500 calls * 0.007) + (4,500 calls * 0.0918 * (1 - 0.8)) = 114.12 | (114.12/100) * 2,333 * 8 = 21,299

54,432 total megacycles needed on Front End Servers.

On the Front End Servers, the total CPU requirement for our heavily used deployment is 54,432 megacycles. For this example, suppose we are deploying servers with a SPECint result of 258 for 8 cores, which averages out to 32.25 per core. Using the calculations in the previous section, we see that these servers have 25,888 megacycles each. To determine the number of these servers we need, divide the number of needed megacycles (54,432) by the number of megacycles per server (25,888 in this example). Then divide this result by 0.7, to ensure that each server runs at no more than 70% of CPU capacity. Take this final result and round it up to the next whole number. In this example, (54,432 / 25,888) / 0.7 = 3.004, so we need four of these servers. The four servers provide us with a total of 103,552 megacycles, and 54,432 is about 53% of that, so our four servers should be running at about 53% of CPU capacity at peak times.

The following table and calculations determine the A/V Conferencing Server needs in the example scenario.
A/V Conferencing Server workload | Test server CPU% cost | Megacycles needed
Audio conferencing (75% of conferences include Enterprise Voice audio) | 1,688 users * 0.062 = 104.625 | (104.625/100) * 2,333 * 8 = 19,527
Video conferencing (75% of conferences include Enterprise Voice audio, and 20% of those include video) | 338 users * 0.07625 = 25.77 | (25.77/100) * 2,333 * 8 = 4,810

24,337 total megacycles needed on A/V Conferencing Servers. We can deploy two of our servers, at 25,888 megacycles each, and run A/V Conferencing Server at about 47% CPU on each.

You can perform similar calculations for the memory and network bandwidth needed for your projected workload as well. For workloads or scenarios in which you think your organization has typical usage patterns, refer to Lync Server 2010 User Models to see the user models tested by Microsoft.
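The whole worked example can be reproduced from the trend formulas; this sketch (variable names and layout are ours) recomputes the Front End CPU% costs, converts them to megacycles, and applies the 70% rule against the 25,888-megacycle example server:

```python
import math

MEGACYCLES_PER_CPU_PERCENT = 2333 * 8 / 100  # baseline test server

users = 30000
conf_users = round(users * 0.075)      # 7.5% concurrently in conferences -> 2,250
uc_uc_calls = users * 2 * 3 // 60      # 2 UC-UC calls/user/hour, 3 minutes each
uc_pstn_calls = users * 3 * 3 // 60    # 3 UC-PSTN calls/user/hour, 3 minutes each
bypass = 0.8                           # share of UC-PSTN calls using media bypass

front_end_cpu_percent = {
    "IM and presence": users * 0.001,
    "Address book web query": (users * 0.0004) + 2,
    "Group IM": (conf_users * 0.5 * 0.001) + 2,
    "Web conferencing": (conf_users * 0.75 * 0.2 * 0.01) + 1.5,
    "PSTN conferencing": (conf_users * 0.15) * (0.033 + 0.0918),
    "Application sharing": (conf_users * 0.75 * 0.5 * 0.071) + 2.5,
    "UC-UC calls": uc_uc_calls * 0.007,
    "UC-PSTN calls": (uc_pstn_calls * 0.007)
                     + (uc_pstn_calls * 0.0918 * (1 - bypass)),
}

total_megacycles = sum(front_end_cpu_percent.values()) * MEGACYCLES_PER_CPU_PERCENT
per_server_megacycles = 25888          # the example server (SPECint 258, 8 cores)
servers = math.ceil(total_megacycles / per_server_megacycles / 0.7)

print(round(total_megacycles))         # about 54,400
print(servers)                         # 4
```

Using exact fractions rather than the rounded intermediate values in the table shifts the total by a few megacycles, but the conclusion (four Front End Servers) is the same.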

Media Traffic Network Usage


The media traffic bandwidth usage can be challenging to calculate because of the number of different variables, such as codec usage, resolution, and activity levels. The bandwidth usage is a function of the codec used and the activity of the stream, both of which vary between scenarios. The following table lists the audio codecs commonly used in Microsoft Lync Server 2010 communications software scenarios.

Audio Codec Bandwidth

Audio codec | Scenarios | Audio payload bitrate (Kbps) | Bandwidth: audio payload and IP header only (Kbps) | Bandwidth: audio payload, IP header, UDP, RTP and SRTP (Kbps) | Bandwidth: audio payload, IP header, UDP, RTP, SRTP and forward error correction (Kbps)
RTAudio Wideband | Peer-to-peer | 29.0 | 45.0 | 57.0 | 86.0
RTAudio Narrowband | Peer-to-peer, PSTN | 11.8 | 27.8 | 39.8 | 51.6
G.722 | Conferencing | 64.0 | 80.0 | 95.6 | 159.6
G.711 | PSTN | 64.0 | 80.0 | 92.0 | 156.0
Siren | Conferencing | 16.0 | 32.0 | 47.6 | 63.6

The bandwidth numbers in the previous table are based on 20 ms packetization (50 packets per second); for Siren and G.722 they include the additional secure real-time transport protocol (SRTP) overhead from conferencing scenarios and assume the stream is 100% active. Forward error correction (FEC) is used dynamically when there is packet loss on the link, to help maintain the quality of the audio stream.

For video, the codec is always RTVideo. The bandwidth required depends on the resolution, quality, and frame rate. For each resolution, there are two interesting bitrates:

Maximum payload bitrate: The bitrate that a Lync 2010 endpoint will use for this resolution at the maximum frame rate supported for this resolution. This value is interesting because it allows the highest quality and frame rate video.

Minimum payload bitrate: The bitrate that a Lync 2010 endpoint will use for this resolution at approximately 1 frame per second. This value is interesting so that you can understand the lowest value possible in cases where the maximum bitrate is not available or practical. For some users, 1 frame per second video might be considered an unacceptable video experience, so use caution when considering these bitrates.

Video Resolution Bandwidth
Video codec Resolution Maximum video payload bitrate (Kbps) Minimum video payload bitrate (Kbps)

RTVideo RTVideo RTVideo RTVideo

Main Video CIF Main Video VGA Main Video HD Panoramic Video

250 600 1500 350

50 350 800 50

Video FEC is included in the video payload bitrate when it is used, so there are not separate values with and without video FEC.

Endpoints do not stream audio or video packets continuously. Depending on the scenario, there are different levels of stream activity, which indicate how often packets are sent for a stream. The activity of a stream depends on the media and the scenario, and does not depend on the codec being used.

In a peer-to-peer scenario:
Endpoints send audio streams only when the users speak.
Both participants receive audio streams.
If video is used, both endpoints send and receive video streams during the entire call.

In a conferencing scenario:
Endpoints send audio streams only when the users speak.
All participants receive audio streams.
If video is used, only two endpoints send a video stream at a time (the active speaker and the previous active speaker).
If video is used, all participants receive video streams.

The following table shows stream activity levels based on measurements of customer data.

Stream Activity Levels

Scenario | Media | Estimated stream activity (%)
Peer-to-peer sessions | Audio | 61
Peer-to-peer sessions | Main video CIF | 84
Peer-to-peer sessions | Main video VGA | 83
Peer-to-peer sessions | Main video HD | 80
Peer-to-peer sessions | Panoramic video | 74
Conferencing | Audio | 43
Conferencing | Main video CIF | 84
Conferencing | Main video VGA | 83
Conferencing | Main video HD | 80
Conferencing | Panoramic video | 74
PSTN | Audio | 65

In addition to the bandwidth required for the real-time transport protocol (RTP) traffic for audio and video media, bandwidth is required for the real-time transport control protocol (RTCP). RTCP is used for reporting statistics and out-of-band control of the RTP stream. For planning, use the bandwidth numbers in the following table for RTCP traffic. These values represent the maximum bandwidth used for RTCP, and differ between audio and video streams because of differences in the control data.

RTCP Bandwidth

Media | RTCP maximum bandwidth (Kbps)
Audio | 5
Video | 10

For capacity planning purposes, the following two bandwidths are of interest:

Maximum bandwidth without FEC: The maximum bandwidth that a stream will consume, including the typical activity of the stream and the typical codec used in the scenario, without FEC. This is the bandwidth when the stream is at 100% activity and there is no packet loss triggering the use of FEC. It is useful for computing how much bandwidth must be allocated to allow the codec to be used in a given scenario.

Maximum bandwidth with FEC: The maximum bandwidth that a stream consumes, including the typical activity of the stream and the typical codec used in the scenario, with FEC. This is the bandwidth when the stream is at 100% activity and there is packet loss triggering the use of FEC to improve quality. It is useful for computing how much bandwidth must be allocated to allow the codec to be used in a given scenario while allowing the use of FEC to preserve quality under packet-loss conditions.

The following tables also list an additional bandwidth value, typical bandwidth. This is the average bandwidth that a stream consumes, including the typical activity of the stream and the typical codec used in the scenario. This bandwidth can be used for approximating how much bandwidth is being consumed by media traffic at any given time, but it should not be used for capacity planning, because individual calls will exceed this value when the activity level is higher than average.

The following tables provide these three bandwidth values for the various scenarios.

Audio/Video Capacity Planning for Peer-to-Peer Sessions

Media             Codec                Typical stream bandwidth (Kbps)   Maximum without FEC (Kbps)   Maximum with FEC (Kbps)
Audio             RTAudio Wideband     39.8                              62                           91
Audio             RTAudio Narrowband   29.3                              44.8                         56.6
Main video CIF    RTVideo              220                               260                          Not applicable
Main video VGA    RTVideo              508                               610                          Not applicable
Main video HD     RTVideo              1210                              1510                         Not applicable
Panoramic video   RTVideo              269                               360                          Not applicable

Audio/Video Capacity Planning for Conferences

Media             Typical codec   Typical stream bandwidth (Kbps)   Maximum without FEC (Kbps)   Maximum with FEC (Kbps)
Audio             G.722           46.1                              100.6                        164.6
Audio             Siren           25.5                              52.6                         68.6
Main video CIF    RTVideo         220                               260                          Not applicable
Main video VGA    RTVideo         508                               610                          Not applicable
Panoramic video   RTVideo         269                               360                          Not applicable

Audio Capacity Planning for PSTN

Media   Typical codec        Typical stream bandwidth (Kbps)   Maximum without FEC (Kbps)   Maximum with FEC (Kbps)
Audio   G.711                64.8                              97                           161
Audio   RTAudio Narrowband   30.9                              44.8                         56.6

The network bandwidth numbers in these tables represent one-way traffic only and include 5 Kbps of RTCP overhead for each stream. For video, the maximum video bit rate is used to compute the maximum stream bandwidth.
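The per-stream figures above can be rolled up into a simple link budget. The following sketch is illustrative only (the dictionaries and function name are not part of the product documentation): it sums worst-case one-way bandwidth for a mix of concurrent streams, budgeting each codec at its with-FEC maximum where the tables provide one.

```python
# Illustrative sketch: sum worst-case one-way bandwidth for a mix of
# concurrent streams, using the per-stream maximums (Kbps) from the
# tables above. Each figure already includes 5 Kbps of RTCP overhead.
MAX_KBPS_NO_FEC = {
    "rtaudio_wideband": 62,
    "rtaudio_narrowband": 44.8,
    "g711": 97,
    "main_video_vga": 610,
}
MAX_KBPS_WITH_FEC = {  # the tables give no with-FEC figure for video
    "rtaudio_wideband": 91,
    "rtaudio_narrowband": 56.6,
    "g711": 161,
}

def planned_bandwidth_kbps(streams, allow_fec=True):
    """Worst-case one-way Kbps for {codec: concurrent stream count}.

    With allow_fec=True, codecs that have a with-FEC maximum are
    budgeted at that value; others fall back to the no-FEC maximum.
    """
    total = 0.0
    for codec, count in streams.items():
        if allow_fec and codec in MAX_KBPS_WITH_FEC:
            total += count * MAX_KBPS_WITH_FEC[codec]
        else:
            total += count * MAX_KBPS_NO_FEC[codec]
    return total

# 100 concurrent RTAudio Wideband calls plus 20 VGA video streams:
mix = {"rtaudio_wideband": 100, "main_video_vga": 20}
print(planned_bandwidth_kbps(mix))         # 21300.0 (with FEC headroom)
print(planned_bandwidth_kbps(mix, False))  # 18400.0 (no-FEC budget)
```

Remember that these are one-way figures; double the result for symmetric traffic on the same link.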

Database Activity for Capacity Planning


This section describes the database disk activity for the Microsoft Lync Server 2010 databases.

Front End Pool Databases


The following table lists the peak and average disk activity for the Front End pool databases, with 80,000 users in the pool.

Front End Pool Database Usage

Back End data drive
  Peak usage: Read: 65,536 bytes/read; Write: 193,634.45 bytes/write
  Average usage: Read: 972.12 bytes/read; Write: 3,104.62 bytes/write
  Peak rate: 1.27 reads/second; 3,553.79 writes/second
  Average rate: 0 reads/second; 143.91 writes/second

RTC log
  Peak usage: Read: 61,440 bytes/read; Write: 5,632 bytes/write
  Average usage: Read: 2,106.05 bytes/read; Write: 896.71 bytes/write
  Peak rate: 1.20 reads/second; 161.41 writes/second
  Average rate: 0 reads/second; 45.34 writes/second

RTCdyn log
  Peak usage: Read: 61,440 bytes/read; Write: 10,011.6 bytes/write
  Average usage: Read: 284.86 bytes/read; Write: 4,038.77 bytes/write
  Peak rate: 1.93 reads/second; 499.3 writes/second
  Average rate: 0 reads/second; 330.93 writes/second

Tempdb data and log (see note following table)
  Peak usage: Read: 65,536 bytes/read; Write: 62,863.64 bytes/write
  Average usage: Read: 21,267.43 bytes/read; Write: 59,383.11 bytes/write
  Peak rate: 15.73 reads/second; 166.89 writes/second
  Average rate: 0.31 reads/second; 65.26 writes/second

RtcAb log (during nightly maintenance)
  Peak usage: Read: 0 bytes/read; Write: 58,868.66 bytes/write
  Average usage: Read: 0 bytes/read; Write: 27,530.26 bytes/write
  Peak rate: 0 reads/second; 586.97 writes/second
  Average rate: 1.27 reads/second; 43.77 writes/second
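When sizing the storage subsystem, it can help to convert the table figures into throughput. The following hypothetical helper (the function name is illustrative, not from the documentation) multiplies a peak operation rate by a peak operation size; note the stated assumption that the two peaks may not coincide, so the result is a ceiling rather than a sustained requirement.

```python
# Hypothetical sizing helper: convert peak figures from the table above
# into an upper-bound disk throughput in MB/s. Assumption: peak
# bytes-per-write and peak writes-per-second may not occur at the same
# instant, so this is a ceiling, not a sustained load.
def throughput_mb_per_s(ops_per_second, bytes_per_op):
    return ops_per_second * bytes_per_op / (1024 * 1024)

# Back End data drive, peak write load (80,000-user pool):
peak_write = throughput_mb_per_s(3553.79, 193634.45)
print(round(peak_write, 1))  # roughly 656 MB/s at the theoretical ceiling
```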

Note: For optimal performance, you should divide tempdb into multiple data files of equal size. These files do not necessarily have to be on different disks or spindles unless you are also encountering I/O bottlenecks. The general recommendation is to have one file per CPU, because only one thread is active per CPU at a time. For details about best practices, see http://go.microsoft.com/fwlink/?LinkId=205442.

Content Collaboration Database Usage

Overall I/O rate for content collaboration (10,000 provisioned users)
  Peak usage: Read: 65,408 bytes/read; Write: 158,067 bytes/write
  Average usage: Read: 1,354 bytes/read; Write: 13,902 bytes/write
  Average rate: 0.17 reads/second; 5.914 writes/second

Monitoring Server and Archiving Server Databases


Monitoring Server has separate databases for call detail records (CDRs) and Quality of Experience (QoE) data. With 240,000 users, the average database growth rate is 0.9 GB/hour for the CDR database and 0.8 GB/hour for the QoE database. These numbers assume the Lync Server 2010 user model, where 50% of users are unified communications (UC) enabled. For an Archiving Server supporting 240,000 users, the average database growth rate is 1.4 GB/hour.

The following table lists the peak and average disk activity for the Monitoring Server and Archiving Server databases.
Monitoring Server and Archiving Server Database Usage (240,000 users)

Collocated Monitoring and Archiving databases
  Data drive (CDR data file, QoE data file, and Archiving data file):
    Average: 8,445 bytes/read; 68,823 bytes/write
    Peak: 60,000 bytes/read; 646,323 bytes/write
    Rate: 14.67 reads/second; 709 writes/second
  Log drive (CDR log file and Archiving log file):
    Average: 0 bytes/read; 2,951 bytes/write
    Peak: 0 bytes/read; 4,200 bytes/write
    Rate: 0 reads/second; 678 writes/second
  Log drive (QoE log file):
    Average: 0 bytes/read; 5,378 bytes/write
    Peak: 0 bytes/read; 6,700 bytes/write
    Rate: 0 reads/second; 141 writes/second

Archiving database
  Data drive:
    Average: 1,462 bytes/read; 74,160 bytes/write
    Peak: 14,336 bytes/read; 1,000,000 bytes/write
    Rate: 0.01 reads/second; 172 writes/second
  Log drive:
    Average: 0 bytes/read; 1,770 bytes/write
    Peak: 0 bytes/read; 3,184 bytes/write
    Rate: 0 reads/second; 736 writes/second

Monitoring database
  Data drive (CDR data file and QoE data file):
    Average: 18,271 bytes/read; 47,384 bytes/write
    Peak: 65,405 bytes/read; 1,722,052 bytes/write
    Rate: 145 reads/second; 443 writes/second
  Log drive (CDR log file):
    Average: 0 bytes/read; 3,402 bytes/write
    Peak: 0 bytes/read; 4,982 bytes/write
    Rate: 0 reads/second; 367 writes/second
  Log drive (QoE log file):
    Average: 0 bytes/read; 5,871 bytes/write
    Peak: 0 bytes/read; 8,892 bytes/write
    Rate: 0 reads/second; 118 writes/second
The following table shows the disk usage for the archiving of content collaboration.


Content Collaboration Archiving Database Usage

Content collaboration archiving (10,000 provisioned users)
  Peak usage: Read: 65,425 bytes/read; Write: 106,389 bytes/write
  Average usage: Read: 1,073 bytes/read; Write: 6,843 bytes/write
  Average rate: 0.161 reads/second; 25.491 writes/second

Collaboration and Application Sharing Capacity Planning


The tables in this section provide measurements for bandwidth and disk usage for application sharing and for conferencing content collaboration.

Application Sharing Capacity Planning

Application sharing using Remote Desktop Protocol (RDP)
  Average bandwidth: 434 Kbps sent per sharer
  Maximum bandwidth: 938 Kbps sent per sharer

Application sharing using the Compatibility Conferencing service
  Average bandwidth: 713 Kbps sent per sharer; 552 Kbps received per viewer
  Maximum bandwidth: 566 Kbps sent per sharer; 730 Kbps received per sharer

Application Sharing Capacity Planning for Persistent Shared Object Model (PSOM) Applications

Application sharing usage: 15 conferences, 90 users
  Received: 1,370 KBps (2,728 KBps peak)
  Sent: 6,370 KBps (12,315 KBps peak)
  Processor time (percent): average 8.5; peak 24.4
  Average bandwidth usage per user: 713.57 Kbps sent per sharer; 552.92 Kbps received per viewer

Content Collaboration Capacity Planning


Content type                      Average size   Number of instances per conference
PowerPoint                        40 MB          4
Handouts                          10 MB          3
Total default share per meeting   250 MB         Not applicable
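A quick arithmetic check shows how the average content mix relates to the per-meeting quota. The snippet below is a sketch (the variable names are illustrative): four PowerPoint files at 40 MB plus three handouts at 10 MB come to 190 MB, which fits inside the 250 MB default share.

```python
# Check the average per-meeting content mix against the 250 MB default
# share quota listed in the table above.
AVERAGE_CONTENT_MB = [(40, 4), (10, 3)]  # (average size in MB, instances)
DEFAULT_SHARE_MB = 250

content_mb = sum(size * count for size, count in AVERAGE_CONTENT_MB)
print(content_mb, content_mb <= DEFAULT_SHARE_MB)  # 190 True
```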

Content Collaboration Upload and Download Rate


Web Conferencing service content upload and download (10,000 provisioned users)
  Peak usage: Received: 17,803,480 bytes/read; Sent: 19,668,079 bytes/write
  Average usage: Received: 706,655 bytes/read; Sent: 860,224 bytes/write

Address Book Capacity Planning


The tables in this section provide measurements for Address Book Server and Address Book Web Query service activities and storage.

Address Book Bandwidth

Modality                                  Number of users   Average bandwidth (Kbps)   Maximum bandwidth (Kbps)
Initial Address Book Server download      80,000            99,000                     332,000 (fresh deployment, with 2,000 users on-boarding every hour)
Overall Address Book Web Query service    80,000            40,000                     60,000
Bandwidth utilization per query           1                 160                        240

Storage Rate for Address Book Server Download


Storage                                                            Size for one day   Size for 30 days
File share size for Address Book Server for a pool of 80,000 users 1 GB               26 GB

Database Storage Rate for Address Book Server and Address Book Web Query Service

Storage                             Database size
Address Book Server database size   3 GB
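For pools of other sizes, the file-share figures above can be scaled from the 80,000-user baseline. Linear scaling is an assumption on my part, not a documented rule, and the function name is illustrative; note that the 30-day total is 26 GB rather than 30 x 1 GB because daily delta files are smaller than the full download files.

```python
# Rough sketch: size the Address Book file share by scaling linearly
# from the 80,000-user baseline above (26 GB per 30 days). Linear
# scaling with user count is an assumption, not a documented rule.
def address_book_share_gb(users, days=30,
                          baseline_users=80000, baseline_gb=26,
                          baseline_days=30):
    return baseline_gb * (users / baseline_users) * (days / baseline_days)

print(address_book_share_gb(40000))  # 13.0 GB for a 40,000-user pool
```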

Capacity Planning for Response Group


The following table describes the Response Group user model that you can use as the basis for capacity planning requirements.

Note: The numbers in the following table assume that you use 16 kHz, mono, 16-bit Wave (.wav) files for all response group audio files. If you use other file formats, such as Windows Media Audio (.wma), the numbers may vary.

Response Group User Model

Metric                                            Per Enterprise Edition pool (with 8 Front End Servers)   Per Standard Edition server
Incoming calls per second                         16                                                       2
Concurrent calls connected to IVR or MoH          480                                                      60
Concurrent anonymous sessions (without IM)        224                                                      28
Concurrent anonymous sessions (with IM)           64                                                       8
Active agents (formal and informal)               1,200                                                    1,200
Number of hunt groups                             400                                                      400
Number of IVR groups (using speech recognition)   200                                                      200

Capacity Planning for Call Park


The following table describes the Call Park user model that you can use as the basis for capacity planning requirements.

Call Park User Model

Metric                      Per Front End pool (with 8 Front End Servers)   Per Standard Edition server
Park rate                   8 per minute                                    1 per minute
Retrieve parked call rate   8 per minute                                    1 per minute
Average park duration       60 seconds                                      60 seconds
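The user model implies a steady-state number of simultaneously parked calls via Little's law: concurrency equals arrival rate times average duration. The sketch below applies this to the figures above; the function name is illustrative.

```python
# Little's law applied to the Call Park user model above:
# average concurrently parked calls = park rate x average park duration.
def concurrent_parked_calls(parks_per_minute, avg_park_seconds):
    return parks_per_minute * (avg_park_seconds / 60.0)

print(concurrent_parked_calls(8, 60))  # 8.0 for the Front End pool
print(concurrent_parked_calls(1, 60))  # 1.0 for a Standard Edition server
```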

Capacity Planning for Group Chat Server


Microsoft Lync Server 2010 Group Chat provides persistent chat sessions. Unlike an instant messaging (IM) conversation, a Group Chat session is saved, along with the messages, files, URLs, and other data that are part of an ongoing conversation.

Capacity planning is an important part of preparing to deploy Group Chat Server. This topic provides details about supported Group Chat Server topologies and capacity planning tables that you can use to determine the best configuration for your deployment. It also describes how to best manage Group Chat Server deployments that require greater capacity at peak times.

To download Group Chat Server, see Microsoft Lync Server 2010 Group Chat at http://go.microsoft.com/fwlink/?LinkId=209539. For details about installing Group Chat Server, see Installing and Configuring Group Chat Server in the Deployment documentation.

Group Chat Server Supported Topologies


You can deploy Group Chat Server in a single-server topology or in a multiple-server topology.

Note: For additional details about both topologies, see Planning for Group Chat Server in this documentation set and Deploying Group Chat Server in the Deployment documentation.

Note: Some combinations of Microsoft Lync Server 2010 and Microsoft Office Communications Server 2007 R2 can coexist. For details, see Migrating Group Chat Server in the Migration documentation.

Single-Server Topology

The minimum configuration and simplest deployment for Group Chat Server is a single-server topology. This topology can support as many as 20,000 users. It requires a server to run Microsoft Lync Server 2010, a Group Chat Server, a server to host the Group Chat database, and workstations to host Microsoft Lync 2010 Group Chat. If compliance is required, you need an additional server to host the Compliance service and an additional database to store the compliance data. The compliance database can be collocated with the Compliance service.


Note: Your Lync Server 2010 deployment, Group Chat Server, and the Compliance service must all reside in the same Active Directory Domain Services (AD DS) domain.

The following figure shows all the components of the single-server topology with the optional Compliance service.

Single Group Chat Server

Multiple-Server Topology

To provide greater capacity and reliability, you can deploy a multiple-server topology, as described in Planning for Group Chat Server. The multiple-server topology can include as many as three Group Chat Servers, each of which can support as many as 20,000 users, for a total of 60,000 users. A multiple-server topology is the same as the single-server topology except that multiple servers host Group Chat Server. The Group Chat Servers should reside in the same AD DS domain as Lync Server and the Compliance service.

The following figure shows all the components of a multiple-server topology with multiple Group Chat Servers, the optional Compliance service, and a separate compliance database.

Multiple Group Chat Servers

In a three-server Group Chat Server deployment, where 60,000 users can be simultaneously signed in to and using Lync 2010 Group Chat, the load is distributed evenly at 20,000 users per server. If one server becomes unavailable, the users who are connected to that server lose their access to Group Chat Server. The disconnected users are automatically transferred to the remaining servers until the unavailable server is restored. Depending on the amount of Group Chat traffic on the network, this transfer can take from a few minutes to an hour. Because each of the remaining servers might then be hosting as many as 30,000 users, we recommend that you restore the unavailable server as quickly as possible to avoid performance issues.

The load on the Group Chat Servers is balanced by the Lookup service. Group Chat Servers cannot be located behind a hardware load balancer. If the load becomes unbalanced after a server becomes unavailable, the Lookup service rebalances the load as clients sign in and out, but it does not attempt to rebalance existing connections.

Group Chat Server Capacity Planning


The following tables can help you with capacity planning for Group Chat Server. They model how changing various Group Chat Server settings affects capacity. The italic numbers represent variables that you can change based on your deployment.

Planning Your Maximum Capacity for Group Chat Server

Use the following sample table to determine the number of users you will be able to support.

Group Chat Server Maximum Capacity Sample

Channel service instances   3
Active users                60,000

In the preceding sample, the plan is to support the maximum number of users that Group Chat Server allows: three servers/instances of the Channel service and 20,000 users per server, for a total of 60,000 active users.

Capacity Planning for Managing Chat Room Access

The following sample table can help you plan for managing chat room access in Group Chat Server.

Managing Chat Room Access Sample

Metric                                                                           30 users per room   150 users per room   12,000 users per room   Total
Chat rooms                                                                       24,000              800                  10
Active users per chat room                                                       30                  150                  12,000
Chat rooms per user                                                              12                  2                    2
User groups in each chat room's membership list                                  10                  10                   15
Rooms managed by user groups                                                     50%                 50%                  50%
User-group-based membership entities across all chat rooms                       120,000             4,000                252
User-based membership entities across all chat rooms                             360,000             60,000               18,000
Users and user groups in each chat room's manager, presenter, and scope lists    6                   6                    6
Users and user groups across all chat rooms' manager, presenter, and scope lists 144,000             4,800                144
Access control entries                                                           624,000             68,800               18,396                  711,196
Maximum access control entries                                                   50                  50                   50                      1,000,000

In the preceding sample, when you deploy the Group Chat Servers according to the recommended guidelines, they can handle up to 60,000 active users across a three-server pool with compliance enabled. This sample shows chat rooms categorized as small (30 active users at any given time), medium (150 active users), and large (12,000 active users). The number of chat rooms of a given size is computed based on the total number of:

Active users in the system

Active users in chat rooms of the given size

Chat rooms of the given size that a single user joins

You can modify the italic numbers in the previous table to estimate how many chat rooms of a certain size will be created in the system and the rate of outbound chat messages that the system is likely to generate. In the sample, where there are 60,000 active users in the system, if each user simultaneously joins 12 small, 2 medium, and 2 large chat rooms, there will be 24,000 small, 800 medium, and 10 large chat rooms created in the system.

For each chat room, the preceding capacity planning table specifies the number of access control entries that are associated with the chat room, including entries that are inherited from parent categories and entries that are assigned directly to the chat room. You can control access to individual chat rooms by using access control lists (ACLs). You can also control access at the category level. In an ACL, an individual access control entry can be either a user group (for example, a security group, distribution list, or federated user group) or a single user. You can define access control entries for chat room managers, presenters, and members.

For planning purposes, you must estimate the percentage of chat rooms that will be managed by assigning user groups instead of individual users. The data in the preceding sample assumes that for 50% of small chat rooms, 50% of medium chat rooms, and 50% of large chat rooms, the ACLs are made up exclusively of user groups, and that the ACLs of the remaining chat rooms are made up of individual users. In the preceding sample, the ACLs for the manager group, the presenter group, and the scope of a chat room category are constant across all chat room sizes. The sample assumes that there are six access control entries per chat room in each of these lists.

Important: In planning your strategy for managing chat rooms, keep in mind that the total number of allowed access control entries is 1 million. If the calculated access control entries exceed 1 million, server performance could degrade significantly. To avoid this issue, whenever possible, ensure that your access control entries are user groups instead of individual users.

Capacity Planning for Managing Chat Room Access by Invitation

You can use the following capacity planning table to compute the number of invitations that Group Chat Server creates and stores in the Group Chat database when it is configured to send invitations. You manage invitations from the Chat room settings page, either in the Microsoft Lync Server 2010 Group Chat Admin Tool or in the Group Chat client. The sample data in the following table assumes that, on the Chat room settings page for 50% of all chat rooms, the Invitations option is set to Yes, and that the chat rooms are operating at full capacity.

Important: If the calculated value for the number of invitations generated by the server exceeds 1 million, server performance could degrade significantly. To avoid this issue, minimize the number of chat rooms that are configured to send invitations, or restrict the number of users who can join chat rooms that are configured to send invitations.

Chat Room Access by Invitation Sample
Metric                                       30 users per room   150 users per room   12,000 users per room   Total
Chat rooms configured to send invitations    12,000              400                  5
Users who can access the chat room           30                  150                  12,000
Invitations generated by Group Chat Server   360,000             60,000               60,000                  480,000
Maximum allowable number of invitations                                                                       1,000,000
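The invitation arithmetic above can be made explicit: invitations equal the number of rooms configured to send invitations times the users who can access each room, summed across room sizes and checked against the 1,000,000 limit. In the sketch below, the value of 5 large rooms is inferred as 50% of the 10 large rooms in the user model (the sample table leaves this cell implicit); names are illustrative.

```python
# Sketch of the invitation arithmetic from the sample table above.
# The 5 large rooms are inferred (50% of the 10 large rooms in the
# user model); the other figures come directly from the table.
ROOMS = {  # room size -> (rooms configured for invitations, users per room)
    "small": (12_000, 30),
    "medium": (400, 150),
    "large": (5, 12_000),
}
MAX_INVITATIONS = 1_000_000

total_invitations = sum(rooms * users for rooms, users in ROOMS.values())
print(total_invitations, total_invitations <= MAX_INVITATIONS)  # 480000 True
```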

Group Chat Server Performance User Model

The following table describes the user model for Group Chat Server. It provides the basis for the capacity planning requirements and represents a typical organization with 60,000 concurrent users.

Number of active users                                60,000
Number of Channel Servers                             3
Size of small chat rooms                              30 users
Size of medium chat rooms                             150 users
Size of large chat rooms                              12,000 users
Total number of chat rooms                            24,810
Number of small chat rooms                            24,000
Number of medium chat rooms                           800
Number of large chat rooms                            10
Total number of chat rooms per user                   16
Number of small chat rooms per user                   12
Number of medium chat rooms per user                  2
Number of large chat rooms per user                   2
Peak join rate                                        10/second
Total chat rate                                       20/second
Chat rate for small chat rooms                        18/second
Chat rate for medium chat rooms                       1.8/second
Chat rate for large chat rooms                        0.2/second
Percentage of chat rooms configured for invitations   50%

Percentage of direct memberships                   50%
Percentage of group memberships                    50%
Average number of ancestor affiliations in AD DS   100 - 200
Number of subscribed contacts per user             80
Average number of visible chat rooms               1.5 (50% at 1 and 50% at 2)
Number of participants polled per interval         15 per visible chat room
Length of polling interval                         5 minutes
Number of participants polled per second           4,500
Number of presence changes per hour per user       4
Number of presence changes per second              66.66
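Two of the rates in the user model are derived from the others, and the arithmetic is worth making explicit. The sketch below (variable names are illustrative) reproduces the presence-change rate and the participant-polling rate from the base figures.

```python
# Derived rates from the Group Chat Server user model above.
USERS = 60_000

# Presence: 4 changes per hour per user, pool-wide.
presence_per_second = USERS * 4 / 3600
print(round(presence_per_second, 2))  # 66.67 (the table truncates to 66.66)

# Polling: 1.5 visible rooms per user, 15 participants polled per room,
# once per 5-minute (300-second) interval.
polled_per_second = USERS * 1.5 * 15 / 300
print(polled_per_second)  # 4500.0
```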

Capacity Planning for Mobility


Determining the amount of capacity that you need for mobility is an iterative process of estimating your mobility usage, measuring your current capacity, planning for additional capacity, and monitoring key indicators for performance. The following figure illustrates the phases involved in capacity planning and the factors involved in each phase.

Mobility Capacity Planning Workflow

This section describes the factors that you need to consider as you estimate your mobility usage and provides guidelines for determining the sizing you need to deploy mobility. For details about monitoring usage and performance indicators, see the following:

For details about monitoring server memory, see Monitoring for Server Memory Capacity Limits.

For details about monitoring mobility usage, see Monitoring Mobility Service Usage.

For details about monitoring IIS tracing log files, see Monitoring IIS Request Tracing Log Files.

For details about performance counters you can use to monitor mobility, see Mobility Performance Counters.

For details about configuring Mobility Service for high performance, see Configuring Mobility Service for High Performance.

Factors Influencing Capacity


Three factors influence your capacity planning for Front End Servers running the Microsoft Lync Server 2010 Mobility Service:

User model

Mobile device characteristics

Available RAM

User Model

The user model described in this section provides the basis for the capacity planning measurements and recommendations for mobility.

Average Number of Contacts per User


Category of contacts            Average number per user
All contacts                    80
Enterprise contacts             64
Federated contacts              8
PIC contacts                    6
Distribution groups             2
Most frequently used contacts   15
Most recently used contacts     10

Daily Activity per User


Daily activity                                            Number during working hours                         Number outside working hours
Sign-ins to mobile application                            10                                                  2
Phone calls (number)                                      10                                                  2
Phone calls (duration)                                    2 minutes per call                                  2 minutes per call
Conferences                                               1 per week                                          0
Participants per conference                               <10                                                 0
Status note changes                                       1                                                   0
Views of contact card                                     6                                                   1
Views of Contacts list                                    9                                                   1
Scrolls through Contacts list                             3                                                   0
Global address list (GAL) searches                        5*                                                  0
Manual presence updates                                   0.5                                                 0
Total presence updates per contact                        6                                                   0
Call forwarding                                           0.5                                                 0
Instant messaging (IM) sessions (number)                  3                                                   1
IM sessions (duration)                                    6 minutes per session; 1 message per 30 seconds     6 minutes per session; 1 message per 30 seconds
Calendar lookups (connections to Exchange Web Services)   11                                                  3

* Number of GAL searches = 1 manual search per day + automatic searches on half of outgoing instant messages and calls. That is, 1 + 2 (instant messages) + 2 (calls) = 5.

Mobile Device Characteristics

The mobile devices supported for mobility run a variety of operating systems. The way in which an operating system manages applications when a user switches to a different application influences capacity planning. Operating systems can be divided into the following two categories for capacity planning:

Background enabled   When a user switches to a different mobile application on Apple and Windows Phone mobile devices, the Lync 2010 mobile application goes to the background and loses the connection to Lync Server 2010. The mobile application is reactivated by a push notification or when the user manually brings the application to the foreground.

Always connected   When a user switches to a different mobile application on Android and Nokia mobile devices, the Lync 2010 mobile application maintains the connection to Lync Server 2010 as long as the user stays signed in. Android and Nokia mobile devices create a bigger load on servers because they maintain the connection to the server even when the user is not actively using the mobile application.

Available Memory
The sizing guidelines described later in this section will help you define the amount of memory you need for mobility. To determine the amount of physical memory that is available on your servers, use the Memory, Available Mbytes performance counter. This performance counter indicates the amount of physical memory in megabytes that is available for running processes. If the amount of memory available for running processes is less than five percent (5%) of the total physical memory, the server has insufficient memory, which can increase paging.

Sizing Guidelines
The Mobility Service uses additional memory, processor resources, and network resources on Front End Servers. When you plan for capacity, you need to understand the impact of the mobility load on the Front End pool and decide whether you need additional hardware for the mobility load. Use the sizing examples in the table that follows to help determine whether, and how much, additional hardware you need.

The examples in the table are based on formulas and assumptions that use the following definitions:

AC is the number of mobile applications that are always connected in the user model.

BE is the number of mobile applications that are background enabled in the user model.

The formulas and assumptions are as follows:

Target memory (TM) in MB = 164 + (AC * 534 + BE * 400) / 1024. TM is the minimum required memory.

With the user model described previously, the number of connections to the Front End pool is AC * 2 + BE * 0.20.

Measured memory may be higher (up to 1 MB per endpoint) when there is no memory pressure on the server.

Target memory can be higher if your user model is more aggressive, such as when there are more audio/video (A/V) calls or more contacts.

The number of connections created per second is less than or equal to 30 per second per 1,000 users. This number depends on connection pooling settings on the hardware load balancer and on keep-alive settings.

The following table shows examples of sizing for 50% always-connected users in the user model.

Sizing Examples
Number of users   Memory (MB)   Average CPU   Maximum CPU
1,000             620.05        1%            2.5%
2,000             1,076.11      6%            8%
3,000             1,532.16      14%           18%
4,000             1,988.22      14%           18%
5,000             2,444.27      14%           18%
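The target-memory formula can be expressed directly as a function, which also reproduces the sizing table rows. The sketch below is illustrative (the function name is mine, not from the documentation); AC covers always-connected endpoints (Android, Nokia) and BE covers background-enabled endpoints (Apple, Windows Phone).

```python
# Target memory formula from the sizing guidelines above:
# TM (MB) = 164 + (AC * 534 + BE * 400) / 1024
def target_memory_mb(always_connected, background_enabled):
    return 164 + (always_connected * 534 + background_enabled * 400) / 1024

# Reproduce two rows of the sizing table (50% always connected):
for users in (1000, 5000):
    half = users // 2
    print(users, round(target_memory_mb(half, half), 2))
# 1000 -> 620.05, 5000 -> 2444.27
```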

Scenario Examples
The following examples show how the sizing guidelines apply to a fictional large enterprise and to a Front End pool that includes two servers.

Large enterprise   Contoso has deployed 75,000 users across three pools, with four Front End Servers in each pool. Contoso plans to enable mobility services for 30,000 users. During the planning phase, Contoso administrators capture the following data:

Each Front End Server has 3 GB of available memory.

CPU utilization is less than 60%.

All users have either an iPhone or a Windows Phone mobile device.

The user model for Contoso is similar to the capacity planning user model described earlier in this section. With 30,000 mobility users spread across 12 Front End Servers, each server supports 2,500 background-enabled mobile applications, so the minimum required memory for each server is 164 + (2,500 * 400) / 1024 = 1,141 MB. When more memory is available, more memory may be allocated to the mobile applications, because memory is freed up as needed, up to 2.7 GB. In either situation, Contoso does not need to upgrade the Front End Server hardware.

Note: The goal for mobility CPU utilization is an average of 10%. Contoso needs to monitor the CPU processor time as it gets close to the server limit of 70%.

Front End pool with two servers   Contoso has deployed 8,000 users in a Front End pool with two servers. Contoso plans to enable mobility services for all users. During the planning phase, Contoso administrators capture the following data:

Each Front End Server has 2.5 GB of available memory.

CPU utilization is less than 60%.

All users have either a Nokia or an Android mobile device.

The user model for Contoso is similar to the capacity planning user model described earlier in this section. With all 8,000 users on always-connected devices, each of the two servers supports 4,000 mobile applications, so the minimum required memory for each server is 164 + (4,000 * 534) / 1024 = 2,250 MB. In theory, a single server could support the load. However, a server will not support the load if a failover occurs between the two servers. Additionally, the mobility CPU utilization will be above 10%, and the 70% server CPU limit will be reached. In this scenario, the recommendation is to add a server to the pool. The new load distribution is 2,667 (that is, 8,000/3) users per Front End Server. The additional mobility memory cost per server is 2,667 * 534 / 1024 = 1,390 MB.

With the addition of a server, in the event of a server failure, each of the three servers in the pool will accept 1,300 more users and the increase in load will be 600 MB. With the new load distribution, the CPU utilization will also remain below the 70% server limit.
