Exchange 2010 Tested Solutions: 9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions
https://technet.microsoft.com/en-us/library/gg513522(d=printer).aspx
Summary
In Exchange 2010 Tested Solutions, Microsoft and participating server, storage, and network partners examine common customer scenarios and key design
decision points facing customers who plan to deploy Microsoft Exchange Server 2010. Through this series of white papers, we provide examples of well-designed,
cost-effective Exchange 2010 solutions deployed on hardware offered by some of our server, storage, and network partners.
You can download this document from the Microsoft Download Center.
Applies To
Microsoft Exchange Server 2010 release to manufacturing (RTM)
Microsoft Exchange Server 2010 with Service Pack 1 (SP1)
Windows Server 2008 R2
Windows Server 2008 R2 Hyper-V
Table of Contents
Introduction
Solution Summary
Customer Requirements
Mailbox Profile Requirements
Geographic Location Requirements
Server and Data Protection Requirements
Design Assumptions
Server Configuration Assumptions
Storage Configuration Assumptions
Solution Design
Determine High Availability Strategy
Estimate Mailbox Storage Capacity Requirements
Estimate Mailbox I/O Requirements
Determine Storage Type
Choose Storage Solution
Determine Number of EqualLogic Arrays Required
Estimate Mailbox Memory Requirements
Estimate Mailbox CPU Requirements
Determine Whether Server Virtualization Will Be Used
Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines
Determine Server Model for Hyper-V Root Server
Determine the CPU Capacity of the Virtual Machines
Introduction
This document provides an example of how to design, test, and validate an Exchange Server 2010 solution for environments with 9,000 mailboxes deployed on
Dell server and storage solutions and F5 load balancing solutions. One of the key challenges with designing Exchange 2010 environments is examining the current
server and storage options available and making the right hardware choices that provide the best value over the anticipated life of the solution. Following the
step-by-step methodology in this document, we walk through the important design decision points that help address these key challenges while ensuring that the
customer's core business requirements are met. After we have determined the optimal solution for this customer, the solution undergoes a standard validation
process to ensure that it holds up under simulated production workloads for normal operating, maintenance, and failure scenarios.
Solution Summary
The following tables summarize the key Exchange and hardware components of this solution.
Exchange components
Mailbox count: 9000
Mailbox size: 750 megabytes (MB)
None
Site resiliency: Yes
Virtualization: Hyper-V
Hardware components
Server partner: Dell
Server model: PowerEdge M610
Server type: Blade
Processor: Intel Xeon X5550
Storage partner: Dell EqualLogic
Storage type
Disk type
Customer Requirements
One of the most important first steps in Exchange solution design is to accurately summarize the business and technical requirements that are critical to making
the correct design decisions. The following sections outline the customer requirements for this solution.
Value
9000
Projected growth percent (%) in mailbox count (projected increase in mailbox count over the life of the solution): 0%
100%
Value
750MB (742)
Yes
450 @ 4 gigabytes (GB)
900 @ 1GB
7650 @ 512MB
Projected growth (%) in mailbox size in MB (projected increase in mailbox size over the life of the solution): included
750MB
Value
Target message profile (average total number of messages sent plus received per user per day)
Yes
450 @ 150 messages per day
8550 @ 100 messages per day
75
100
% in Microsoft Office Outlook Web App (Outlook Web Access in Exchange 2007 and previous versions)
% in Exchange ActiveSync
Value
9000
The following table outlines the geographic distribution of datacenters that could potentially support the Exchange e-mail infrastructure.
Value or description
9000
Yes
Value
Value or description
Requirement to maintain a backup of the Exchange databases outside of the Exchange environment (for example, third-party backup solution): No
Requirement to maintain copies of the Exchange databases within the Exchange environment (for example, Exchange native data protection): Yes
Yes
Yes
No
Not applicable
14 days
Design Assumptions
This section includes information that isn't typically collected as part of customer requirements, but is critical to both the design and the approach to validating the
design.
Value
<70%
<70%
<70%
Normal operating for multiple server roles (Client Access, Hub Transport, and Mailbox servers): <70%
Normal operating for multiple server roles (Client Access and Hub Transport servers): <70%
<80%
<80%
<80%
Node failure for multiple server roles (Client Access, Hub Transport, and Mailbox servers): <80%
Node failure for multiple server roles (Client Access and Hub Transport servers): <80%
Value or description
20%
1%
No
20%
Yes
Yes
Value or description
20%
None
Solution Design
The following section provides a step-by-step methodology used to design this solution. This methodology takes customer requirements and design assumptions
and walks through the key design decision points that need to be made when designing an Exchange 2010 environment.
We recommend that you determine your high availability strategy as the first step in the design process, and we highly recommend that you review the following information prior to starting this step:
Step 2: Determine relationship between mailbox user locations and datacenter locations
In this step, we look at whether all mailbox users are located primarily in one site or if they're distributed across many sites and whether those sites are
associated with datacenters. If they're distributed across many sites and there are datacenters associated with those sites, you need to determine if there's a
requirement to maintain affinity between mailbox users and the datacenter associated with that site.
*Design Decision Point*
In this example, all of the active users are located in one primary location. The primary location is in geographic proximity to the primary datacenter and
therefore there's a desire for all active mailboxes to reside in the primary datacenter during normal operating conditions.
Active/Passive distribution: Active mailbox database copies are deployed in the primary datacenter, and only passive database copies are deployed in a secondary datacenter. The secondary datacenter serves as a standby datacenter, and no active mailboxes are hosted in the datacenter under normal operating conditions. In the event of an outage impacting the primary datacenter, a manual switchover to the secondary datacenter is performed, and active databases are hosted there until the primary datacenter returns online.
Active/Active distribution (single DAG): Active mailbox databases are deployed in the primary and secondary datacenters. A corresponding passive copy is located in the alternate datacenter. All Mailbox servers are members of a single database availability group (DAG). In this model, the wide area network (WAN) connection between the two datacenters is potentially a single point of failure. Loss of the WAN connection results in the Mailbox servers in one of the datacenters going into a failed state due to loss of quorum.
Active/Active distribution (multiple DAGs): This model leverages multiple DAGs to remove WAN connectivity as a single point of failure. One DAG has active database copies in the first datacenter and its corresponding passive database copies in the second datacenter. The second DAG has active database copies in the second datacenter and its corresponding passive database copies in the first datacenter. In the event of loss of WAN connectivity, the active copies in each site continue to provide database availability to local mailbox users.
Disaster recovery: In the event of a hardware or software failure, multiple database copies in a DAG enable high availability with fast failover and no data loss. DAGs can be extended to multiple sites and can provide resilience against datacenter failures.
Recovery of accidentally deleted items: With the new Recoverable Items folder in Exchange 2010 and the hold policy that can be applied to it, it's possible to retain all deleted and modified data for a specified period of time, so recovery of these items is easier and faster. For more information, see Messaging Policy and Compliance, Understanding Recoverable Items, and Understanding Retention Tags and Retention Policies.
Long-term data storage: Sometimes, backups also serve an archival purpose. Typically, tape is used to preserve point-in-time snapshots of data for extended periods of time as governed by compliance requirements. The new archiving, multiple-mailbox search, and message retention features in Exchange 2010 provide a mechanism to efficiently preserve data in an end-user accessible manner for extended periods of time. For more information, see Understanding Personal Archives, Understanding Multi-Mailbox Search, and Understanding Retention Tags and Retention Policies.
Point-in-time database snapshot: If a past point-in-time copy of mailbox data is a requirement for your organization, Exchange provides the ability to create a lagged copy in a DAG environment. This can be useful in the rare event that there's a logical corruption that replicates across the databases in the DAG, resulting in a need to return to a previous point in time. It may also be useful if an administrator accidentally deletes mailboxes or user data.
There are technical reasons and several issues that you should consider before using the features built into Exchange 2010 as a replacement for traditional backups. Prior to making this decision, see Understanding Backup, Restore and Disaster Recovery.
*Design Decision Point*
In this example, maintaining tape backups has been difficult, and testing and validating restore procedures hasn't occurred on a regular basis. Therefore,
using Exchange native data protection in place of traditional backups as the database resiliency strategy is preferred.
High availability database copy: This database copy is configured with a replay lag time of zero. As the name implies, high availability database copies are kept up to date by the system, can be automatically activated by the system, and are used to provide high availability for mailbox service and data.
Lagged database copy: This database copy is configured to delay transaction log replay for a period of time. Lagged database copies are designed to provide point-in-time protection, which can be used to recover from store logical corruptions, administrative errors (for example, deleting or purging a disconnected mailbox), and automation errors (for example, bulk purging of disconnected mailboxes).
Design for all copies activated: In this model, the Mailbox server role is sized to accommodate the activation of all database copies on the server. For example, a Mailbox server may host four database copies. During normal operating conditions, the server may have two active database copies and two passive database copies. During a failure or maintenance event, all four database copies would become active on the Mailbox server. This solution is usually deployed in pairs. For example, if deploying four servers, the first pair is servers MBX1 and MBX2, and the second pair is servers MBX3 and MBX4. In addition, when designing for this model, you will size each Mailbox server for no more than 40 percent of available resources during normal operating conditions. In a site resilient deployment with three database copies and six servers, this model can be deployed in sets of three servers, with the third server residing in the secondary datacenter. This model provides a three-server building block for solutions using an active/passive site resiliency model.
This model can be used in the following scenarios:
Active/Passive multisite configuration where failure domains (for example, racks, blade enclosures, and storage arrays) require easy isolation of
database copies in the primary datacenter
Active/Passive multisite configuration where anticipated growth may warrant easy addition of logical units of scale
Configurations that aren't required to survive the simultaneous loss of any two Mailbox servers in the DAG
This model requires servers to be deployed in pairs for single site deployments and sets of three for multisite deployments. The following table
illustrates a sample database layout for this model.
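To make the three-server building block concrete, here is a hedged Python sketch of one possible copy layout (two servers in the primary datacenter, one in the secondary). The database names and the four-database count are illustrative, not from this document:

```python
# Illustrative copy layout for one set of three servers in the
# "design for all copies activated" model. MBX1/MBX2 are the primary
# datacenter pair; MBX3 is the secondary-datacenter server.
servers = ["MBX1", "MBX2", "MBX3"]      # MBX3 resides in the secondary datacenter
layout = {}
for i in range(1, 5):                   # four example databases (hypothetical count)
    db = "DB{}".format(i)
    active = servers[i % 2]             # alternate active copies across the primary pair
    local_passive = servers[(i + 1) % 2]  # second copy on the other primary server
    layout[db] = [active, local_passive, "MBX3"]  # third copy in the secondary site
```

Each database ends up with one active copy and one passive copy split across the primary pair, plus a third copy on the secondary-site server, matching the model described above.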
In this step, a preliminary result is obtained. The final number of Mailbox servers will be determined in a later step.
*Design Decision Point*
This example uses three high availability database copies. To support three copies, a minimum of three Mailbox servers in the DAG is required. In an
active/passive configuration, two of the servers will reside in the primary datacenter, and the third server will reside in the secondary datacenter. In this
model, the number of servers in the DAG should be deployed in multiples of three. The following table outlines the possible configurations.
Secondary datacenter
12
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Whitespace = 100 messages per day × 75KB ÷ 1024 = 7.3MB
Dumpster = (100 messages per day × 75KB ÷ 1024 × 14 days) + (512MB × 0.012) + (512MB × 0.058) = 138MB
Mailbox size on disk = mailbox limit + whitespace + dumpster
= 512MB + 7.3MB + 138MB
= 657MB
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Whitespace = 100 messages per day × 75KB ÷ 1024 = 7.3MB
Dumpster = (100 messages per day × 75KB ÷ 1024 × 14 days) + (1024MB × 0.012) + (1024MB × 0.058) = 174MB
Mailbox size on disk = mailbox limit + whitespace + dumpster
= 1024MB + 7.3MB + 174MB
= 1205MB
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Whitespace = 150 messages per day × 75KB ÷ 1024 = 11MB
Dumpster = (150 messages per day × 75KB ÷ 1024 × 14 days) + (4096MB × 0.012) + (4096MB × 0.058) = 441MB
Mailbox size on disk = mailbox limit + whitespace + dumpster
= 4096MB + 11MB + 441MB
= 4548MB
Average size on disk = [(657 × 7650) + (1205 × 900) + (4548 × 450)] ÷ 9000
= 907MB
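As a quick sanity check, the per-tier mailbox size math above can be sketched in a few lines of Python. The function and tier names are ours; the constants come from the formulas in this section:

```python
KB_PER_MB = 1024

def mailbox_size_on_disk(quota_mb, messages_per_day, avg_msg_kb=75, retention_days=14):
    """Quota + whitespace + dumpster, per the formulas above."""
    whitespace = messages_per_day * avg_msg_kb / KB_PER_MB
    # Dumpster = retained messages + 1.2% calendar/version overhead + 5.8% retained quota
    dumpster = (whitespace * retention_days) + (quota_mb * 0.012) + (quota_mb * 0.058)
    return quota_mb + whitespace + dumpster

tiers = {  # tier -> (quota MB, messages per day, mailbox count)
    "tier1": (512, 100, 7650),
    "tier2": (1024, 100, 900),
    "tier3": (4096, 150, 450),
}
sizes = {name: mailbox_size_on_disk(q, m) for name, (q, m, _) in tiers.items()}
users = sum(n for _, _, n in tiers.values())
average = sum(sizes[name] * n for name, (_, _, n) in tiers.items()) / users
```

Running this reproduces the rounded per-tier figures (657MB, 1205MB, 4548MB) and the 907MB weighted average within rounding error.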
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead)
= (7650 × 657 × 1) × 1.2
= 6031260MB
= 5890GB
Database index size = 10% of database size
= 589GB
Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space
= (5890 + 589) ÷ 0.8
= 8099GB
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead)
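The database capacity calculation can be expressed as a small helper. This is a sketch; the parameter names are ours, and the ratios (20% data overhead, 10% content index, 20% volume free space) come from the Tier 1 formulas above:

```python
def database_capacity_gb(users, mailbox_mb, growth_factor=1, data_overhead=1.2,
                         index_ratio=0.10, free_space=0.20):
    """Database plus content index capacity, per the formulas above."""
    database_gb = users * mailbox_mb * growth_factor * data_overhead / 1024
    index_gb = database_gb * index_ratio               # index is 10% of database size
    return (database_gb + index_gb) / (1 - free_space)  # add 20% volume free space

tier1 = database_capacity_gb(7650, 657)
```

For Tier 1 this lands on the document's 8099GB figure within rounding.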
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)
= (1MB × 20 × 3 × 7650) + (7650 × 0.01 × 512)
= 498168MB
= 487GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space
= 487 ÷ 0.80
= 608GB
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)
= (1MB × 20 × 3 × 900) + (900 × 0.01 × 1024)
= 63216MB
= 62GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space
= 62 ÷ 0.80
= 77GB
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)
= (1MB × 30 × 3 × 450) + (450 × 0.01 × 4096)
= 58932MB
= 58GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space
= 58 ÷ 0.80
= 72GB
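The same log math for all three tiers can be sketched as one function (names are ours; 1MB log files, a 1% mailbox move overhead, and a three-day rebuild window come from the formulas above):

```python
def log_capacity_gb(users, logs_per_day, quota_mb, rebuild_days=3, free_space=0.20):
    """Log volume capacity: 1MB logs x days to rebuild + 1% move overhead,
    then add 20% volume free space, per the formulas above."""
    logs_mb = 1 * logs_per_day * rebuild_days * users
    move_mb = users * 0.01 * quota_mb
    return (logs_mb + move_mb) / 1024 / (1 - free_space)

tier1 = log_capacity_gb(7650, 20, 512)
tier2 = log_capacity_gb(900, 20, 1024)
tier3 = log_capacity_gb(450, 30, 4096)
```

This reproduces the 608GB, 77GB, and 72GB figures above.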
Value
907
13147
757
13904
41712
41
Storage controllers:
Hard disk drives:
Volumes
Up to 1024.
Up to 1024.
RAID support
Network interfaces
Reliability
For a list of supported disk types, see "Physical Disk Types" in Understanding Storage Configuration.
To help determine which disk type to choose, see "Factors to Consider When Choosing Disk Types" in Understanding Storage Configuration.
Messages sent or received per mailbox per day: database cache per user
50: 3MB
100: 6MB
150: 9MB
200: 12MB
In this step, you determine high level memory requirements for the entire environment. In a later step, you use this result to determine the amount of
physical memory needed for each Mailbox server. Use the following information:
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Database cache = profile specific database cache × number of mailbox users
= 6MB × 7650
= 45900MB
= 45GB
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Database cache = profile specific database cache × number of mailbox users
= 6MB × 900
= 5400MB
= 6GB
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Database cache = profile specific database cache × number of mailbox users
= 9MB × 450
= 4050MB
= 4GB
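The three cache calculations above reduce to a per-profile lookup times the user count; a short sketch (variable names ours, per-user figures from the table above):

```python
# Database cache requirement per tier, per the per-user figures above.
cache_per_user_mb = {100: 6, 150: 9}   # message profile -> database cache (MB) per user
tiers = [(7650, 100), (900, 100), (450, 150)]  # (mailbox users, message profile)
cache_mb = [users * cache_per_user_mb[profile] for users, profile in tiers]
total_cache_gb = sum(cache_mb) / 1024  # environment-wide database cache
```

Summing the tiers gives roughly 54GB of database cache for the environment, which feeds the physical memory sizing in a later step.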
Megacycle estimates
Messages sent or received per mailbox per day | Megacycles for active mailbox | Megacycles for remote copy | Megacycles for passive mailbox
50 | 1 | 0.1 | 0.15
100 | 2 | 0.2 | 0.3
150 | 3 | 0.3 | 0.45
200 | 4 | 0.4 | 0.6
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Active mailbox megacycles required = profile specific megacycles × number of mailbox users
= 2 × 7650
= 15300
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Active mailbox megacycles required = profile specific megacycles × number of mailbox users
= 2 × 900
= 1800
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Active mailbox megacycles required = profile specific megacycles × number of mailbox users
= 3 × 450
= 1350
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies
= 0.2 × 7650 × 2
= 3060
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies
= 0.2 × 900 × 2
= 360
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies
= 0.3 × 450 × 2
= 270
Tier 1 (512MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies
= 0.3 × 7650 × 2
= 4590
Tier 2 (1024MB mailbox quota, 100 messages per day message profile, 75KB average message size)
Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies
= 0.3 × 900 × 2
= 540
Tier 3 (4096MB mailbox quota, 150 messages per day message profile, 75KB average message size)
Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies
= 0.45 × 450 × 2
= 405
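Each tier's total demand is the sum of its active, remote copy, and passive components; a sketch that folds the three calculations above into one helper (names ours, per-user estimates from the megacycle table, two passive copies per the design):

```python
# profile -> (active, remote copy, local passive) megacycles per mailbox user
MEGACYCLES = {100: (2, 0.2, 0.3), 150: (3, 0.3, 0.45)}

def tier_megacycles(users, profile, copies=2):
    """Active + remote-copy + local-passive megacycles for one tier."""
    active, remote, passive = MEGACYCLES[profile]
    return users * (active + remote * copies + passive * copies)

tier1 = tier_megacycles(7650, 100)  # 15300 + 3060 + 4590
tier2 = tier_megacycles(900, 100)   # 1800 + 360 + 540
tier3 = tier_megacycles(450, 150)   # 1350 + 270 + 405
total = tier1 + tier2 + tier3
```

The per-tier totals match the sums of the figures above (22950, 2700, and 2025 megacycles).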
If you expect server capacity to be underutilized, you may be able to purchase fewer servers by consolidating workloads through virtualization.
You may want to use Windows Network Load Balancing when deploying Client Access, Hub Transport, and Mailbox server roles on the same physical
server.
If your organization uses virtualization throughout its server infrastructure, you may want to virtualize Exchange as well, in alignment with corporate standard policy.
Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines
When using virtualization for the Client Access and Hub Transport server roles, you may consider deploying both roles on the same VM. This approach reduces
the number of VMs to manage, the number of server operating systems to update, and the number of Windows and Exchange licenses you need to purchase.
Another benefit to combining the Client Access and Hub Transport server roles is to simplify the design process. When deploying roles in isolation, we
recommend that you deploy one Hub Transport server logical processor for every four Mailbox server logical processors, and that you deploy three Client
Access server logical processors for every four Mailbox server logical processors. This can be confusing, especially when you have to provide sufficient Client
Access and Hub Transport servers during multiple VM or physical server failures or maintenance scenarios. When deploying Client Access, Hub Transport, and
Mailbox servers on like physical servers or like VMs, you can deploy one server with the Client Access and Hub Transport server roles for every one Mailbox
server in the site.
*Design Decision Point*
In this solution, co-locating the Hub Transport and Client Access server roles in the same VM is preferred. The Mailbox server role is deployed separately in a second VM. This reduces the number of VMs and operating systems to manage and simplifies planning for server resiliency.
Components
Description
Chassis\enclosure
Form factor: 10U modular enclosure holds up to sixteen half-height blade servers
44.0cm (17.3") height × 44.7cm (17.6") width × 75.4cm (29.7") depth
Weight:
Power supplies
Cooling fans
Input device
Up to six total I/O modules for three fully redundant fabrics, featuring Ethernet FlexIO technology providing on-demand stacking
and uplink scalability. Dell FlexIO technology delivers a level of I/O flexibility, bandwidth, investment protection, and capabilities
unrivaled in the blade server market.
FlexIO technologies include:
Completely passive, highly available midplane that can deliver greater than 5 terabytes per second (TBps) of total I/O bandwidth
Support for up to two ports of up to 40 gigabits per second (Gbps) from each I/O mezzanine card on the blade server
Management
Dell EqualLogic PS series, Dell/EMC AX series, Dell/EMC CX series, Dell/EMC NS series, Dell PowerVault MD series, Dell PowerVault
NX series
Description
Processors (x2)
Latest quad-core or six-core Intel Xeon processors 5500 and 5600 series
Form factor
Memory
12 DIMM slots
1GB/2GB/4GB/8GB/16GB ECC DDR3
Support for up to 192GB using 12 16GB DIMMs
Drives
I/O slots
Description
Processors (x2)
Latest quad-core or six-core Intel Xeon processors 5500 and 5600 series
Form factor
Memory
18 DIMM slots
1GB/2GB/4GB/8GB/16GB ECC DDR3
Support for up to 192GB using 12 16GB DIMMs
Drives
I/O slots
eighteen DIMM slots, and the option of either eight 2.5", or six 3.5" internal hard disk drives. Although limited in internal disk capacity compared to the other
server models presented, it scales beyond the R510 in memory (eighteen DIMMS compared to eight) and provides more I/O options. Storage capabilities
may be expanded by using Dell PowerVault MD1200 or MD1220 direct attached storage arrays. The MD1200 provides twelve 3.5" hard disk drives in a 2U rack-mounted form factor, while the MD1220 provides twenty-five 2.5" hard disk drives in the same 2U rack-mounted form factor. These 6Gbps SAS
connected arrays can be daisy chained, up to four arrays per RAID controller, and also support redundant connections from the server. This storage option
satisfies requirements for lower cost storage and simplicity while giving each node the ability to scale in the number of supported mailboxes.
Description
Processors (x2): Latest quad-core or six-core Intel Xeon 5500 and 5600 series processors
Form factor: 2U rack
Memory: Up to 192GB (18 DIMM slots*): 1GB/2GB/4GB/8GB/16GB DDR3, 800 megahertz (MHz), 1066MHz, or 1333MHz
Drives: Eight 2.5" hard disk drive option or six 3.5" hard disk drive option with optional flex bay expansion to support half-height TBU; up to six 3.5" drives with optional flex bay or up to eight 2.5" SAS or SATA drives with optional flex bay; peripheral bay options include a slim optical drive bay with choice of DVD-ROM, combo CD-RW/DVD-ROM, or DVD+RW
I/O slots
Components
Description
Processors (x4)
Form factor: 2U rack
Memory
Drives: Hot-swap option available with up to six 2.5" SAS or SATA drives, including SATA SSD
I/O slots: 6 PCIe G2 slots (five x8 slots, one x4 slot) plus one storage x4 slot
Processor and server platform = Intel X5550 2.66 gigahertz (GHz) in a Dell M610
SPECint_rate2006 value = 234
SPECint_rate2006 value per processor core = 234 ÷ 8
= 29.25
Adjusted megacycles per core = (new platform per core value × hertz per core of baseline platform) ÷ baseline per core value
= (29.25 × 3330) ÷ 18.75
= 5195
Adjusted megacycles per server = adjusted megacycles per core × number of cores
= 5195 × 8
= 41558
from server to server and under different workloads. A conservative estimate of 10 percent of available megacycles will be used. Use the following calculation:
Available megacycles per VM = adjusted available megacycles per server ÷ number of VMs
= 37403 ÷ 2
= 18701
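The megacycle arithmetic in this section can be sketched as a short script. The baseline values (18.75 per core at 3330 MHz) are taken from the calculations above; the function name is illustrative:

```python
# Baseline platform from the calculations above: 3,330 MHz per core,
# SPECint_rate2006 per-core value of 18.75.
BASELINE_HZ_PER_CORE = 3330
BASELINE_PER_CORE_VALUE = 18.75

def adjusted_megacycles_per_server(spec_rate: float, cores: int) -> float:
    """Normalize a measured SPECint_rate2006 result to baseline megacycles."""
    per_core_value = spec_rate / cores                    # 234 / 8 = 29.25
    adjusted_per_core = (per_core_value * BASELINE_HZ_PER_CORE) / BASELINE_PER_CORE_VALUE
    return adjusted_per_core * cores

server_mcycles = adjusted_megacycles_per_server(234, 8)   # ~41558
available_mcycles = server_mcycles * 0.90                 # reserve 10% for the hypervisor
per_vm_mcycles = available_mcycles / 2                    # two Exchange VMs per root server
print(int(server_mcycles), round(available_mcycles), int(per_vm_mcycles))
```

Running the chain reproduces the 41558, 37403, and 18701 figures used in the following steps.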
Return to top
Step 1: Determine the maximum number of mailboxes supported by the MBX virtual machine
To determine the maximum number of mailboxes supported by the MBX VM, use the following calculation:
Step 2: Determine the minimum number of mailbox virtual machines required in the primary site
To determine the minimum number of mailbox VMs required in the primary site, use the following calculation:
Number of VMs required = total mailbox count in site ÷ active mailboxes per VM
= 9000 ÷ 4250
= 2.2
Based on processor capacity, a minimum of three Mailbox server VMs is required to support the anticipated peak workload during normal operating conditions.
Step 3: Determine number of Mailbox server virtual machines required to support the mailbox resiliency strategy
In the previous step, you determined that a minimum of three Mailbox server VMs is needed to support the target workload. In an active/passive database distribution model, you need a minimum of three Mailbox server VMs in the secondary datacenter to support the workload during a site failure event. The DAG design will have nine Mailbox server VMs, with six in the primary site and three in the secondary site.
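The VM count works out as a ceiling division. The following is a minimal sketch of the step above (the doubling of the primary-site count reflects the two failure domains of three servers each described later in the design):

```python
import math

total_mailboxes = 9000       # mailboxes in the primary site
mailboxes_per_vm = 4250      # active mailboxes supported per Mailbox VM (from CPU sizing)

# 9000 / 4250 rounds up to 3 VMs to carry the peak load.
vms_for_load = math.ceil(total_mailboxes / mailboxes_per_vm)

primary_site_vms = vms_for_load * 2    # two failure domains of three servers each
secondary_site_vms = vms_for_load      # active/passive model: match the minimum in the secondary site
dag_members = primary_site_vms + secondary_site_vms
print(vms_for_load, dag_members)       # 3 9
```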
Return to top
Step 1: Determine number of active mailboxes per server during normal operation
To determine the number of active mailboxes per server during normal operation, use the following calculation:
Number of active mailboxes per server = total mailbox count ÷ server count
= 9000 ÷ 6
= 1500
Step 2: Determine number of active mailboxes per server in the worst case failure event
To determine the number of active mailboxes per server in the worst case failure event, use the following calculation:
Number of active mailboxes per server = total mailbox count ÷ server count
= 9000 ÷ 3
= 3000
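Both distributions can be checked with integer division, using the values from the two calculations above:

```python
total_mailboxes = 9000

# Normal operation: all six primary-site Mailbox servers host active mailboxes.
normal_per_server = total_mailboxes // 6      # 1500

# Worst case: one failure domain is lost and three servers carry everything.
worst_case_per_server = total_mailboxes // 3  # 3000

print(normal_per_server, worst_case_per_server)
```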
Return to top
Step 1: Determine database cache requirements per server for the worst case failure scenario
In a previous step, you determined that the database cache requirement for all mailboxes was 55GB and the average cache required per active mailbox was 6.2MB.
To design for the worst case failure scenario, you calculate based on active mailboxes residing on three of six Mailbox servers. Use the following calculation:
Memory required for database cache = number of active mailboxes × average cache per mailbox
= 3000 × 6.2MB
= 18600MB
= 18.2GB
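The cache requirement can be sketched in a couple of lines, using the 6.2MB-per-active-mailbox figure from the earlier sizing step:

```python
active_mailboxes = 3000         # worst case: three of six servers carry all load
cache_per_mailbox_mb = 6.2      # average database cache per active mailbox

cache_mb = active_mailboxes * cache_per_mailbox_mb   # 18600MB
cache_gb = cache_mb / 1024                           # ~18.2GB
print(round(cache_mb), round(cache_gb, 1))
```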
Step 2: Determine total memory requirements per Mailbox server virtual machine for the worst case failure scenario
In this step, reference the following table to determine the recommended memory configuration.
Memory requirements
Server physical memory (RAM) | Database cache size
24GB | 17.6GB
32GB | 24.4GB
48GB | 39.2GB
The recommended memory configuration to support 18.2GB of database cache for a mailbox role server is 32GB.
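Selecting the configuration amounts to picking the smallest RAM size whose database cache allowance covers the requirement. A sketch, with cache values from the table above (the helper name is illustrative):

```python
# Physical RAM (GB) -> database cache available to the Mailbox role (GB), per the table above
CACHE_BY_RAM_GB = {24: 17.6, 32: 24.4, 48: 39.2}

def recommended_ram_gb(required_cache_gb: float) -> int:
    """Smallest listed configuration whose cache allowance meets the requirement."""
    for ram_gb in sorted(CACHE_BY_RAM_GB):
        if CACHE_BY_RAM_GB[ram_gb] >= required_cache_gb:
            return ram_gb
    raise ValueError("requirement exceeds the largest listed configuration")

print(recommended_ram_gb(18.2))   # 32
```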
Return to top
Determine Number of Client Access and Hub Transport Server Combo Virtual Machines Required
In a previous step, it was determined that nine Mailbox server VMs are required. We recommend that you deploy one Client Access and Hub Transport server combo VM for every MBX VM. Therefore, the design will have nine Client Access and Hub Transport server combo VMs.
Number of Client Access and Hub Transport server combo VMs required
Server role configuration: Mailbox server role to Client Access and Hub Transport combined server role, 1:1
Determine Memory Required per Combined Client Access and Hub Transport Virtual Machines
To determine the memory configuration for the combined Client Access and Hub Transport server role VM, reference the following table.
Memory configurations for Exchange 2010 servers based on installed server roles (minimum supported and recommended maximum)
Client Access and Hub Transport combined server role (Client Access and Hub Transport server roles running on the same physical server): 4GB minimum supported
Based on the preceding table, each combination Client Access and Hub Transport server VM requires a minimum of 8GB of memory.
Return to top
The correct distribution is one Client Access and Hub Transport server role VM on each of the physical host servers and one Mailbox server role VM on each of the physical host servers. So in this solution, there will be nine Hyper-V root servers, each supporting one Client Access and Hub Transport server role VM and one Mailbox server role VM.
Virtual machine distribution (correct)
Return to top
To determine the memory required for each root server, use the following calculation:
Root server memory = Client Access and Hub Transport server role VM memory + Mailbox server role VM memory
= 8GB + 32GB
= 40GB
In this solution, a minimum of 12 databases will be used. The exact number of databases may be adjusted in future steps to accommodate the database copy
layout.
Return to top
In the previous step, it was determined that the PS6500E arrays represent three failure domains. Consider what happens when all six blades in the first enclosure are connected to the two PS6500Es in the primary datacenter. If an issue impacts that enclosure, there are no other servers in the primary datacenter, and you're forced to conduct a manual site switchover to the secondary datacenter. A better design is to deploy three blade enclosures, each with three of the nine server blades. Pair the servers in the first enclosure with the first PS6500E, the servers in the second enclosure with the second PS6500E, and the three servers in the secondary site with the PS6500E in the secondary site. By aligning the server and storage failure domains, the database copies are laid out in a manner that protects against issues with either a storage array or an entire blade enclosure.
Return to top
Unique database count = total number of Mailbox servers in primary datacenter × number of Mailbox servers in failure domain
= 6 × 3
= 18
Database copy layout with C1 database copies: DB1 through DB18 each have a C1 copy (activation preference value of 1) distributed evenly across servers MBX1 through MBX6, three databases per server, with the C1 copies of DB1 through DB9 on the first failure domain (MBX1, MBX2, MBX3) and those of DB10 through DB18 on the second failure domain (MBX4, MBX5, MBX6).
Next, distribute the C2 database copies (the copies with an activation preference value of 2) to the servers in the second failure domain, spreading them across as many servers in the alternate failure domain as possible to ensure that a single server failure has a minimal impact on the servers in that domain.
Database copy layout with C2 copies added: DB1 through DB9 each have a C1 copy on a server in the first failure domain (MBX1, MBX2, MBX3) and a C2 copy on a server in the second failure domain (MBX4, MBX5, MBX6).
Consider the opposite configuration for the other failure domain. Again, you distribute the C2 copies across as many servers in the alternate failure domain as
possible to ensure that a single server failure has a minimal impact on the servers in the alternate failure domain.
Database copy layout with C2 database copies distributed in the opposite configuration
DB10 through DB18 each have a C1 copy on a server in the second failure domain (MBX4, MBX5, MBX6) and a C2 copy on a server in the first failure domain (MBX1, MBX2, MBX3).
Step 3: Determine database layout during server failure and maintenance conditions
Before considering the secondary datacenter and distributing the C3 copies, examine the following server failure scenario. In the following example, if server MBX1 fails, the active database copies automatically move to servers MBX4, MBX5, and MBX6. Notice that each of the three servers in the alternate failure domain is now running with four active databases, and the active databases are equally distributed across all three servers.
Database copy layout during server maintenance or failure
In a maintenance scenario, you could move the active mailbox databases from the servers in the first failure domain (MBX1, MBX2, MBX3) to the servers in the second failure domain (MBX4, MBX5, MBX6), complete maintenance activities, and then move the active database copies back to the C1 copies on the servers in the first failure domain. You can conduct maintenance activities on all servers in the primary datacenter in two passes.
Database copy layout during server maintenance
Full database copy layout across MBX1 through MBX9: DB1 through DB9 have their C1 copies on the first failure domain (MBX1, MBX2, MBX3), their C2 copies on the second failure domain (MBX4, MBX5, MBX6), and their C3 copies on the secondary site servers (MBX7, MBX8, MBX9). DB10 through DB18 have their C2 copies on the first failure domain, their C1 copies on the second failure domain, and their C3 copies on the secondary site servers.
Return to top
The following table summarizes the storage requirements that have been calculated or determined in a previous design step.
Value: 907 | 13147 | 757 | 13904 | 41712 | 41
Step 2: Determine whether logs and databases will be co-located on the same LUN
In previous Exchange releases, it was a recommended best practice to separate database files and log files from the same mailbox database to different
volumes backed by different physical disks for recoverability purposes. This is still a recommended best practice for stand-alone architectures and
architectures using VSS-based backups. If you're using Exchange native data protection and have deployed a minimum of three database copies, isolation of
logs and databases isn't necessary.
*Design Decision Point*
With the EqualLogic array, the RAID-10 set spans all 46 disks. Because this architecture doesn't offer spindle isolation, there is no reason to create separate LUNs for database and log files; therefore, subsequent design decisions will be based on a single LUN for each database and log set.
database copies. Therefore there will be a total of nine LUNs for each primary datacenter Mailbox server.
Active databases
Passive databases
Lagged databases
Total LUNs
27
Database capacity = [(number of mailbox users × average mailbox size on disk) + (20% data overhead factor)] + (10% content indexing overhead)
= [(500 × 907MB) + 90700MB] + 54420MB
= 598620MB
= 585GB
Log capacity = (log size × number of logs per mailbox per day × number of days required to replace hardware × number of mailbox users) + (mailbox move percent overhead)
= (1MB × 20.5 × 3 × 500) + (500 × 0.01 × 907MB)
= 35285MB
= 35GB
LUN size = [(database capacity) + (log capacity)] ÷ 0.8 (to maintain 20% volume free space)
= [(585) + (35)] ÷ 0.8
= 775GB
Available LUN size = usable capacity ÷ number of LUNs
= 21299 ÷ 27
= 789GB
The actual LUN size will be 789GB, which will support the required LUN size of 775GB.
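The capacity chain above can be reproduced in a few lines, with the values and rounding from this section:

```python
import math

users_per_db = 500
mailbox_size_mb = 907            # average mailbox size on disk

base_mb = users_per_db * mailbox_size_mb        # 453,500MB of raw mailbox data
overhead_mb = 0.20 * base_mb                    # 20% data overhead = 90,700MB
indexing_mb = 0.10 * (base_mb + overhead_mb)    # 10% content indexing = 54,420MB
db_capacity_gb = (base_mb + overhead_mb + indexing_mb) / 1024   # 598,620MB -> ~585GB

# 20.5 logs/mailbox/day, 3 days to replace hardware, 1% mailbox-move overhead
log_mb = (1 * 20.5 * 3 * users_per_db) + (users_per_db * 0.01 * mailbox_size_mb)
log_capacity_gb = log_mb / 1024                 # 35,285MB -> ~35GB

required_lun_gb = (round(db_capacity_gb) + math.ceil(log_capacity_gb)) / 0.8  # 775GB
actual_lun_gb = 21299 / 27                      # usable capacity / LUN count -> ~789GB
print(round(db_capacity_gb), math.ceil(log_capacity_gb),
      round(required_lun_gb), round(actual_lun_gb))
```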
Component | Value
Usable capacity | 21299GB
Number of LUNs | 27
Required LUN size | 775GB
Actual LUN size | 789GB
Database | Array1 | Database | Array2 | Database | Array3
DB1 | C1 | DB1 | C2 | DB1 | C3
DB2 | C1 | DB2 | C2 | DB2 | C3
DB3 | C1 | DB3 | C2 | DB3 | C3
DB4 | C1 | DB4 | C2 | DB4 | C3
DB5 | C1 | DB5 | C2 | DB5 | C3
DB6 | C1 | DB6 | C2 | DB6 | C3
DB7 | C1 | DB7 | C2 | DB7 | C3
DB8 | C1 | DB8 | C2 | DB8 | C3
DB9 | C1 | DB9 | C2 | DB9 | C3
DB10 | C2 | DB10 | C1 | DB10 | C3
DB11 | C2 | DB11 | C1 | DB11 | C3
DB12 | C2 | DB12 | C1 | DB12 | C3
DB13 | C2 | DB13 | C1 | DB13 | C3
DB14 | C2 | DB14 | C1 | DB14 | C3
DB15 | C2 | DB15 | C1 | DB15 | C3
DB16 | C2 | DB16 | C1 | DB16 | C3
DB17 | C2 | DB17 | C1 | DB17 | C3
DB18 | C2 | DB18 | C1 | DB18 | C3
Return to top
Plan Namespaces
When you plan your Exchange 2010 organization, one of the most important decisions that you must make is how to arrange your organization's external namespace. A namespace is a logical structure usually represented by a domain name in Domain Name System (DNS). When you define your namespace, you must consider the different locations of your clients and the servers that house their mailboxes. In addition to the physical locations of clients, you must evaluate how they connect to Exchange 2010. The answers to these questions will determine how many namespaces you must have. Your namespaces will typically align with your DNS configuration. We recommend that each Active Directory site in a region that has one or more Internet-facing Client Access servers have a unique namespace. This is usually represented in DNS by an A record, for example, mail.contoso.com or mail.europe.contoso.com.
For more information, see Understanding Client Access Server Namespaces.
There are a number of different ways to arrange your external namespaces, but usually your requirements can be met with one of the following namespace
models:
Consolidated datacenter model: This model consists of a single physical site. All servers are located within the site, and there is a single namespace, for example, mail.contoso.com.
Single namespace with proxy sites: This model consists of multiple physical sites. Only one site contains an Internet-facing Client Access server. The other sites aren't exposed to the Internet. There is only one namespace for the sites in this model, for example, mail.contoso.com.
Single namespace and multiple sites: This model consists of multiple physical sites. Each site can have an Internet-facing Client Access server. Alternatively, there may be only a single site that contains Internet-facing Client Access servers. There is only one namespace for the sites in this model, for example, mail.contoso.com.
Regional namespaces: This model consists of multiple physical sites and multiple namespaces. For example, a site located in New York City would have the namespace mail.usa.contoso.com, a site located in Toronto would have the namespace mail.canada.contoso.com, and a site located in London would have the namespace mail.europe.contoso.com.
Multiple forests: This model consists of multiple forests that have multiple namespaces. An organization that uses this model could be made up of two partner companies, for example, Contoso and Fabrikam. Namespaces might include mail.usa.contoso.com, mail.europe.contoso.com, mail.asia.fabrikam.com, and mail.europe.fabrikam.com.
role to the Client Access server role. Therefore, both internal and external Outlook connections must now be load balanced across all Client Access servers in the
site to achieve fault tolerance. To associate the MAPI endpoint with a group of Client Access servers rather than a specific Client Access server, you can define a
Client Access server array. You can only configure one array per Active Directory site, and an array can't span more than one Active Directory site. For more
information, see Understanding RPC Client Access and Understanding Load Balancing in Exchange 2010.
*Design Decision Point*
In a previous step, it was determined that Client Access servers would be deployed in two physical locations in two Active Directory sites. Therefore, you need to
deploy two Client Access server arrays. A single namespace will be load balanced across the Client Access servers in the primary active Client Access server array
using redundant hardware load balancers. In a site failure, the namespace will be load balanced across the Client Access servers in the secondary Client Access
server array.
Return to top
BIG-IP Local Traffic Manager (LTM): BIG-IP LTM is designed to monitor and manage traffic to Client Access, Hub Transport, Edge Transport, and Unified Messaging servers, while ensuring that users are always sent to the best performing resource. Whether your users are connecting via MAPI, Outlook Web App, ActiveSync, or Outlook Anywhere, BIG-IP LTM will load balance the connections appropriately, allowing you to seamlessly scale to any size deployment. BIG-IP LTM now offers several modules that also provide significant value in an Exchange environment, which include:
Access Policy Manager (APM): Designed to secure access to Exchange resources, APM can authenticate users before they attach to your
For more information about these technologies, see F5 Solutions for Exchange Server.
Sizing the appropriate F5 hardware model for your Exchange 2010 deployment is an exercise best done with the guidance of your local F5 team. F5 offers production hardware-based and software-based BIG-IP platforms that range from supporting up to 200 megabits per second (Mbps) all the way up to 80Gbps. To learn more about the specifications for each of the F5 BIG-IP LTM hardware platforms, see BIG-IP System Hardware Datasheet.
Option 1: BIG-IP 1600 series
The BIG-IP 1600 offers all the functionality of TMOS in a cost-effective, entry-level platform for intelligent application delivery.
Value or description
Traffic throughput: 1Gbps
Software compression: Included: 50Mbps; Maximum: 1Gbps
Processor
Memory: 4GB
Power supply
Typical consumption
Value or description
Traffic throughput: 4Gbps
Hardware SSL
Software compression: Included: 50Mbps; Maximum: 3.8Gbps
Processor
Memory: 8GB
Power supply
Typical consumption
Value or description
Traffic throughput: 6Gbps
Hardware SSL
FIPS SSL
Software compression: Included: 50Mbps; Maximum: 5Gbps
Processor
Memory: 8GB
16
Power supply
Typical consumption
This information can be used to ensure the right BIG-IP LTM platform is selected.
*Design Decision Point*
The BIG-IP 3900 is selected for this solution. The 3900's 4GB memory capacity and connection count limits are enough to cover normal usage as well as unexpected traffic spikes for 15,000 active mailboxes with a 50-message-per-day profile. The quad-core CPU is also capable enough to handle the processing associated with connection and persistence handling.
Return to top
Connection mirroring: This ensures the connection table in each BIG-IP LTM is mirrored to its peer. This means that in case of a BIG-IP LTM failure, no connections are dropped because the BIG-IP LTM failover partner is already aware of the previously established connections and assumes responsibility for the network.
Network-based outage detection: This ensures that a network outage is treated as critically as a server outage by the BIG-IP LTM, and that proper remediation steps are taken to remedy the situation.
Software-based and hardware-based watchdog functionality: This ensures proper failover when a BIG-IP LTM isn't functioning properly.
Besides deploying BIG-IP LTMs in redundant pairs, customers often build redundancy into the architecture by building a multiple datacenter environment.
BIG-IP GTM is designed to add datacenter load balancing so that wide area resiliency is also achieved. For more information about GTM, see Global Load
Balancing Solutions.
Return to top
Client Access server affinity. Others work without it, but display performance improvements from such affinity. Other Exchange protocols don't require client to
Client Access server affinity, and performance doesn't decrease without affinity. For additional information, see Load Balancing Requirements of Exchange
Protocols and Understanding Load Balancing in Exchange 2010.
For more information about configuring F5 BIG-IP LTMs, see Deploying F5 with Microsoft Exchange Server 2010.
Return to top
Solution Overview
The previous section provided information about the design decisions that were made when considering an Exchange 2010 solution. The following section
provides an overview of the solution.
Return to top
Return to top
Return to top
Description
Server vendor: Dell
Server model
Processor
Chipset: Intel 5520/5500/X58
Memory: 48GB
Operating system
Virtualization: Microsoft Hyper-V
Internal disk: RAID-1
RAID controller
Network interface
Return to top
Description
Physical or virtual: Hyper-V VM
Virtual processors
Memory: 8GB
Storage
Operating system
Exchange version
Return to top
Component and description
Physical or virtual: Hyper-V VM
Virtual processors
Memory: 32GB
Storage: Pass-through storage, 9 volumes × 789GB
Operating system
Exchange version
Third-party software: None
Return to top
Database Layout
The following diagram illustrates the database layout across the primary and secondary datacenters.
Database layout
Return to top
Description
Storage vendor: Dell
Storage model: EqualLogic PS6500E
Category: iSCSI
Disks
Active disks: 46
Spares
RAID level: 10
Usable capacity: 20.8 terabytes
Storage Configuration
Each of the Dell EqualLogic PS6500E storage arrays used in the solution was configured as illustrated in the following table.
Storage configuration
Component and description
Storage enclosures: 27
LUN size: 789GB
RAID level: RAID-10
The following table illustrates how the available storage was designed and allocated between the three PS6500E storage arrays.
Database | Array1 | Database | Array2 | Database | Array3
DB1 | C1 | DB1 | C2 | DB1 | C3
DB2 | C1 | DB2 | C2 | DB2 | C3
DB3 | C1 | DB3 | C2 | DB3 | C3
DB4 | C1 | DB4 | C2 | DB4 | C3
DB5 | C1 | DB5 | C2 | DB5 | C3
DB6 | C1 | DB6 | C2 | DB6 | C3
DB7 | C1 | DB7 | C2 | DB7 | C3
DB8 | C1 | DB8 | C2 | DB8 | C3
DB9 | C1 | DB9 | C2 | DB9 | C3
DB10 | C2 | DB10 | C1 | DB10 | C3
DB11 | C2 | DB11 | C1 | DB11 | C3
DB12 | C2 | DB12 | C1 | DB12 | C3
DB13 | C2 | DB13 | C1 | DB13 | C3
DB14 | C2 | DB14 | C1 | DB14 | C3
DB15 | C2 | DB15 | C1 | DB15 | C3
DB16 | C2 | DB16 | C1 | DB16 | C3
DB17 | C2 | DB17 | C1 | DB17 | C3
DB18 | C2 | DB18 | C1 | DB18 | C3
Return to top
Description
Vendor: Dell
Model
Ports
Port bandwidth: 128Gbps
For more information, download a .pdf file about the PowerConnect M6220 Ethernet Switch.
Return to top
Description
Vendor: F5
Model: BIG-IP 3900
Traffic throughput: 4Gbps
Hardware SSL
Software compression: Included: 50Mbps; Maximum: 3.8Gbps
Processor
Memory: 8GB
Power supply
Typical consumption
Return to top
Performance tests
Storage performance validation (Jetstress)
Server performance validation (Loadgen)
Functional tests
Database switchover validation
Server switchover validation
Server failover validation
Datacenter switchover validation
Return to top
Tool Set
For validating Exchange storage sizing and configuration, we recommend the Microsoft Exchange Server Jetstress tool. The Jetstress tool is designed to
simulate an Exchange I/O workload at the database level by interacting directly with the ESE, which is also known as Jet. The ESE is the database technology
that Exchange uses to store messaging data on the Mailbox server role. Jetstress can be configured to test the maximum I/O throughput available to your
storage subsystem within the required performance constraints of Exchange. Or, Jetstress can accept a target profile of user count and per-user IOPS, and
validate that the storage subsystem is capable of maintaining an acceptable level of performance with the target profile. Test duration is adjustable and can
be run for a minimal period of time to validate adequate performance or for an extended period of time to additionally validate storage subsystem reliability.
The Jetstress tool can be obtained from the Microsoft Download Center at the following locations:
The documentation included with the Jetstress installer describes how to configure and execute a Jetstress validation test on your server hardware.
With DAS or internal disk scenarios, there's only one server accessing the disk subsystem, so the performance capabilities of the storage subsystem can be
validated in isolation.
In SAN scenarios, the storage utilized by the solution may be shared by many servers and the infrastructure that connects the servers to the storage may
also be a shared dependency. This requires additional testing, as the impact of other servers on the shared infrastructure must be adequately simulated to
validate performance and functionality.
Validation of worst case database switchover scenario: In this test case, the storage subsystem is driven at the level of I/O expected in a
worst case switchover scenario (largest possible number of active copies on the fewest servers). Depending on whether the storage subsystem is DAS or
SAN, this test may need to run on multiple hosts to ensure that the end-to-end solution load on the storage subsystem can be sustained.
Validation of storage performance under storage failure and recovery scenario (for example, failed disk replacement and rebuild): In this test
case, the performance of the storage subsystem during a failure and rebuild scenario is evaluated to ensure that the necessary level of performance is
maintained for an optimal Exchange client experience. The same caveat applies for a DAS vs. SAN deployment: If multiple hosts depend on a
shared storage subsystem, the test must include load from these hosts to simulate the entire effect of the failure and rebuild.
Database reads: The average value should be less than 20 milliseconds (msec) (0.020 seconds), and the maximum values should be less than 50 msec.
Log writes: Log disk writes are sequential, so average write latencies should be less than 10 msec, with a maximum of no more than 50 msec.
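These pass/fail thresholds are simple to encode. The following sketch is a hypothetical helper, not part of the Jetstress tool; the function name and arguments are illustrative, with all latencies expressed in milliseconds:

```python
# Hypothetical pass/fail check against the Jetstress disk latency targets
# described above. All latencies are in milliseconds.
def disk_latencies_ok(avg_db_read, max_db_read, avg_log_write, max_log_write):
    db_reads_ok = avg_db_read < 20 and max_db_read < 50        # database reads
    log_writes_ok = avg_log_write < 10 and max_log_write < 50  # sequential log writes
    return db_reads_ok and log_writes_ok

print(disk_latencies_ok(18.9, 41.0, 3.8, 12.5))  # within all targets
print(disk_latencies_ok(22.1, 41.0, 3.8, 12.5))  # average read exceeds 20 msec
```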
%Processor Time
Average should be less than 80%, and the maximum should be less than 90%.
The report file shows various categories of I/O performed by the Exchange system:
Transactional I/O Performance: This table reports I/O that represents user activity against the database (for example, Outlook generated I/O). This
data is generated by subtracting background maintenance I/O and log replication I/O from the total I/O measured during the test. This data
provides the actual database IOPS generated, along with the I/O latency measurements required to determine whether a Jetstress performance test
passed or failed.
Background Database Maintenance I/O Performance: This table reports the I/O generated by ongoing ESE database background maintenance.
Log Replication I/O Performance: This table reports the I/O generated by simulated log replication.
Total I/O Performance: This table reports the total I/O generated during the Jetstress test.
Tool Set
For validation of end-to-end solution performance and scalability, we recommend the Microsoft Exchange Server Load Generator tool (Loadgen). Loadgen is
designed to produce a simulated client workload against an Exchange deployment. This workload can be used to evaluate the performance of the Exchange
system, and can also be used to evaluate the effect of various configuration changes on the overall solution while the system is under load. Loadgen is
capable of simulating Microsoft Office Outlook 2007 (online and cached), Office Outlook 2003 (online and cached), POP3, IMAP4, SMTP, ActiveSync, and
Outlook Web App (known in Exchange 2007 and earlier versions as Outlook Web Access) client activity. It can be used to generate a single protocol workload,
or these client protocols can be combined to generate a multiple protocol workload.
You can get the Loadgen tool from the Microsoft Download Center at the following locations:
The documentation included with the Loadgen installer describes how to configure and execute a Loadgen test against an Exchange deployment.
In this Performance Monitor snapshot, which displays various counters that represent the amount of Exchange work being performed over time on a
production Mailbox server, the average value for RPC operations per second (the highlighted line) is about 2,386 when averaged across the entire day. The
average for this counter during the peak period from 10:00 through 11:00 is about 4,971, giving a peak-to-average ratio of 2.08.
To ensure that the Exchange solution is capable of sustaining the workload generated during the peak average, modify Loadgen settings to generate a
constant amount of load at the peak average level, rather than spreading out the workload over the entire simulated work day. Loadgen task-based
simulation modules (like the Outlook simulation modules) utilize a task profile that defines the number of times each task will occur for an average user
within a simulated day.
The total number of tasks that need to run during a simulated day is calculated as the number of users multiplied by the sum of task counts in the
configured task profile. Loadgen then determines the rate at which it should run tasks for the configured set of users by dividing the total number of tasks
to run in the simulated day by the simulated day length. For example, if Loadgen needs to run 1,000,000 tasks in a simulated day, and a simulated day is
equal to 8 hours (28,800 seconds), Loadgen must run 1,000,000 ÷ 28,800 = 34.72 tasks per second to meet the required workload definition. To increase
the amount of load to the desired peak average, divide the default simulated day length (8 hours) by the peak-to-average ratio (2) and use this as the new
simulated day length.
Using the task rate example again, 1,000,000 ÷ 14,400 = 69.44 tasks per second. This reduces the simulated day length by half, which results in doubling
the actual workload run against the server and achieving our goal of a peak average workload. You don't adjust the run length duration of the test in the
Loadgen configuration. The run length duration specifies the duration of the test and doesn't affect the rate at which tasks will be run against the Exchange
server.
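The task-rate arithmetic above can be sketched as follows. The values are the examples from the text; the variable names are illustrative, not Loadgen configuration settings:

```python
# Derive the adjusted simulated day length from the peak-to-average ratio.
TOTAL_TASKS = 1_000_000
DEFAULT_DAY_SECONDS = 8 * 60 * 60        # 28,800-second simulated day
PEAK_TO_AVERAGE_RATIO = 2                # measured from production counters

average_rate = TOTAL_TASKS / DEFAULT_DAY_SECONDS                 # ~34.72 tasks/sec
peak_day_seconds = DEFAULT_DAY_SECONDS // PEAK_TO_AVERAGE_RATIO  # 14,400 seconds
peak_rate = TOTAL_TASKS / peak_day_seconds                       # ~69.44 tasks/sec

print(round(average_rate, 2), round(peak_rate, 2))  # 34.72 69.44
```

Shortening the simulated day, rather than extending the run length, is what raises the task rate: the same task count is compressed into half the time.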
Normal operating conditions: In this test case, the basic design of the solution is validated with all components in their normal operating state (no
failures simulated). The desired workload is generated against the solution, and the overall performance of the solution is validated against the metrics
that follow.
Single server failure or single server maintenance (in site): In this test case, a single server is taken down to simulate either an unexpected failure
of the server or a planned maintenance operation for the server. The workload that would normally be handled by the unavailable server is now
handled by other servers in the solution topology, and the overall performance of the solution is validated.
Message profile (messages per day)   Messages sent per day   Messages received per day
50                                   10                      40
100                                  20                      80
150                                  30                      120
200                                  40                      160
The following example assumes that each Mailbox server has 5,000 active mailboxes with a 150 messages per day profile (30 messages sent and 120
messages received per day).
Calculation                                                  Value
Message profile (messages received per mailbox per day)      120
Active mailboxes per Mailbox server                          5000
Messages received per server per day: 5000 × 120             600000
Messages received per second (8-hour day): 600000 ÷ 28800    20.83
Peak message delivery rate: 20.83 × 2                        41.67
You expect 41.67 messages per second delivered on each Mailbox server running 5,000 active mailboxes with a message profile of 150 messages per day
during peak load.
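As a minimal sketch of this calculation (28,800 is the 8-hour simulated day in seconds; the variable names are illustrative):

```python
# Expected peak message delivery rate for one Mailbox server.
active_mailboxes = 5000
messages_received_per_day = 120       # from the 150 messages/day profile
simulated_day_seconds = 8 * 60 * 60   # 28,800 seconds
peak_to_average_ratio = 2

messages_per_day = active_mailboxes * messages_received_per_day  # 600,000
average_rate = messages_per_day / simulated_day_seconds          # ~20.83/sec
peak_rate = average_rate * peak_to_average_ratio                 # ~41.67/sec

print(round(peak_rate, 2))  # 41.67
```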
The actual message delivery rate can be measured using the following counter on each Mailbox server: MSExchangeIS Mailbox(_Total)\Messages
Delivered/sec. If the measured message delivery rate is within one or two messages per second of the target message delivery rate, you can be confident
that the desired load profile was run successfully.
Counter
Target
<80%
If you're interested in what percentage of processor time is spent servicing the guest VMs, you can examine the Hyper-V Hypervisor Logical Processor\%
Guest Run Time counter. If you're interested in what percentage of processor time is spent in hypervisor, you can look at the Hyper-V Hypervisor Logical
Processor\% Hypervisor Run Time counter. This counter should be below 5 percent. The Hyper-V Hypervisor Root Virtual Processor\% Guest Run Time
counter shows the percentage of processor time spent in the virtualization stack. This counter should also be below 5 percent. These two counters can be
used to determine what percentage of your available physical processor time is being used to support virtualization.
Counter                                                      Target
Hyper-V Hypervisor Logical Processor\% Guest Run Time        <80%
Hyper-V Hypervisor Logical Processor\% Hypervisor Run Time   <5%
Hyper-V Hypervisor Root Virtual Processor\% Guest Run Time   <5%
Memory
You need to ensure that your Hyper-V root server has enough memory to support the memory allocated to VMs. Hyper-V automatically reserves 512 MB
(this may vary with different Hyper-V releases) for the root operating system. If you don't have enough memory, Hyper-V will prevent the last VM from
starting. In general, don't worry about validating the memory on a Hyper-V root server. Be more concerned with ensuring that sufficient memory is
allocated to the VMs to support the Exchange roles.
Application Health
An easy way to determine whether all the VMs are in a healthy state is to look at the Hyper-V Virtual Machine Health Summary counters.
Counter
Target
Mailbox Servers
When validating whether a Mailbox server was properly sized, focus on processor, memory, storage, and Exchange application health. This section
describes the approach to validating each of these components.
Processor
During the design process, you calculated the adjusted megacycle capacity of the server or processor platform. You then determined the maximum
number of active mailboxes that could be supported by the server without exceeding 80 percent of the available megacycle capacity. You also determined
what the projected CPU utilization should be during normal operating conditions and during various server maintenance or failure scenarios.
During the validation process, verify that the worst case scenario workload doesn't exceed 80 percent of the available megacycles. Also, verify that actual
CPU utilization is close to the expected CPU utilization during normal operating conditions and during various server maintenance or failure scenarios.
For physical Exchange deployments, use the Processor(_Total)\% Processor Time counter and verify that this counter is less than 80 percent on average.
Counter                              Target
Processor(_Total)\% Processor Time   <80%
For virtual Exchange deployments, the Processor(_Total)\% Processor Time counter is measured within the VM. In this case, the counter isn't measuring the
physical CPU utilization. It's measuring the utilization of the virtual CPU provided by the hypervisor. Therefore, it doesn't provide an accurate reading of the
physical processor and shouldn't be used for design validation purposes. For more information, see Hyper-V: Clocks lie... which performance counters can
you trust.
For validating Exchange deployments running on Microsoft Hyper-V, use the Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter. This
provides a more accurate value for the amount of physical CPU being utilized by the guest operating system. This counter should be less than 80 percent
on average.
Counter                                                 Target
Hyper-V Hypervisor Virtual Processor\% Guest Run Time   <80%
Memory
During the design process, you calculated the amount of database cache required to support the maximum number of active databases on each Mailbox
server. You then determined the optimal physical memory configuration to support the database cache and system memory requirements.
Validating whether an Exchange Mailbox server has sufficient memory to support the target workload isn't a simple task. Using available memory counters
to view how much physical memory is remaining isn't helpful because the memory manager in Exchange is designed to use almost all of the available
physical memory. The information store (store.exe) reserves a large portion of physical memory for database cache. The database cache is used to store
database pages in memory. When a page is accessed in memory, the information doesn't have to be retrieved from disk, reducing read I/O. The database
cache is also used to optimize write I/O.
When a database page is modified (known as a dirty page), the page stays in cache for a period of time. The longer it stays in cache, the better the chance
that the page will be modified multiple times before those changes are written to the disk. Keeping dirty pages in cache also causes multiple pages to be
written to the disk in the same operation (known as write coalescing). Exchange uses as much of the available memory in the system as possible, which is
why there aren't large amounts of available memory on an Exchange Mailbox server.
It may not be easy to know whether the memory configuration on your Exchange Mailbox server is undersized. For the most part, the Mailbox server will
still function, but your I/O profile may be much higher than expected. Higher I/O can lead to higher disk read and write latencies, which may impact
application health and client user experience. In the results section, there isn't any reference to memory counters. Potential memory issues will be identified
in the storage validation and application health result sections, where memory-related issues are more easily detected.
Storage
If you have performance issues with your Exchange Mailbox server, those issues may be storage related. Storage issues may be caused by having an
insufficient number of disks to support the target I/O requirements, by overloaded or poorly designed storage connectivity infrastructure, or by factors
that change the target I/O profile, such as insufficient memory, as discussed previously.
The first step in storage validation is to verify that the database latencies are below the target thresholds. In previous releases, logical disk counters
were used to determine disk read and write latency. In Exchange 2010, the Exchange Mailbox server that you are monitoring is likely to have a mix of active
and passive mailbox database copies. The I/O characteristics of active and passive database copies are different. Because the size of the I/O is much larger
on passive copies, there are typically much higher latencies on passive copies. Latency targets for passive databases are 200 msec, which is 10 times higher
than the targets on active database copies. This isn't much of a concern because high latencies on passive databases have no impact on client experience.
But if you are using the traditional logical disk counters to measure latencies, you must review the individual volumes and separate the volumes containing
active and passive databases. Instead, we recommend that you use the new MSExchange Database counters in Exchange 2010.
When validating latencies on Exchange 2010 Mailbox servers, we recommend you use the counters in the following table for active databases.
Counter                                                              Target
MSExchange Database\I/O Database Reads (Attached) Average Latency    <20 msec
MSExchange Database\I/O Database Writes (Attached) Average Latency   <20 msec
MSExchange Database\I/O Log Writes Average Latency                   <1 msec
We recommend that you use the counters in the following table for passive databases.
Counter                                                              Target
MSExchange Database\I/O Database Reads (Recovery) Average Latency    <200 msec
MSExchange Database\I/O Database Writes (Recovery) Average Latency   <200 msec
MSExchange Database\I/O Log Read Average Latency                     <200 msec
Note:
To view these counters in Performance Monitor, you must enable the advanced database counters. For more information, see How to Enable Extended
ESE Performance Counters.
When you're validating disk latencies for Exchange deployments running on Microsoft Hyper-V, be aware that the I/O Database Average Latency counters
(as with many time-based counters) may not be accurate because the concept of time within the VM is different than on the physical server. The following
example shows that the I/O Database Reads (Attached) Average Latency is 22.8 in the VM and 17.3 on a physical server for the same simulated workload. If
the values of time-based counters are over the target thresholds, your server may still be running correctly. Review all health criteria to make a decision
regarding server health when your Mailbox server role is deployed within a VM.
Values of disk latency counters for virtual and physical Mailbox servers

Counter group          Virtual server   Physical server
MSExchange Database\
                       22.792           17.250
                       17.693           18.131
                       34.215           27.758
                       10.829           8.483
                       0.944            0.411
                       10.184           10.963
MSExchangeIS
                       1.966            1.695
                       334.371          341.139
                       180.656          183.360
                       2.062            2.065
                       0.511            0.514
MSExchangeIS Mailbox
In addition to disk latencies, review the Database\Database Page Fault Stalls/sec counter. This counter indicates the rate of page faults that can't be
serviced because there are no pages available for allocation from the database cache. This counter should be 0 on a healthy server.
Counter                                   Target
Database\Database Page Fault Stalls/sec   <1
Also, review the Database\Log Record Stalls/sec counter, which indicates the number of log records that can't be added to the log buffers per second
because the log buffers are full. This counter should average less than 10.
Counter                          Target
Database\Log Record Stalls/sec   <10
Counter                             Target
MSExchangeIS\RPC Averaged Latency   <10 msec on average
MSExchangeIS\RPC Requests
Next, make sure that the transport layer is healthy. Any issues in transport or issues downstream of transport affecting the transport layer can be detected
with the MSExchangeIS Mailbox(_Total)\Messages Queued for Submission counter. This counter should be less than 50 at all times. There may be
temporary increases in this counter, but the counter value shouldn't grow over time and shouldn't be sustained for more than 15 minutes.
Counter                                                       Target
MSExchangeIS Mailbox(_Total)\Messages Queued for Submission   <50
Next, ensure that maintenance of the database copies is in a healthy state. Any issues with log shipping or log replay can be identified using the
MSExchange Replication(*)\CopyQueueLength and MSExchange Replication(*)\ReplayQueueLength counters. The copy queue length shows the number of
transaction log files waiting to be copied to the passive copy log file folder and should be less than 1 at all times. The replay queue length shows the
number of transaction log files waiting to be replayed into the passive copy and should be less than 5. Higher values don't impact client experience, but
result in longer store mount times when a handoff, failover, or activation is performed.
Counter                                       Target
MSExchange Replication(*)\CopyQueueLength     <1
MSExchange Replication(*)\ReplayQueueLength   <5
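A check of sampled replication counter values against these targets might look like the following sketch. The helper name and sample lists are illustrative; in practice, the samples would come from Performance Monitor:

```python
# Hypothetical health check for DAG replication queues, using the targets
# described above: copy queue < 1 at all times, replay queue < 5.
def replication_queues_healthy(copy_queue_samples, replay_queue_samples):
    copy_ok = all(value < 1 for value in copy_queue_samples)
    replay_ok = all(value < 5 for value in replay_queue_samples)
    return copy_ok and replay_ok

print(replication_queues_healthy([0, 0, 0], [2, 4, 1]))  # True
print(replication_queues_healthy([0, 3, 0], [2, 4, 1]))  # False: copy queue spiked
```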
Counter                              Target
Processor(_Total)\% Processor Time   <80%
For validating Exchange deployments running on Microsoft Hyper-V, use the Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter. This
provides an accurate value for the amount of physical CPU being utilized by the guest operating system. This counter should be less than 80 percent on
average.
Counter                                                 Target
Hyper-V Hypervisor Virtual Processor\% Guest Run Time   <80%
Application Health
To determine whether the MAPI client experience is acceptable, use the MSExchange RpcClientAccess\RPC Averaged Latency counter. This counter should
be below 250 msec. High latencies can be associated with a large number of RPC requests. The MSExchange RpcClientAccess\RPC Requests counter
should be below 40 on average.
Counter                                           Target
MSExchange RpcClientAccess\RPC Averaged Latency   <250 msec
MSExchange RpcClientAccess\RPC Requests           <40
Transport Servers
To determine whether a transport server is healthy, review processor, disk, and application health. For an extended list of important counters, see Transport
Server Counters.
Processor
For physical Exchange deployments, use the Processor(_Total)\% Processor Time counter. This counter should be less than 80 percent on average.
Counter                              Target
Processor(_Total)\% Processor Time   <80%
For validating Exchange deployments running on Microsoft Hyper-V, use the Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter. This
provides an accurate value for the amount of physical CPU being utilized by the guest operating system. This counter should be less than 80 percent on
average.
Counter                                                 Target
Hyper-V Hypervisor Virtual Processor\% Guest Run Time   <80%
Disk
To determine whether disk performance is acceptable, use the Logical Disk(*)\Avg. Disk sec/Read and Write counters for the volumes containing the
transport logs and database. Both of these counters should be less than 20 msec.
Counter                               Target
Logical Disk(*)\Avg. Disk sec/Read    <20 msec
Logical Disk(*)\Avg. Disk sec/Write   <20 msec
Application Health
To determine whether a Hub Transport server is sized properly and running in a healthy state, examine the MSExchangeTransport Queues counters
outlined in the following table. All of these queues will have messages at various times. You want to ensure that the queue length isn't sustained and
growing over a period of time. If larger queue lengths occur, this could indicate an overloaded Hub Transport server. Or, there may be network issues or an
overloaded Mailbox server that's unable to receive new messages. You will need to check other components of the Exchange environment to verify.
Counter                                                                           Target
MSExchangeTransport Queues(_Total)\Aggregate Delivery Queue Length (All Queues)   <3000
MSExchangeTransport Queues(_Total)\Active Remote Delivery Queue Length            <250
MSExchangeTransport Queues(_Total)\Active Mailbox Delivery Queue Length           <250
MSExchangeTransport Queues(_Total)\Submission Queue Length                        <100
MSExchangeTransport Queues(_Total)\Retry Mailbox Delivery Queue Length            <100
Success criteria: The active mailbox database is mounted on the specified target server. This result can be confirmed by running the following command.
Get-MailboxDatabaseCopyStatus <DatabaseName>
A server switchover is the process by which all active databases on a DAG member are activated on one or more other DAG members. Like database
switchovers, a server switchover can occur both within a datacenter and across datacenters, and it can be initiated by using both the EMC and the Shell.
To validate that all passive copies of databases on a server can be successfully activated on other servers hosting a passive copy, run the following
command.
Success criteria: The active mailbox databases are mounted on the specified target server. This can be confirmed by running the following command.
Get-MailboxDatabaseCopyStatus <DatabaseName>
To validate that one copy of each of the active databases will be successfully activated on another Mailbox server hosting passive copies of the
databases, shut down the server by performing the following action.
Turn off the current active server.
Success criteria: The active mailbox databases are mounted on another Mailbox server in the DAG. This can be confirmed by running the following
command.
Get-MailboxDatabaseCopyStatus <DatabaseName>
Press and hold the power button on the server until the server turns off.
Pull the power cables from the server, which results in the server turning off.
Success criteria: The active mailbox databases are mounted on another Mailbox server in the DAG. This can be confirmed by running the following command.
When the DAG is in DAC mode, the specific actions to terminate any surviving DAG members in the primary datacenter depend on the state of the failed
datacenter. Perform one of the following:
If the Mailbox servers in the failed datacenter are still accessible (usually not the case), run the following command on each Mailbox server.
If the Mailbox servers in the failed datacenter are unavailable but Active Directory is operating in the primary datacenter, run the following command
on a domain controller.
Note:
Failure to either turn off the Mailbox servers in the failed datacenter or to successfully perform the Stop-DatabaseAvailabilityGroup command against
the servers will create the potential for split brain syndrome to occur across the two datacenters. You may need to individually turn off computers through
power management devices to satisfy this requirement.
Success criteria: All Mailbox servers in the failed site are in a stopped state. You can verify this by running the following command from a server in the failed
datacenter.
Get-DatabaseAvailabilityGroup | Format-List
The purpose of this step is to inform the servers in the secondary datacenter about which Mailbox servers are available to use when restoring service.
Success criteria: All Mailbox servers in the failed datacenter are in a stopped state. To verify this, run the following command from a server in the secondary
datacenter.
Get-DatabaseAvailabilityGroup | Format-List
1. Stop the cluster service on each DAG member in the secondary datacenter. You can use the Stop-Service cmdlet to stop the service (for example,
Stop-Service ClusSvc), or use net stop clussvc from an elevated command prompt.
2. To activate the Mailbox servers in the secondary datacenter, run the following command.
If this command succeeds, the quorum criteria are shrunk to the servers in the secondary datacenter. If the number of servers in that datacenter is an
even number, the DAG will switch to using the alternate witness server as identified by the setting on the DAG object.
3. To activate the databases, run one of the following commands.
or
4. Check the event logs and review all error and warning messages to ensure that the secondary site is healthy. Any indicated issues should be followed
up and corrected prior to mounting the databases.
5. To mount the databases, run the following command.
Success criteria: The active mailbox databases are mounted on Mailbox servers in the secondary site. To confirm, run the following command.
Get-MailboxDatabaseCopyStatus <DatabaseName>
Clients will continue to try to connect, and should automatically connect after Time to Live (TTL) has expired for the original DNS entry, and after the
entry is expired from the client's DNS cache. Users can also run the ipconfig /flushdns command from a command prompt to manually clear their
DNS cache. If using Outlook Web App, the Web browser may need to be closed and restarted to clear the DNS cache used by the browser. In
Exchange 2010 SP1, this browser caching issue can be mitigated by configuring the FailbackURL parameter on the Outlook Web App (owa) virtual directory.
Clients starting or restarting will perform a DNS lookup on startup and will get the new IP address for the service endpoint, which will be a Client
Access server or array in the second datacenter.
1. Change the DNS entry for the Client Access server array to point to the virtual IP address of the hardware load balancer in the secondary site.
2. Run the ipconfig /flushdns command on all Loadgen servers.
3. Restart the Loadgen test.
4. Verify that the Client Access servers in the secondary site are now servicing the load.
To validate the scenario with an Outlook 2007 client, perform the following:
1. Change the DNS entry for the Client Access server array to point to the virtual IP address of the hardware load balancer in the secondary site.
2. Run the ipconfig /flushdns command on the client or wait until TTL expires.
3. Wait for the Outlook client to reconnect.
1. To reincorporate the DAG members in the primary site, run the following command.
2. To verify the state of the database copies in the primary datacenter, run the following command.
Get-MailboxDatabaseCopyStatus
After the Mailbox servers in the primary datacenter have been incorporated into the DAG, they will need some time to synchronize their database copies.
Depending on the nature of the failure, the length of the outage, and actions taken by an administrator during the outage, this may require reseeding the
database copies. For example, if during the outage you removed the database copies from the failed primary datacenter to allow log file truncation to occur
for the surviving active copies in the secondary datacenter, reseeding will be required. At this time, each database can be synchronized individually. After a
replicated database copy in the primary datacenter is healthy, you can proceed to the next step.
1. During the datacenter switchover process, the DAG was configured to use an alternate witness server. To reconfigure the DAG to use a witness server
in the primary datacenter, run the following command.
2. The databases being reactivated in the primary datacenter should now be dismounted in the secondary datacenter. Run the following command.
Get-MailboxDatabase | Dismount-Database
3. After the databases have been dismounted, the Client Access server URLs should be moved from the secondary datacenter to the primary datacenter.
To do this, change the DNS record for the URLs to point to the Client Access server or array in the primary datacenter.
Important:
Don't proceed to the next step until the Client Access server URLs have been moved and the DNS TTL and cache entries have expired. Activating
the databases in the primary datacenter prior to moving the Client Access server URLs to the primary datacenter will result in an invalid
configuration (for example, a mounted database that has no Client Access servers in its Active Directory site).
4. To activate the databases, run one of the following commands.
or
Success criteria: The active mailbox databases are successfully mounted on Mailbox servers in the primary site. To confirm, run the following command.
Get-MailboxDatabaseCopyStatus <DatabaseName>
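The commands referenced in steps 1, 2, and 4 above did not all survive in this copy of the document. A sketch of the cmdlets typically used for these steps in Exchange 2010, with placeholder names (DAG1, HUB1, DB1, MBX1) that are assumptions rather than values from this solution:

```powershell
# Step 1 (placeholder names): move the witness back to a server in the primary datacenter.
Set-DatabaseAvailabilityGroup -Identity DAG1 -WitnessServer HUB1 -WitnessDirectory C:\DAGFSW

# Step 2: dismount the databases currently mounted in the secondary datacenter.
Get-MailboxDatabase | Dismount-Database

# Step 4, option 1: activate the database copy on a Mailbox server in the primary site.
Move-ActiveMailboxDatabase DB1 -ActivateOnServer MBX1

# Step 4, option 2: mount the database directly.
Mount-Database -Identity DB1
```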
The following tables summarize the Jetstress storage validation results. This solution achieved higher-than-target transactional I/O while maintaining database
latencies well under the 20msec target.
Result   Overall throughput (target)   Overall throughput (achieved)
Pass     383                           540

Database instance   I/O database reads/sec   I/O database reads average latency (msec)
Instance1           42.2                     18.9
Instance2           42.7                     17.9
Instance3           42.9                     17.4
Instance4           42.0                     17.9
Instance5           42.0                     18.0
Instance6           41.8                     17.0
Instance7           42.8                     17.7
Instance8           42.6                     17.4
Database instance   I/O database writes/sec   I/O database writes average latency (msec)
Instance1           25.9                      25.9
Instance2           26.4                      25.1
Instance3           26.4                      21.7
Instance4           26.1                      22.6
Instance5           25.9                      23.8
Instance6           25.5                      19.8
Instance7           26.3                      21.2
Instance8           26.5                      18.5
Database instance   I/O log writes/sec   I/O log writes average latency (msec)
Instance1           23.8                 3.8
Instance2           23.7                 3.7
Instance3           24.0                 3.3
Instance4           23.5                 3.8
Instance5           23.7                 3.8
Instance6           23.7                 3.5
Instance7           23.7                 3.7
Instance8           24.3                 3.3
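The per-instance column headings above did not survive in this copy; assuming the two value columns in the first two per-instance tables are database reads/sec and writes/sec (an assumption, not stated in the source), their sums can be checked against the overall achieved throughput:

```python
# Per-instance Jetstress values copied from the tables above. The column
# meanings (database reads/sec and writes/sec) are assumed here.
reads_per_sec  = [42.2, 42.7, 42.9, 42.0, 42.0, 41.8, 42.8, 42.6]
writes_per_sec = [25.9, 26.4, 26.4, 26.1, 25.9, 25.5, 26.3, 26.5]

total_iops = sum(reads_per_sec) + sum(writes_per_sec)
print(round(total_iops, 1))   # 548.0, within about 1.5% of the reported 540
print(total_iops > 383)       # True: well above the 383 IOPS target
```

If the assumption holds, the small gap between 548 and the reported 540 likely reflects Jetstress measuring achieved throughput over the whole run rather than summing per-instance averages.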
Counter                 Target   Tested result
Message delivery rate   8.54     8.63
Counter   Target   Tested result
          <70%     54
Storage
The storage results look good. The average read latency for the active databases is 19.3 msec when measured in the VM and 15.9 msec when measured on the
EqualLogic storage array. As discussed in "Server Validation: Performance and Health Criteria" earlier in this white paper, time-based counters measured in
a VM may not be accurate because the VM has a different concept of time than the physical server. The difference between these counters is likely the
result of a combination of iSCSI network latency (generally <1msec) and inaccurate counter values in the VM.
Counter                                  Target           Tested result
Database read latency (measured in VM)   <20msec          19.3
EqualLogic average disk read latency     <20msec          15.9
                                         <20msec          6.8
                                         <Reads average
                                         <20msec          2.5
                                         <20msec          5.2
                                         <200msec         23.7
                                         <200msec         7.6
                                         <200msec         7.5
Application Health
Exchange is healthy, and all of the counters used to determine application health are well under target values.
Counter                                       Target    Tested result
MSExchangeIS\RPC Requests                     <70       2.7
                                              <10msec   2.4
                                                        1.5
MSExchange Replication(*)\CopyQueueLength     <1        0.1
MSExchange Replication(*)\ReplayQueueLength   <5        2.1
Counter   Target   Tested result
          <70%     19
Storage
The storage results look good. The very low latencies should have no impact on message transport.
Counter   Target    Tested result
          <20msec   0.012
          <20msec   0.012
Application Health
The low RPC Averaged Latency values confirm a healthy Client Access server with no impact on client experience.
Counter   Target     Tested result
          <250msec
          <40
Counter   Target   Tested result
          <3000    1.5
          <250
          <250     1.1
          <100
          <100     0.4
Counter   Target   Tested result
          <75%     42
          <5%
          <80%     44
          <5%
Application Health
The VM health summary counters indicate that all VMs are in a healthy state.
Test Case: Single Server Failure or Single Server Maintenance (In Site)
Validation of Expected Load
The message delivery rate verifies that the tested workload matched the target workload. The actual message delivery rate is slightly higher than the target.
Counter                 Target   Tested result
Message delivery rate   17.08    17.3
Counter   Target   Tested result
          <70%     69
Storage
In this test case, the average read latency for the active databases is 26.2 msec when measured in the VM and 16.2 msec when measured on the EqualLogic storage
array. As discussed in "Server Validation: Performance and Health Criteria" earlier in this white paper, time-based counters measured in a VM may not be
accurate because the VM has a different concept of time than the physical server. The difference between these counters is likely the result of a
combination of iSCSI network latency (generally <1msec) and inaccurate counter values in the VM. Because the read latency measured on the EqualLogic
array is less than 20 msec, there's no concern about the counter measured in the VM being over target.
Counter                                  Target           Tested result
Database read latency (measured in VM)   <20msec          26.2
EqualLogic average disk read latency     <20msec          16.2
                                         <20msec          7.4
                                         <Reads average
EqualLogic Average Disk Write Latency    <20msec          2.1
                                         <20msec          5.2
                                         <200msec         Not applicable
                                         <200msec         Not applicable
                                         <200msec         Not applicable
Application Health
Exchange is very healthy, and all of the counters used to determine application health are well under target values.
Counter                                       Target    Tested result
MSExchangeIS\RPC Requests                     <70       8.0
                                              <10msec   3.7
                                                        3.3
MSExchange Replication(*)\CopyQueueLength     <1        Not applicable
MSExchange Replication(*)\ReplayQueueLength   <5        Not applicable
Counter   Target   Tested result
          <70%     26.3
Storage
The storage results look good. The very low latencies should have no impact on message transport.
Counter   Target    Tested result
          <20msec   0.0041
          <20msec   0.0005
Application Health
The low RPC Averaged Latency values confirm a healthy Client Access server with no impact on client experience.
Counter   Target     Tested result
          <250msec   13.2
          <40        6.1
The Transport Queue counters are all well under target, confirming that the Hub Transport server is healthy and able to process and deliver the required
messages.
Counter   Target   Tested result
          <3000    4.7
          <250
          <250     3.6
          <100
          <100     1.1
Counter   Target   Tested result
          <75%     49.9
          <5%      1.3
          <80%     51.2
          <5%      3.6
Application Health
The VM health summary counters indicate that all VMs are in a healthy state.
Counter                 Target   Tested result
Message delivery rate   17.08    17.4
Counter   Target   Tested result
          <70%     64%
Storage
In this test case, the average read latency for the active databases is 24.0 msec when measured in the VM and 15.9 msec when measured on the EqualLogic storage
array. As discussed in "Server Validation: Performance and Health Criteria" earlier in this white paper, time-based counters measured in a VM may not be
accurate because the VM has a different concept of time than the physical server. The difference between these counters is likely the result of a
combination of iSCSI network latency (generally <1msec) and inaccurate counter values in the VM. Because the read latency measured on the EqualLogic
array is less than 20 msec, there's no concern about the counter measured in the VM being over target.
Counter                                  Target           Tested result
Database read latency (measured in VM)   <20msec          24.0
EqualLogic average disk read latency     <20msec          15.9
                                         <20msec          7.2
                                         <Reads average
EqualLogic Average Disk Write Latency    <20msec          2.0
                                         <20msec          5.0
                                         <200msec         Not applicable
                                         <200msec         Not applicable
                                         <200msec         Not applicable
Application Health
Exchange is healthy, and all of the counters used to determine application health are well under target values.
Counter                                       Target    Tested result
MSExchangeIS\RPC Requests                     <70       7.8
                                              <10msec   3.5
                                                        3.0
MSExchange Replication(*)\CopyQueueLength     <1        Not applicable
MSExchange Replication(*)\ReplayQueueLength   <5        Not applicable
Counter   Target   Tested result
          <70%     25
Storage
The storage results look good. The very low latencies should have no impact on message transport.
Counter   Target    Tested result
          <20msec   0.003
          <20msec   0.001
Application Health
The low RPC Averaged Latency values confirm a healthy Client Access server with no impact on client experience.
Counter   Target     Tested result
          <250msec   13.0
          <40        5.9
The Transport Queue counters are all well under target, confirming that the Hub Transport server is healthy and able to process and deliver the required
messages.
Counter   Target   Tested result
          <3000    4.2
          <250
          <250     3.4
          <100
          <100     0.6
Counter   Target   Tested result
          <75%     47.5
          <5%      1.2
          <80%     48.7
          <5%      3.5
Application Health
The VM health summary counters indicate that all VMs are in a healthy state.
Conclusion
This white paper provides an example of how to design, test, and validate an Exchange 2010 solution for customer environments with 9,000 mailboxes deployed
on Dell server and storage solutions. The step-by-step methodology in this document walks through the important design decision points that help address key
challenges while ensuring that core business requirements are met.
Additional Information
If you need more information, contact the F5 Microsoft Partnership Team at microsoftpartnership@f5.com.
This document is provided "as-is." Information and views expressed in this document, including URL and other Internet Web site references, may change without
notice. You bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your
internal, reference purposes.
© 2016 Microsoft