
Contents

Overview
Introduction to Exchange 2000 Clustering
Reviewing Key Concepts of Microsoft Windows 2000 Clustering
Examining Key Concepts of Exchange 2000 Clustering
Exchange 2000 Clustering Best Practices
Appendix: Windows 2000 Clustering Best Practices
References

Exchange 2000
Clustering Best
Practices

 Overview

 Introduction to Exchange 2000 Clustering
 Reviewing Key Concepts of Microsoft Windows 2000 Clustering
 Examining Key Concepts of Exchange 2000 Clustering
 Exchange 2000 Clustering Best Practices
 Appendix: Windows 2000 Clustering Best Practices

One important decision that you must make when designing your Microsoft®
Exchange 2000 organization is the desired level of availability of your
Exchange 2000 messaging system. If your goal is to maximize uptime for your
messaging system, you need to consider using Exchange 2000 server clusters.
Clustering offers server and application redundancy that effectively reduces the
downtime of your messaging system.
To design an appropriate Exchange 2000 clustering strategy for your company,
you need a solid understanding of server clustering architecture. Because
Exchange 2000 server clusters are built on Microsoft Windows® 2000 server
clusters, you must first understand how Windows 2000 server clusters work.
You must also understand how Exchange 2000 server clusters work in the
clustering environment. Finally, you must choose both a clustering model and a
data storage strategy that help to ensure the high availability of your
Exchange 2000 messaging system.

 Introduction to Exchange 2000 Clustering

An appropriate Exchange 2000 clustering strategy ensures that you will have
the desired server and application redundancy to achieve high availability for
your Exchange 2000 messaging system. However, before you design an
Exchange 2000 clustering strategy, you must understand the architecture of
server clustering, and the benefits that server clustering offers. It is also
imperative that you ensure that your decision to use a clustering solution is
justified by the business needs of your company.

Architecture of Server Clustering

A server cluster is a group of servers and storage devices that can be accessed
by clients as a single system. The individual servers in the cluster are referred to
as nodes, which function together to provide automatic recovery when clustered
services and applications fail.
An active node is a node that is actively supporting the clustered services and
applications. A passive node is a node that remains idle until it takes over the
clustered services and applications from a failed node. Depending on the
number of nodes it has, a server cluster can have one of several configurations
such as active/passive, active/active, 2 active/1 passive, or 3 active/1 passive.
There are two types of network communications in a server cluster: private and
public. The nodes communicate with each other over a high performance,
reliable, private network, and share one or more common storage devices.
Clients communicate to logical servers, referred to as virtual servers, through a
public network to gain access to grouped resources, services, and applications.
When a client connects to a virtual server, the virtual server routes the request
to the node controlling the requested resource, service, or application. If the
controlling node fails, all clustered services or applications running on the
failed node will restart on an alternate designated node.

Advantages of Server Clustering

By connecting multiple servers into server clusters, clustering technology
provides improved availability of data and applications running within the
cluster. This improved availability is achieved by enabling services and
applications in the server cluster to continue providing services during hardware
or software component failure or during planned maintenance. By using a
clustering solution, you can effectively improve the availability of your
Exchange 2000 messaging system to allow your users continuous access to
their e-mail.
In addition to the improved availability, clustering makes it possible for you to
perform rolling upgrades of your servers running Exchange 2000. During a
rolling upgrade, you first upgrade the hardware or software on the passive node
in the cluster. Then you move all clustered services or applications from an
active node to the newly upgraded node, making that active node the new
passive node. Next, you upgrade the hardware or software on that node. You
repeat this process until all nodes in the cluster are upgraded.
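The rolling-upgrade sequence can be sketched as a simple loop. This is an illustrative sketch only; the node names and the one-group-per-node simplification are assumptions, not how Cluster service itself models the process.

```python
# Sketch of a rolling upgrade across cluster nodes (illustrative only).
nodes = ["node1", "node2"]   # hypothetical two-node cluster
passive = "node2"            # node2 starts as the passive node

upgraded = []
for _ in nodes:
    # 1. Upgrade the currently passive node.
    upgraded.append(passive)
    # 2. Move clustered services from a not-yet-upgraded active node to the
    #    upgraded node; that active node becomes the new passive node.
    remaining = [n for n in nodes if n not in upgraded]
    if remaining:
        passive = remaining[0]

# Every node is upgraded exactly once, with services available throughout.
print(sorted(upgraded))  # ['node1', 'node2']
```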

Identifying Business Needs for Using Exchange 2000 Clustering

Implementing Exchange 2000 server clusters involves significant infrastructure
investment such as hardware and data storage devices. As a result, you must
make sure that your clustering solution can be justified by the business needs of
your company.

Determining If High Availability Is Required

Availability is a measure of time during which clients can successfully use a
resource, application, or system. Availability is normally expressed as a
percentage of uptime.
For example, a messaging system that is required 24 hours a day, every day of
the year, may have the levels of availability with their corresponding
downtimes (per year) that are described in the following table.
Availability level     Downtime (per year)

99.62 percent          24 hours
99.967 percent         3.19 hours
99.99 percent          53 minutes
99.999 percent         5.3 minutes

A messaging system with high availability will optimally provide continuous


service without any interruptions that are caused by software or hardware
failures. Based on the business objectives of your messaging system, you must
decide if you desire such high availability.
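The downtime column follows directly from the availability percentage. A quick check of the last two rows of the table, assuming a 365-day year of 8,760 hours:

```python
def downtime_minutes_per_year(availability_percent: float) -> float:
    """Minutes of downtime per year implied by an availability percentage,
    assuming a 365-day year (8,760 hours)."""
    hours_down = (1 - availability_percent / 100) * 8760
    return hours_down * 60

# 99.99 percent availability allows roughly 53 minutes of downtime per year,
# and 99.999 percent roughly 5.3 minutes, matching the table above.
print(round(downtime_minutes_per_year(99.99)))      # 53
print(round(downtime_minutes_per_year(99.999), 1))  # 5.3
```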

Weighing the Cost of High Availability


There are different levels of availability, and each level has its appropriate
implementation. Because achieving high availability involves significant
infrastructure investment, you must determine the level of downtime that is
acceptable to your company and weigh it against the expense of improving
uptime through infrastructure investment.
For example, if a server that supports 1,000 users costs your company $20,000,
and a cluster that supports those same 1,000 users costs your company $45,000,
you should ask whether the additional 2 hours of uptime per year justify the
additional $25,000 investment.
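Framed as cost per avoided hour of downtime, the example works out as follows (all figures are taken from the example above):

```python
server_cost = 20_000     # single server supporting 1,000 users
cluster_cost = 45_000    # cluster supporting the same 1,000 users
extra_uptime_hours = 2   # additional uptime per year from clustering

extra_cost = cluster_cost - server_cost
cost_per_hour = extra_cost / extra_uptime_hours
print(extra_cost)      # 25000
print(cost_per_hour)   # 12500.0 per avoided hour of downtime per year
```

Whether $12,500 per avoided hour of downtime is justified depends on what an hour of messaging downtime actually costs your business.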

Determining If Clustering Is the Right Solution

Before committing to a clustering solution for your high availability needs, you
must perform a risk assessment of your system. When you identify risks, you
identify the possible failures that can interrupt access to services and resources.
You can then choose the most appropriate solutions to mitigate the risks.
A single point of failure is any component in your environment that would
block access to data or applications if it failed. A single point of failure can be
caused by hardware, software, or external dependencies, such as power supplied
by a utility company or dedicated wide area network (WAN) lines. You improve
reliability when you minimize the number of single points of failure in your
environment.

The following table shows some of the commonly encountered points of failure
and their possible solutions. It is clear that clustering is not always a solution to
prevent a point of failure.
Point of failure               Cluster service solution       Possible other solutions

Network component, such        None                           Spare components or
as a hub or router                                            redundant routes
Power failure                  None                           Uninterruptible power
                                                              supply (UPS)
Server hardware, such as       Failover: the process of       None
CPU, memory, or                taking resources offline on
network card                   one node and bringing them
                               back online on another node
Disk – non-shared              Failover                       None
Disk – shared                  None                           Redundant array of
                                                              independent disks (RAID)
Server connection              Failover                       None
Server software, such as       Failover                       None
the operating system, a
service, or an application

Note Clustering cannot eliminate all possible points of failure. Clustering is
designed to protect the availability of data; however, it cannot protect the data
itself. Therefore, it is still important to back up data regularly.

 Reviewing Key Concepts of Microsoft Windows 2000 Clustering

When you create Exchange 2000 server clusters, you use the services that are
provided by Windows 2000 Cluster service. In fact, you run Microsoft
Exchange 2000 Server Setup on a node in the Windows 2000 server cluster to
install the cluster-aware version of Exchange 2000.
Because Exchange 2000 server clusters are built on Windows 2000 server
clusters, it is imperative that you understand the key concepts of Windows 2000
server clusters before designing your Exchange 2000 server clusters.

Clustering Components

Windows 2000 server clustering includes components such as nodes, cluster
disks, a quorum resource, virtual servers, resources, and groups (also called
resource groups). Cluster service refers to the collection of components on each
node that performs cluster-specific activities.

Nodes
Nodes are the individual servers that comprise a server cluster. Nodes are units
of management for the server cluster. A node can be online or offline,
depending on whether it is currently in communication with the other nodes in
the cluster.

Note Microsoft Windows 2000 Advanced Server supports two-node server


clusters. Microsoft Windows 2000 Datacenter Server supports three-node and
four-node server clusters.

Cluster Disks
Cluster disks are shared hard drives to which all server cluster nodes attach by
means of a shared bus or storage area network (SAN). You store data,
applications, resources, and services on the shared disks.
Cluster disks are typically configured by using RAID and are housed in storage
solutions with redundant power supplies and controllers. Cluster disks come in
several packaged solutions, such as shared small computer system interface
(SCSI) arrays, SANs, and network attached storage (NAS) appliances.

Quorum Resource
The vital function of the quorum resource is allowing a node to form a cluster
and maintaining consistency of the cluster configuration for all nodes. The
quorum resource holds the cluster management data and recovery log, and is
used to arbitrate among nodes to determine which node controls the cluster.
The quorum resource resides on a shared cluster disk. It is best to use a
dedicated cluster disk for the quorum resource so that it will not be affected by
failover policies of other resources, or by the space that other applications
require. It is recommended that the quorum resource be on a disk partition of at
least 500 megabytes (MB).

Virtual Servers
Cluster service uses a physical server, or a cluster node, to host one or more
virtual servers. Virtual servers have server names and network configurations
that appear as physical servers to clients. Important facts about a virtual server
are:
 Each virtual server has an Internet Protocol (IP) address and a network
name that are published to clients on the network. This information allows
network clients to interact with the virtual server as if it were a physical
server.
 Clients access applications or services on a virtual server in the same way
that they would if the application or service were on a physical server. They
do not know which node is actually hosting the virtual server.
 Because clients connect to the virtual server directly and are not concerned
with which node hosts the virtual server, you can move the virtual server
from one node to another without affecting your clients.

Note Virtual servers in a clustering environment should not be confused
with Exchange 2000 virtual servers such as SMTP virtual servers. Although
they are both called virtual servers, they represent two different concepts. For
more information about Exchange 2000 virtual servers, see Course 1572C,
Implementing and Managing Microsoft Exchange 2000.

Resources
Resources are the basic management and failure units of Cluster service.
Examples of resources are physical hardware devices, such as disk drives, or
logical items, such as IP addresses, network names, applications, and services.
A cluster resource can only run on a single node at any given time and is
identified as online when it is available for a client to use. Under the control of
Cluster service, resources may migrate to another node as part of a group
failover. For example, when Cluster service detects that a single resource has
failed on a node, it then moves the whole group to another surviving node in the
cluster.
Cluster service uses resource monitors to track the status of the resources.
Cluster service will attempt to restart or migrate resources when they fail or
when one of the resources that they depend on fails.

Resource Groups
Groups or resource groups are logical collections of cluster resources that
Cluster service manages as single units for configuration purposes. Typically, a
resource group is made up of logically related resources such as applications
and their associated peripherals and data. However, resource groups can contain
cluster entities that are related only by administrative needs, such as an
administrative collection of virtual server names and IP addresses. A resource
group can be owned by only one node at any given time. Individual resources
within a group must exist on the node that currently owns the group. At any
given instance, different servers in the cluster cannot own different resources in
the same resource group.
Each resource group has an associated cluster-wide policy that specifies which
server or node the group prefers to run on, and which server or node the group
should move to in case of a failure. Each group also has a network service name
and address to enable network clients to bind to the services provided by the
resource group. In the event of a failure, resource groups can be failed over, or
moved, from the failed node to another available node in the cluster. Clients on
the network can still access the same resources by using the same network name
and IP address.

Cluster Communications

A server cluster communicates on a public, private, or mixed network,
depending on the nature of the communication:
 The public network is used for client access to the cluster.
 The private network is used for intra-cluster communications, which are
also referred to as node-to-node communications.

 The mixed network can be used either for intra-cluster communications or
for client access to the cluster.

One type of node-to-node communication on the private network monitors the
health of each node in the cluster. Each node periodically exchanges IP packets
with the other nodes in the cluster to determine if all nodes are operational. This
process is referred to as sending heartbeats.
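A highly simplified sketch of the heartbeat idea follows. This is not the actual Cluster service implementation; the interval and miss-limit values are illustrative assumptions only.

```python
HEARTBEAT_INTERVAL = 1.2   # seconds between heartbeats (illustrative)
MISSED_LIMIT = 2           # missed heartbeats before a node is suspect

def node_is_suspect(last_heartbeat: float, now: float) -> bool:
    """Flag a node whose heartbeats have stopped arriving in time."""
    return (now - last_heartbeat) > HEARTBEAT_INTERVAL * MISSED_LIMIT

# A node last heard from 1 second ago is healthy; 5 seconds ago is suspect.
print(node_is_suspect(last_heartbeat=4.0, now=5.0))  # False
print(node_is_suspect(last_heartbeat=0.0, now=5.0))  # True
```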

Important The recommended configuration for server clusters is a
dedicated private network and a mixed network. The dedicated private network
is for node-to-node communication. The mixed network functions as a backup
connection for node-to-node communication if the private network fails. This
configuration avoids having a single point of network failure.

Failover and Failback

Failover is the process of moving a group of resources from one node to
another in case of the failure of a node or the failure of one of the resources in
the group. Failback is the process of returning a group of resources to the node
on which it was running before the failover occurred.

Failover
Failover can occur automatically because of an unplanned hardware or
application failure, or can be triggered manually by the person who administers
the cluster for maintenance. The algorithm for both situations is identical; when
a resource fails, all other resources in the resource group attempt to shut down
safely before the failover takes place.
When an entire node in the cluster fails, its resource groups are moved to one or
more available servers in the cluster. Automatic failover is similar to planned
administrative reassignment of resource ownership. It is, however, more
complicated because the normal shutdown process is not safely performed on
the failed node.
Automatic failover requires determining what groups were running on the failed
node and which nodes should take ownership of the various resource groups.
The node preference list, which is part of the resource group properties, is used
to assign a resource group to a node. After the assignment is complete, all nodes
in the cluster update their databases and keep track of which node owns the
resource group. Clusters that are configured as active/passive clusters use N+1
failover when setting the node preference list of all resource groups.
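The node preference lookup can be sketched as follows. The node names are hypothetical, and real Cluster service arbitration involves more state than this; the sketch only shows the "first preferred node that is online" idea.

```python
def failover_target(preference_list, online_nodes):
    """Pick the first node in the preference list that is currently online."""
    for node in preference_list:
        if node in online_nodes:
            return node
    return None  # no suitable node: the group cannot be brought online

# A resource group prefers node2, then node3; node2 is down.
prefs = ["node2", "node3"]
online = {"node1", "node3"}
print(failover_target(prefs, online))  # node3
```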

N+1 Failover
N+1 failover sets the node preference list of all resource groups to a passive
node that is not the primary node for any resource group. The node preference
list identifies the passive cluster node to which resources should be moved
during first failover. The standby node is a node in the cluster that is mostly
idle.
N+1 failover typically provides the fastest failover time. This is because the
passive node does not support any virtual server during normal operations. The

passive node remains idle, waiting to take over a virtual server if one of the
active nodes fails.

Failback
When a node comes back online, some resource groups are moved back to the
recovered node. This move is referred to as failback. The properties of a
resource group must have a preferred owner defined to failback to a recovered
or restarted node. Resource groups for which the recovered or restarted node is
the preferred owner will be moved from the current owner to the recovered or
restarted node.
Cluster service provides protection against failback of resource groups at peak
processing times, and also protects nodes that have not been correctly recovered
or restarted. Failback properties of a resource group may include the hours of
the day during which failback is allowed, plus a limit on the number of times
that failback is attempted.
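The failback policy described above can be sketched as a simple check. The window and retry-limit values below are illustrative assumptions, not Cluster service defaults:

```python
def failback_allowed(hour: int, window: range,
                     attempts: int, max_attempts: int) -> bool:
    """Allow failback only inside the permitted hours and below the retry limit."""
    return hour in window and attempts < max_attempts

# Failback permitted only between 01:00 and 05:00, at most 3 attempts.
window = range(1, 5)
print(failback_allowed(hour=2,  window=window, attempts=0, max_attempts=3))  # True
print(failback_allowed(hour=14, window=window, attempts=0, max_attempts=3))  # False
print(failback_allowed(hour=2,  window=window, attempts=3, max_attempts=3))  # False
```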

Cluster Administrator

Cluster Administrator is a cluster management tool that allows you to
configure, control, and monitor clusters. You can install Cluster Administrator
on a computer running Microsoft Windows 2000 Professional, Microsoft
Windows 2000 Server, or Windows 2000 Advanced Server, regardless of
whether the computer is a cluster node.
You can use Cluster Administrator to manage cluster objects, establish groups,
initiate failover, handle maintenance, and monitor cluster activity through a
convenient graphical interface. You can also write extensions that enable
Cluster Administrator to manage your own custom resource types.

Note For more information about developing extensions, see the Microsoft
Platform Software Development Kit (SDK).

 Examining Key Concepts of Exchange 2000 Clustering

When you run Exchange 2000 Server Setup on a node in the Windows 2000
server cluster, Exchange 2000 Server installs the cluster-aware version of
Exchange 2000 with all the necessary custom files and resources that are
required for the Exchange 2000 server cluster to function.

Multimedia: Exchange 2000 Clustering

This presentation provides an overview of the key concepts of Exchange 2000
clustering.

Hardware and Software Requirements

To create an Exchange 2000 server cluster, you must meet the hardware and
software requirements that are described in the following sections.

Hardware Requirements
All hardware used for Exchange 2000 clustering must be on the Hardware
Compatibility List (HCL). You can find the most recent version at
http://www.microsoft.com/hcl/default.asp.

Important Servers that are members of a cluster should have redundancy
built in wherever possible. This includes redundancy in power supplies,
network interface cards (NICs), and disk drives. You should use separate NICs
for the public and private networks of the cluster.

Software Requirements
Specific versions of Windows 2000 and Exchange 2000 Server are required to
create Exchange 2000 server clusters. These version requirements are described
in the following table.
Windows 2000                      Exchange 2000                        Exchange clusters available

Windows 2000 Server or            Exchange 2000 Server                 None
Windows 2000 Advanced Server
Windows 2000 Server               Microsoft Exchange 2000              None
                                  Enterprise Server
Windows 2000 Advanced Server      Exchange 2000 Enterprise Server      Two-node
Windows 2000 Advanced Server      Exchange 2000 Enterprise Server      Two-node
                                  with Service Pack 1 (SP1)
Windows 2000 Datacenter Server    Exchange 2000 Enterprise Server      Three-node
                                  with SP1
Windows 2000 Datacenter Server    Exchange 2000 Enterprise Server      Four-node
                                  with SP1

Note It is recommended that Exchange 2000 SP1 or later always be used
in Exchange 2000 server clusters. SP1 contains several enhancements that
improve the performance and availability of Exchange 2000. In addition, you
should install the latest service pack for your copies of Windows 2000
Advanced Server or Windows 2000 Datacenter Server.

Exchange 2000 Virtual Servers

When you create an Exchange 2000 server cluster, you create a Windows 2000
cluster group and then add specific resources to it. This cluster group is referred
to as an Exchange Virtual Server. Unlike the physical computer running
Exchange 2000 Server, an Exchange Virtual Server is a cluster group that can
be failed over if the physical server itself fails.

Required Resources and Their Cluster Functionality
The following table lists the resources that an Exchange Virtual Server requires
and the respective cluster functionality for each resource.
Resource name              Functionality     Description

Disk Resource              Active/Active     One or more physical disks on the SAN or
                                             shared small computer system interface
                                             (SCSI) array.
IP Resource                Active/Active     The IP address for the Exchange Virtual
                                             Server.
Network Name               Active/Active     The network name used by clients when
Resource                                     connecting to the Exchange back-end server.
                                             Microsoft Outlook® clients resolve this name
                                             by way of the DNS server.
System Attendant           Active/Active     The fundamental resource that controls the
                                             creation and deletion of all the resources in
                                             the Exchange Virtual Server. The System
                                             Attendant is dependent on the network name
                                             and physical disk resources.
Exchange                   Active/Active     Provides mailbox and public folder storage
Information Store                            for Exchange 2000. The Exchange
                                             Information Store is a dependency of the
                                             System Attendant.
Simple Mail                Active/Active     Provides connections to client computers and
Transfer Protocol                            is a dependency of the Exchange Information
(SMTP)                                       Store.
Internet Message           Active/Active     Provides connections to client computers and
Access Protocol                              is a dependency of the Exchange Information
(IMAP4)                                      Store.
Post Office                Active/Active     Provides connections to client computers and
Protocol version 3                           is a dependency of the Exchange Information
(POP3)                                       Store.
Hypertext Transfer         Active/Active     Provides connections to client computers and
Protocol (HTTP)                              is a dependency of the Exchange Information
                                             Store.
Content Indexing           Active/Active     The MSSearch resource provides content
                                             indexing for the Exchange Virtual Server and
                                             is a dependency of the Exchange Information
                                             Store.
Message Transfer           Active/Passive    The MTA resource is active/passive; there
Agent (MTA)                                  can be only one MTA per cluster. The MTA
                                             is created on the first Exchange Virtual
                                             Server. If the Exchange Virtual Server with
                                             the MTA is deleted and is not the last
                                             Exchange Virtual Server in the cluster, the
                                             MTA is moved to another Exchange Virtual
                                             Server in the cluster. The MTA serves all
                                             Exchange Virtual Servers in the cluster as
                                             long as it is online. The MTA is a dependency
                                             of the System Attendant.
Routing Service            Active/Active     Builds the link state tables and is a
                                             dependency of the System Attendant.

Unsupported Resources
Exchange 2000 server clusters do not support the following Exchange 2000
Server components as resources:
 Active Directory® Connector (ADC)
 Chat Service
 Microsoft Exchange 2000 Conferencing Server
 Instant Messaging
 Key Management Service
 Exchange Calendar Connector
 Exchange Connector for Lotus cc:Mail
 Exchange Connector for Lotus Notes
 Exchange Connector for Microsoft Mail
 Exchange Connector for Novell GroupWise
 Event Service
 Network News Transfer Protocol (NNTP)
 Site Replication Service (SRS)

Note If you are planning to use a Windows 2000 Datacenter Server-based
three-node or four-node cluster, you can have a maximum of one Exchange
Virtual Server per active node.

 Exchange 2000 Clustering Best
Practices

 Design and Deployment Best Practices
 Examining Available Clustering Models
 Choosing an Appropriate Clustering Model
 Design Considerations
 Choosing Appropriate Data Storage
 Storing Exchange Data
 Performance Monitoring
 Improving Performance
 Backup and Recovery Best Practices

Exchange 2000 clustering best practices begin at installation. You must follow
design guidelines when you design and deploy your cluster servers.
Choose the appropriate clustering model. You must also be aware of issues such
as storage group limitations, server roles, location of databases and storage
group log files, multiprocessor support limitations, memory limitations, and
drive letter limitations. All of these factors will influence your design
decisions.
Choose the appropriate data storage, and store the Exchange databases and files
appropriately to optimize performance.
After the server is installed and operational, you should monitor its performance
and take regular backups. Have a disaster recovery plan, and practice restores.

Examining Available Clustering Models

There are three clustering models available: two-node, three-node, and
four-node. The two-node model is based on Windows 2000 Advanced Server,
while the three-node and four-node clustering models are based on
Windows 2000 Datacenter Server.

Windows 2000 Advanced Server Clustering Models
In the two-node model, you can have two different configurations:
active/passive and active/active.

Two-Node Active/Passive Configuration
In this configuration, the primary node of the Exchange 2000 server cluster
supports a single Exchange Virtual Server and services all client computers
while the secondary node is idle. The secondary node is a dedicated server that
is ready to be used whenever a failover occurs on the primary node. If the
primary node fails, the secondary node picks up all operations and continues to
service client computers at a rate of performance that is close or equal to that of
the primary node.
Because this configuration has a dedicated secondary node, it ensures that your
Exchange 2000 server is minimally affected by a failover. As a result, you are
able to achieve the maximum availability and performance of your
Exchange 2000 messaging system.

Two-Node Active/Active Configuration


In this configuration, each node of the Exchange 2000 server cluster supports
an Exchange Virtual Server. When either one of the two nodes fails, the
surviving node takes over the failed Exchange Virtual Server and continues to
service client computers.

Windows 2000 Datacenter Server Clustering Models

When you use Windows 2000 Datacenter Server, you can use either three-node
or four-node server clusters. Exchange 2000 Enterprise Server supports at most
a 3 active/1 passive cluster configuration.

Four-Node 3 Active/1 Passive Configuration


In this configuration, three nodes of the Exchange 2000 server cluster host
active Exchange Virtual Servers while one node remains passive. This passive
node is a dedicated server that is ready to take over an Exchange Virtual Server
should a failover occur.
For enhanced reliability, you can increase the number of passive nodes in the
four-node server cluster. For example, you can have a 2 active/2 passive
configuration.

Three-Node 2 Active/1 Passive Configuration


Similar to the four-node configuration, a three-node cluster has two active
nodes and one passive node. If either of the two active nodes fails, the one
passive node is used to failover the Exchange Virtual Server on the failed active
node.

Note Exchange 2000 Enterprise Server does not support four-node
active/active clusters, where each of the four nodes has an active Exchange
Virtual Server.

Choosing an Appropriate Clustering Model

Choosing a clustering model involves making two decisions: the number of
nodes in your cluster and the specific configuration of the nodes in the cluster.

Choosing a Two-Node Model


The number of nodes in your Exchange 2000 server cluster depends on the
number of mailboxes that you need to support in the cluster. If the mailboxes
can be supported by one server running Exchange 2000, a two-node server
cluster will provide the performance and high availability that you require.

Choosing an Active/Passive Configuration

This is the preferred and recommended configuration for a two-node cluster.
The passive node in the cluster sits idle until the active node fails or needs
maintenance. As a result, this configuration has the fastest failover and failback
times and gives you the highest availability for your server running
Exchange 2000. An active/passive cluster can also support as many mailboxes
as an active/active cluster.

Choosing an Active/Active Configuration

In this configuration, each active node supports one Exchange Virtual Server,
but must be capable of supporting both virtual servers. During a failover, if not
enough resources are available on the surviving active node, the failover can be
delayed, which results in downtime for the failed virtual server. To ensure that
adequate resources are available, each active node must be underutilized during
normal operation. In fact, it is recommended that each active node not exceed
40 percent utilization of its server resources.
For example, suppose you need to support 1,000 mailboxes on your cluster. You
have used the Microsoft Exchange Load Simulator (LoadSim) to determine the
following server utilization for both the active/passive configuration and the
active/active configuration.
Active/passive cluster configuration:

Node            Number of mailboxes    Server utilization

Active node     1,000                  70 percent
Passive node    0                      5 percent

Active/active cluster configuration:

Node            Number of mailboxes    Server utilization

Active node     500                    40 percent
Active node     500                    40 percent

In this scenario, the active/active cluster utilization is already at the
recommended limit. Considering that an active/passive cluster also has a faster
failover time, you should choose to configure your cluster as an active/passive
cluster.
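A quick way to sanity-check an active/active design is to estimate the load on the surviving node after a failover; the 40 percent guideline keeps that estimate under 100 percent. This sketch uses the utilization figures from the example above and a rough additive model that ignores failover overhead:

```python
def surviving_node_load(node_utilizations):
    """Estimated utilization (percent) of the surviving node if every other
    node fails over to it; a rough additive model, ignoring failover overhead."""
    return sum(node_utilizations)

# Two active nodes at 40 percent each: the survivor runs at roughly 80 percent.
load = surviving_node_load([40, 40])
print(load)        # 80
print(load < 100)  # True, but with little headroom
```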

Note LoadSim, available for downloading from http://www.microsoft.com,
is used to simulate a MAPI user load on a server running Exchange 2000 in a
lab environment to verify that the server will provide acceptable performance.

Choosing a Three-Node or Four-Node Model
Three-node and four-node clusters are for companies that have a large number
of mailboxes in a centralized location that require high availability. If the
number of mailboxes that you need to support in a single location exceeds what
can be supported on a single server running Exchange 2000, you will not be
able to use a two-node cluster. In this case, you should use the three-node or
four-node model so that you can distribute the mailboxes across two or three
active nodes with one passive node available for failover.
Whether to choose a three-node or four-node cluster depends on two factors:
the number of total mailboxes that the cluster will support and the capacity of
the hardware that you plan to use in the cluster. For example, you have 3,000
mailboxes that you need to host in a cluster. You have determined that the class
of server that you plan to use in your cluster can support 2,000 mailboxes while
still providing your required level of performance. You can then divide the
3,000 mailboxes between two active nodes (1,500 mailboxes per node), with a
third node configured as a passive node. This three-node 2 active/1 passive
cluster can grow and support up to 4,000 mailboxes (2,000 mailboxes per
node). If, with the growth of your company, you must support more than 4,000
mailboxes, you can add an additional active node to your existing three-node
cluster, making it a four-node 3 active/1 passive cluster and dividing the
mailboxes evenly among the three active nodes.
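The sizing arithmetic above can be sketched in a few lines. This is a
hypothetical Python helper for illustration only; the 2,000-mailboxes-per-node
capacity is the example figure from your own hardware testing, not a fixed rule:

```python
import math

def plan_cluster(total_mailboxes, mailboxes_per_node):
    """Return (active_nodes, mailboxes_per_active_node) for an
    N active / 1 passive cluster, given a per-node capacity."""
    active = math.ceil(total_mailboxes / mailboxes_per_node)
    per_node = math.ceil(total_mailboxes / active)
    return active, per_node

# 3,000 mailboxes on servers rated for 2,000 mailboxes each:
active, per_node = plan_cluster(3000, 2000)
print(active + 1, "nodes:", active, "active at", per_node, "mailboxes each")

# Growth beyond 4,000 mailboxes requires a third active node:
print(plan_cluster(6000, 2000))  # a four-node, 3 active/1 passive cluster
```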

Note Most hardware vendors that sell cluster-capable hardware have tools
that can help you to determine the servers that are right for your clustering
requirements.

Design Considerations

The following are aspects that you must consider when designing the
architecture for your Exchange 2000 server cluster.

Storage Group Limitations


Each server running Exchange 2000 is limited to four storage groups. This is a
physical limitation that applies to each node of a server cluster. The following
table illustrates this limitation when implementing a two-node Exchange 2000
server cluster.
Node     Exchange Virtual Server   State    Storage group names

Node 1   EVS1                      Active   SG1, SG2, SG3
Node 2   EVS2                      Active   SG1, SG2

If EVS1 on node 1 fails over to node 2, one of the storage groups from node 1
will fail to mount on node 2. This failure occurs because node 2 will have
exceeded the four-storage-group limit for a single server and a cluster node.
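The failover constraint shown in the table reduces to a simple sum. The helper
below is a hypothetical Python sketch of that check, using the storage group
counts from the example above:

```python
# Exchange 2000 hard limit: four storage groups per server, which also
# applies to each cluster node after a failover.
MAX_STORAGE_GROUPS = 4

def failover_safe(moving_sgs, target_sgs):
    """True if a node already hosting target_sgs storage groups can also
    mount moving_sgs groups arriving from a failed-over virtual server."""
    return moving_sgs + target_sgs <= MAX_STORAGE_GROUPS

# EVS1 (three storage groups) failing over to node 2, which hosts two:
print(failover_safe(3, 2))  # one storage group will fail to mount
```

A 2 + 2 layout, by contrast, would fail over cleanly within the limit.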

Dedicated Server Roles
If the purpose of your clusters is to provide Exchange 2000 services to your
users, it is recommended that your servers only run Exchange 2000. In addition,
the servers in your Exchange 2000 server clusters should be member servers of
a domain and not domain controllers or global catalog servers.

Location of Databases and Storage Group Log Files
If the storage groups for an Exchange Virtual Server are configured so that the
log files are on one set of physical drives and the databases on another, all of
the drives must be part of the Exchange Virtual Server. That is, all of the data
must be located on the shared disk, and all of the physical disk resources must
be part of the cluster group. These locations enable the log files and the storage
group databases to fail over to another node if the Exchange Virtual Server goes
offline.

Multi-Processor Support Limitations


Windows 2000 Datacenter Server supports 32 processors on a single server.
However, Exchange 2000 Server with SP1 will scale effectively only to eight
processors on a single server. As a result, you should partition all 32-processor
servers into four 8-processor servers by using hardware partitioning. You
should avoid running Exchange 2000 Server with SP1 on servers with more
than eight processors, as this is an inefficient use of processor resources.

Memory Limitations
Exchange 2000 Server does not support instancing (the ability to run multiple
instances of an application as separate processes on the same computer) or
Physical Address Extension (PAE). This lack of support limits Exchange 2000
Server to about 3 gigabytes (GB) of usable memory. Therefore, you should not
install more than 3 GB of physical memory on a computer running
Exchange 2000 Server. In addition, Exchange 2000 requires that the /3GB
switch be used on your servers that have more than 1 GB of physical RAM
installed.

Note For more information about PAE, see the Microsoft Web site at
http://www.microsoft.com/hwdev/PAE

Drive Letter Limitations in Four-Node Clusters
Four-node Exchange 2000 server clusters have an additional limitation that you
must plan for prior to building the cluster. Windows 2000 has a 24-disk volume
limitation per server. If you plan to have the majority of disks on the server as
shared cluster resources, the 24-disk volume limitation applies to the entire
cluster, not just to each individual node. Regardless of the number of nodes in
the cluster, the maximum number of shared disks is 22, with one additional disk
for the system disk, and another disk for the Exchange 2000 Server drive M.
For certain configurations, you may need to disable the CD-ROM or
DVD-ROM drive. Network share access may also be limited.

 Designing an Appropriate Data Storage Strategy

Designing a data storage strategy is part of the overall planning of your
Exchange 2000 clustering strategy. Before choosing your data storage strategy,
you must have a clear understanding of the available data storage models. You
must also be aware of the specific planning considerations that each model
requires.

Examining Available Data Storage Models

There are several data storage methods that are available when you use
Exchange 2000 server clusters. The two most popular choices are external
storage arrays and SANs. Which data storage method you choose will have a
potential impact on the performance and availability of your Exchange 2000
server cluster.

External Storage Array


An external storage array uses an external SCSI drive cabinet to house multiple
SCSI disk drives and other hardware, usually configured as a RAID set. It is
connected to a two-node cluster with SCSI cables. The advent of low voltage
differential (LVD) SCSI provides a lower-cost solution to a shared SCSI cluster
than the traditional high voltage differential (HVD) while still providing high
performance and practical cabling lengths.

RAID Set Types


RAID 0+1 and RAID 5 are two common RAID set types that you can use.
While RAID 0+1 provides fault tolerance with good performance, RAID 5
allows you to maximize your storage capacity.
For example, a configuration that uses ten 9-GB drives for the Exchange 2000
data drive obtains a capacity of 45 GB when using RAID 0+1 as the RAID
level. Only 50 percent of the 90 GB of raw space can be utilized, because the
remaining 45 GB is used for RAID 0+1 fault tolerance support. However, the
same ten 9-GB drives, when configured as a RAID 5 set, provide
approximately 81 GB of storage. This is a capacity difference of approximately
36 GB.
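The capacity arithmetic above follows from the standard RAID formulas. The
following is a minimal Python illustration (the function names are hypothetical;
the drive counts and sizes are the example's figures):

```python
def raid01_capacity_gb(drives, drive_gb):
    """RAID 0+1 mirrors the striped set: half the raw capacity is usable."""
    return drives * drive_gb / 2

def raid5_capacity_gb(drives, drive_gb):
    """RAID 5 spends one drive's worth of capacity on parity."""
    return (drives - 1) * drive_gb

print(raid01_capacity_gb(10, 9))  # ten 9-GB drives as RAID 0+1: 45.0 GB
print(raid5_capacity_gb(10, 9))   # the same drives as RAID 5: 81 GB
```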

Note RAID 0+1 is a combination of RAID 0 (disk striping) and RAID 1
(disk mirroring): the RAID 0 stripe sets are mirrored.

Storage Area Networks


Storage area networks (SANs) are dedicated networks that use their own
networking hardware, storage media, and Fibre Channel connections to provide

unparalleled data-access performance. By using RAID 0+1 arrays, SANs
provide high levels of fault tolerance and the ability to withstand disk failures.
In addition, the advent of Fibre Channel switches provides much higher levels of
throughput and allows administrators to design SANs with no single points of
failure.
External storage arrays that are physically attached to your servers running
Exchange 2000 slow down application and I/O processing. In contrast, SANs
improve your Exchange 2000 performance by shifting data transactions and I/O
away from your servers running Exchange 2000. As a result, SANs offer the
most efficient way to add and manage data storage capacity while ensuring
continuous availability.
Although SANs have traditionally been a more expensive solution than a
simple external storage array, the price for SANs has recently become more
affordable for most clustering budgets.

Choosing an Appropriate Data Storage Model

Depending on the number of mailboxes that you are hosting on a single
Exchange 2000 server cluster, you can choose to use either external storage
arrays or SANs.

When to Use External Storage Array


If you have a limited number of mailboxes that require high availability, you
may only need a limited amount of disk space for your Exchange 2000 server
cluster. In this case, an external SCSI array may be the most economical
solution for you.

When to Use SANs


SANs can be used for companies that expect to eventually host more than
50,000 Exchange mailboxes in a single cluster. However, SANs can also be
used in much smaller environments.
SANs are typically faster, more reliable, and more flexible than external storage
arrays. SANs are also very scalable, allowing you to expand your storage
capacity, in most cases, without incurring downtime to your messaging system.
Most SANs have built-in backup solutions that allow for quick backups and
restores. In addition, SANs come with a variety of management tools that allow
you to monitor their performance and availability. Most importantly, SANs are
now becoming more cost competitive. As a result, when the budget allows,
SANs are usually recommended instead of external storage arrays.

Planning Considerations

Planning external storage arrays and planning SANs require different
considerations.

Planning Considerations for External Storage Arrays
When planning your external storage arrays, you must closely follow the
manufacturers’ recommendations. You must pay special attention to the
requirements regarding cable length when connecting the storage array to each
cluster node. In addition, you should calculate the amount of disk space that you
currently require and that you expect to require in the future. This calculation
will help to ensure that the storage cabinet and hardware can accommodate
future growth.

Planning Considerations for SANs


When planning your SAN configuration, you should consider the information
discussed in the following sections.

SAN Volumes and Sizing


Most SAN vendors provide planning tools that allow you to size the SAN to
your requirements. You must carefully plan the volumes that you want to share
among nodes of the cluster. You can determine the volumes that you require by
using the following three-step process:
1. Determine mailbox and public folder requirements for the cluster.
2. Divide the mailboxes evenly among active nodes.
3. Determine the number of mailboxes per storage group per virtual server
without exceeding the storage group limit.

For example, if you have 9,000 mailboxes in a four-node cluster with a 3
active/1 passive configuration, you will need to have each active node
supporting 3,000 mailboxes. For backup and recovery purposes, you also
determine that each virtual server will have a single storage group with five
private information stores, each hosting 600 mailboxes. If each mailbox has a
size limit of 50 MB, each active node will require 150 GB, resulting in a total of
450 GB on the SAN. Therefore, you know that you will need enough hard disk
drives to provide 450 GB in storage.
In addition, you must ensure that your SAN solution allows for future growth.
You should avoid choosing a SAN solution that is at or near its full capacity for
the number of disks it can support.
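The worked sizing example above is a few multiplications. The sketch below is a
hypothetical Python helper for illustration; note that, like the example, it
treats 1 GB as 1,000 MB (3,000 mailboxes x 50 MB = 150 GB per node):

```python
MB_PER_GB = 1000  # decimal units, matching the example's arithmetic

def san_storage_gb(mailboxes, active_nodes, quota_mb):
    """Return (GB required per active node, total GB required on the SAN),
    assuming mailboxes are divided evenly among the active nodes."""
    per_node_gb = (mailboxes // active_nodes) * quota_mb / MB_PER_GB
    return per_node_gb, per_node_gb * active_nodes

# 9,000 mailboxes, three active nodes, 50-MB mailbox quota:
print(san_storage_gb(9000, 3, 50))  # (150.0, 450.0)
```

Remember that this is the minimum figure; the text above recommends leaving
disk slots free in the SAN for future growth.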

Connections Between Your Servers and the SAN


The principle of redundancy needs to apply to all your communication links:
to avoid a single point of failure, you must plan to create duplicate, separately
routed links between your server room and your SAN. Redundancy should be a
consideration when choosing your SAN solution.

Availability Levels
Most high-end SANs are designed to offer 99.999 percent availability. Such
high availability is accomplished through redundant components, the ability to
“hot swap” components that fail, and the ability to upgrade firmware and
software without any downtime. Therefore, the total availability of your
Exchange 2000 messaging system should not be compromised by your SAN.

Disaster Recovery Planning


Your SAN needs to be fully integrated into the disaster recovery planning of
your Exchange 2000 messaging system. You must consider the possible threats
to your storage solution such as theft, or natural disasters such as fire or
earthquake, and work with your SAN engineer to reduce or eliminate these
threats.

Performance
The performance of your SAN affects the performance of your Exchange 2000
cluster. When selecting a SAN solution, you should consider the performance
ratings of each vendor’s storage solution. Ask each vendor how the performance
of their storage solutions, such as maximum I/O, compares with their competitors’
storage solutions when configured with the number of disks you envision for
your SAN solution.

 Storing Exchange Data

 SMTPQueuefolder
 ExchangeDatabasefiles
 TransactionLogfiles
 IndexingFiles

Exchange stores data in three main locations:
• Simple Mail Transfer Protocol (SMTP) queue folder
• .edb and .stm files
• Transaction log files

SMTP Queue Folder


The SMTP queue stores SMTP messages until they are written to a database
(public or private depending on the type of message), or sent to another server
or connector.
Typically, messages stored in the SMTP queue are there for a short time.
Therefore, your storage solution for the SMTP queue should prioritize
performance over capacity and reliability. However, in some situations when
downstream processes fail, the SMTP queue could be required to store a large
amount of data. For that reason, do not assume that a RAID-0 array is the best
storage solution for SMTP queues. Generally, RAID-0 is acceptable only if
mail loss is acceptable. RAID-1 is a good solution because it gives some
measure of reliability while providing adequate throughput.

.edb and .stm Files


An Exchange database consists of a rich-text .edb file and a native multimedia
content .stm file.
The .edb file stores the following items:
• All of the MAPI messages
• Tables used by the Store.exe process to locate all messages
• Checksums of both the .edb and .stm files
• Pointers to the data in the .stm file
The .stm file contains messages that are transmitted with their native Internet
content. Because access to these files is generally random, they can be placed
on the same disk volume.
As you plan your storage solution for these files, assume a certain amount of
reliability; in other words, RAID-0 is not a recommended option. After
reliability, your storage solution is based on a choice between optimizing
performance (RAID-1) and optimizing capacity (RAID-5). If possible, use
RAID-1 (or RAID-0+1) for these files.
You can store public folders on a RAID-5 array because data in public folders
is usually written once and read many times. RAID-5 provides improved read
performance.

Transaction Log Files


The transaction log files maintain the state integrity of your .edb and .stm files;
in effect, the log files are the authoritative record of your data. Each storage
group has its own set of transaction log files, and transactions are written
sequentially to the logs before being committed to the databases, which improves
performance. If a disaster occurs and you have to rebuild your server, use the
latest transaction log files to rebuild your databases. If you have the log files
and the latest backups, you can recover all your data. If you lose your log files,
however, the data is lost.
The level of disaster recovery you can achieve is determined per set of log files,
that is, per storage group. For optimal reliability and performance, keep the log
files for each storage group on their own drive. As you plan your storage
solution for the transaction log files, make integrity and reliability of the utmost
importance. Thus, a mirrored RAID-1 (or RAID-0+1) solution is recommended.

Indexing Files

Performance Monitoring

 Memory
 Virtual Memory
 MSExchangeIS\VM Largest Block Size
 MSExchangeIS\VM Total 16MB Free Blocks
 MSExchangeIS\VM Total Free Blocks
 MSExchangeIS\VM Total Large Free Block Bytes
 /3GB Switch

Ensuring that your Exchange 2000 Server clusters perform well involves setup
steps and proactive monitoring of your clusters. The following sections provide
steps to improve performance, monitor performance, and test the performance
of your Exchange 2000 clusters.

Memory
By default, Exchange 2000 uses all of the physical memory available on your
computer; however, you can restrict the amount of memory used by Exchange.
The biggest individual consumer of memory in Exchange 2000 is the Store.exe
process. On an active, production Exchange 2000 Server, it is not uncommon to
notice that the Store.exe process consumes nearly all of the server memory.
Like Exchange Server version 5.5, the Store.exe process uses a unique cache
mechanism called Dynamic Buffer Allocation (DBA). This process self-
governs how much memory it uses, and balances that with other applications
running on the server. If Exchange is the only application running, DBA
allocates more memory to itself.
The amount of memory you need in your server depends on the number of
databases, size of the databases, and number of transactions. As you create
more Exchange databases on the server, your memory requirements increase.
Database configuration also affects memory consumption. For example, the
first database in a storage group consumes the greatest amount of virtual
memory, whereas a database added to an existing storage group consumes
considerably less.
Exchange 2000 can handle up to 20 databases per server; the total storage space
is made up of a maximum of 4 storage groups and 5 databases per storage
group. Wherever possible, fill out your storage groups to the maximum number
of databases before you create a new storage group.

The advantages to filling out your storage group are:
• Reduced memory consumption
• Reduced disk overhead
However, there are a few disadvantages:
• Circular logging can only be controlled at the storage group level and
is not recommended.
• Only one backup process can take place in a single storage group at a
time. Backing up one database in a storage group will halt the online
maintenance of all other databases in the storage group.
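The fill-out-first guidance above can be expressed as a placement rule. The
function below is a hypothetical Python sketch; the four-group and
five-database maximums are Exchange 2000's documented limits:

```python
MAX_GROUPS = 4        # storage groups per Exchange 2000 server
MAX_DBS_PER_GROUP = 5  # databases per storage group

def place_databases(count):
    """Fill each storage group to five databases before opening the next;
    return the resulting database count per storage group."""
    if count > MAX_GROUPS * MAX_DBS_PER_GROUP:
        raise ValueError("Exchange 2000 supports at most 20 databases per server")
    groups = []
    while count > 0:
        size = min(count, MAX_DBS_PER_GROUP)
        groups.append(size)
        count -= size
    return groups

print(place_databases(12))  # [5, 5, 2] -> three storage groups, not four
```

Spreading the same 12 databases as [3, 3, 3, 3] would work, but would consume
more memory, which is the trade-off the text describes.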

Virtual Memory
Windows 2000 implements a virtual memory system based on a flat (linear) 32-
bit address space. Thirty-two bits of address space translates into 4 GB of
virtual memory. On most systems, Windows 2000 allocates half this address
space (the lower half of the 4-GB virtual address space, from 0x00000000
through 0x7FFFFFFF) to each process for its unique private storage and the
other half (the upper half, addresses 0x80000000 through 0xFFFFFFFF) to its
own protected operating system memory usage.
It is important to monitor the virtual memory on your Exchange 2000 clusters.
For more information about monitoring virtual memory, see the “Performance
Monitoring” section in this document.
For more information about virtual memory, see your Windows 2000 online
documentation, Microsoft Windows 2000 Server Resource Kit, and Inside
Microsoft Windows 2000.

/3GB Switch
If your computer running Exchange 2000 has 1 GB of physical memory or
more, it is very important to add the /3GB switch to the Boot.ini file on the
server so that 3 GB are available for user-mode applications. By default,
Microsoft Windows 2000 Advanced Server reserves 2 GB of virtual address
space for the kernel and allows user mode processes, such as the Exchange
2000 Store.exe process, to use 2 GB of virtual address space.
Virtual address space for a specific process is allocated at startup and increases
as more memory is used during run time. It is normal for the actual memory
usage of a process to be much less than the address space the process was
allocated.
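The address-space split described above can be illustrated numerically. The
following is a short, hypothetical Python sketch of the standard 32-bit layout
with and without the /3GB switch:

```python
GB = 1 << 30  # one gigabyte

def user_space_range(three_gb_switch=False):
    """Return (start, end) of the user-mode virtual address range as hex
    strings: 2 GB by default, 3 GB when the /3GB switch is in Boot.ini."""
    top = 3 * GB if three_gb_switch else 2 * GB
    return hex(0), hex(top - 1)

print(user_space_range())       # default layout:    ('0x0', '0x7fffffff')
print(user_space_range(True))   # with /3GB switch:  ('0x0', '0xbfffffff')
```

The extra gigabyte is exactly what keeps a busy Store.exe process from
exhausting its virtual address space, as the Important note below explains.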
Important If your computer running Exchange 2000 has 1 GB of memory or
more, it is very important that the Store.exe process does not run out of virtual
address space. If it runs out of virtual address space, memory allocation fails,
even if there is plenty of physical RAM left, and you must restart the
Information Store service.

Example
A server with 2 GB of physical RAM without the /3GB switch in the Boot.ini
file will run out of memory when the Store.exe process’s virtual address space
reaches 2 GB. Windows Task Manager will show only about 1.5 GB in use,
even though the server has effectively run out of memory.

Exchange 2000 Server Cluster Failover Performance

 ESE LogCheckpoint Depth


 IS ServiceConnections
 SMTP QueueSize

Many factors determine the speed of Exchange 2000 Server cluster failovers. If
an Exchange Virtual Server must fail over, certain tasks must be accomplished
to complete the failover. Understanding these tasks helps you configure your
Exchange 2000 Server clusters for the fastest possible failover.

Extensible Storage Engine (ESE) Log Checkpoint Depth
Storage group databases write new transactions to a log and then update the
database when it is efficient to do so. The maximum number of logs that can be
written before the data is committed to the database is called the log checkpoint
depth. By default, this depth is 20 MB. It is possible for the Exchange
Information Store service to write 20 MB of logs before it writes that
information into the actual database.

Exchange Information Store Service Connections


The number of connections into the Exchange Information Store service affects
failover time. The Exchange Information Store service performs cleanup
routines before it releases and allows failover to occur. For example, an
unloaded server that takes 100 seconds to fail over might take 120 seconds with
3,000 simultaneous Microsoft Outlook® Web Access or Microsoft Outlook
connections.

SMTP Queue Size


The number of messages in the SMTP queue is also a factor in the time it takes
an Exchange Virtual Server to fail over from one cluster node to another. This
time can be significant if the queue size is over 1,000 messages. To reduce this
time, modify the Max Handle Threshold registry key.

Backup and Recovery

 BackingUpData
 RecoveringaSingleLostServer inCluster
 RecoveringaLostCluster Quorum

It is important to back up all of the important data for your company. This
includes the contents of your users’ mailboxes and the configuration data that is
needed to operate the servers running Exchange 2000. Make sure that you have
backups of your static data, such as all software applications and management
scripts. In addition, it is advisable to make regular backups of your dynamic
data, such as all Exchange 2000 configuration data and your Exchange
databases.

Backing Up Cluster Servers

Backing Up Data on an Exchange 2000 Server Cluster Node

Use Windows 2000 Backup to back up a cluster node on which the Cluster
service is operational. On the What to Back Up screen of the wizard, select
Back up everything on my computer.
Be sure the node you back up is the owner of the cluster quorum disk. To check
the ownership, stop the Cluster service on all other nodes except the node
running Windows 2000 Backup. Then choose one of the following options:
• Select Only back up the System State data to back up the system
state, which includes the quorum.
• To back up all cluster disks owned by a node, perform the backup from
that node.

Note: During backup, Windows 2000 Backup might report the following error:
“Completed with Skipped Files.” If you examine the Windows 2000 Backup
log, you will notice that both CLUSDB and ClusDB.log failed to be backed up.
You can ignore this error; the quorum logs from the cluster quorum drive are
successfully backed up.
After you back up the cluster quorum disk on one node, you do not need to
back up the quorum on the remaining cluster nodes. As an option, you can also
back up the cluster software, cluster administrative software, system state data,
and other cluster disks on the remaining cluster nodes.

Recovering Cluster Servers

Recovering a Single Lost Server in a Cluster

If a single server in a cluster fails, Exchange resources running on the server
move to another available node in the cluster. Exchange databases remain intact
on shared storage and accessible by the Exchange Virtual Server from other
nodes in the cluster. This is a feature of clustering, and it provides reliability
when a disaster occurs on a single server in a cluster. After you move resources
to an available node in the cluster, use the following procedures to remove the
nonfunctioning node and replace it with a new node.

Removing the Lost Server Node from the Cluster

When a server suffers a disaster in a cluster and needs to be replaced by a new
node, follow these steps:
• Use Cluster Administrator to evict the lost node from the cluster.
• Use Cluster Administrator to verify for each cluster group and resource
that the evicted node no longer appears as a possible or preferred
owner.
• Physically remove the damaged node from the cluster and shared
storage.

Building a New Server for the Cluster

You do not have to build the new node as an exact replica of the lost node. You
can build an entirely new node (new computer name, new IP address, and so
on) and then join it to the cluster.

To create a new server node


• Install Windows 2000 and provide a new computer name during
installation.

• Join the same domain as before with the same administrative permissions
given to the previous Exchange administrator account.
• Set up the new computer to access the same shared storage of the original
node.

To join the new server node to the cluster


• Set up the cluster service on the newly built server.
• When asked to join a cluster, specify the cluster you want to join.

Installing Exchange on the Server Node and Moving Resources Back to the Node
After the server node rejoins the cluster, use the following steps to install
Exchange on the server node and move the resources back to the node.

To install Exchange and move resources back to the node
• Install Exchange 2000 Server, and any Exchange Service Packs that are
installed on the other node, on the new node. You must install Exchange
before Exchange resources can be moved back to the new node.
• Verify that the cluster groups and resources on the other node show the new
node as a possible or preferred owner.
• Move the Exchange resources that originally failed to the new node.

Recovering a Lost Cluster Quorum


To recover from a cluster quorum failure, you must have a cluster quorum
backup. In addition to the cluster quorum backup, you must have the following
items to recover Exchange 2000:
• Replacement hardware If hardware was permanently damaged in the
disaster, replace it with new hardware.
• Windows 2000 and Exchange 2000 installation CDs This includes all
applicable service packs and software updates as outlined in the Windows
2000 and Exchange 2000 Release Notes.
• Full backups of the system drive This includes any other logical drives
where critical applications or data was installed.
• Backups of Exchange databases This includes backups of the Information
Store databases.
• Member servers present in Active Directory If the Exchange 2000 Server
computer that you are restoring is a member server (not a domain
controller) in the domain, ensure that Active Directory still contains a
server object for it. If Active Directory does not contain the required
Exchange 2000 Server object, server recovery cannot proceed. Do not
attempt to recover Exchange 2000 Server unless the Exchange object exists
in Active Directory.
• Recent Windows 2000 system state data backup A system state backup is
a new type of backup in Windows 2000. You use Windows 2000 Backup to
back up system configuration information that a normal file system backup
does not capture, such as the Windows registry and the IIS metabase.

Restore Cluster Quorum from Backup
Before the cluster can be restarted on any nodes in the cluster, the quorum
needs to be restored.
• Use the DumpConfig tool to restore the signature of the quorum disk if the
signature has changed since it was backed up. You can find the
DumpConfig tool in Microsoft Windows 2000 Server Resource Kit.
• If the Cluster service is running, stop the service on all cluster nodes.
• Use Windows 2000 Backup to restore the system state data (which contains
the contents of the cluster quorum disk). Windows 2000 Backup puts the
contents of the cluster quorum disk in the
systemroot\cluster\cluster_backup subdirectory.
• After you restore the cluster quorum, you are prompted to restart. Instead of
restarting, run Clusrest.exe to restore the contents of the
systemroot\cluster\cluster_backup subdirectory to the cluster quorum disk.
The Clusrest.exe tool can be found in Microsoft Windows 2000 Server
Resource Kit.
• Restart the computer.

Restore Exchange 2000 Databases from Backup


After you restore the quorum and restart the nodes in the cluster, verify that the
shared disk resources can be accessed after the Cluster service starts. If the
shared disk, on which the Exchange databases reside, can be accessed and has
survived the disaster, check to see if the .edb, .stm, and log files still exist for
the Exchange Virtual Server storage groups. If the files are intact, start your
Exchange resources. If the shared drive is lost, follow these steps to restore your
Exchange databases from backup:
• Start Exchange System Manager and, for databases owned by Exchange
Virtual Servers on the cluster, select Do not mount at startup. This option
prevents the creation of new databases on the shared disk resource when
the Exchange resources start.
• Use Windows 2000 Backup to perform the steps described in the
“Recovering Databases” section of the “Disaster Recovery for Microsoft
Exchange 2000 Server” white paper. Before you restore the databases,
verify that the shared storage that the Exchange databases reside on is
available and accessible by the cluster node that currently owns the disk
resource.
• Verify that the databases are mounted with Exchange System Manager and
check the event log. Be sure to clear the Do not mount at startup check box
for each database that was successfully restored.
Note: Setup /Disasterrecovery does not work for a cluster server. You need to
remove the server and build a new node.

Recovering Databases

If the server running Exchange 2000 is still functional after a disaster,
recovering a database is a straightforward process. You can use the
Windows 2000 Backup utility to restore the databases that you want to recover.
For step-by-step procedures for recovering databases, see the Exchange 2000
online documentation.
To recover one or more databases, first make sure that the Exchange
Information Store is running. In addition, make sure that the database or
databases that you want to restore are dismounted.
Because Exchange 2000 supports multiple storage groups and multiple
databases, it is only necessary to dismount the specific database or databases
that you want to restore. This allows users to continue to access all of the other
databases in the information store. Note that each storage group has a log file
signature value. Each log file in the sequence has the signature stamped on it.
The corresponding .edb file stores this signature value in its header information,
which prevents you from accidentally replaying log files from a different
storage group against the database you are restoring.

Recovering a Database From Backup Tape
Before performing a restore from a backup tape, it is helpful to make a copy of
all existing database files, even if these files are damaged. Until your backup set
is fully restored and the restore is verified, do not assume that your database has
been successfully restored. If the attempted recovery fails, it might still be
possible to repair the existing database, even if it has been damaged.
Repairing the database should not be the first option that you consider, because
it is likely that you will lose at least some data during any repair process. The
amount of data lost will depend on the amount of data that has been corrupted.
Although statistically it is most likely that you will only lose a single message

or a single attachment, it is possible that you could lose a whole folder, a whole
mailbox, or even the entire store.

Back Up Entire Storage Groups


Although you can back up all of your databases individually, it is recommended
that you back up at least one entire storage group at one time. If you back up
databases individually, you must also back up the associated log files multiple
times in the process.

Note When you are restoring from a backup set, your current database is
overwritten as soon as the process begins. To preserve a copy, rename the
existing database files before you begin the restore process. If you do not leave
your database drive at least half empty, you will not be able to restore from
backup because you will not have enough space left for the restore.

Restoring Multiple Databases


You can restore multiple databases to multiple storage groups at the same time
by running multiple instances of the Windows 2000 Backup utility. Because
each storage group is treated as a backup set, and log files are shared, it is best
to perform one restore per storage group at a time. Additionally, you can
restore multiple databases in a storage group simultaneously, but you should
not replay the log files simultaneously. To do this, restore each backup set from
tape to its own temporary log directory to prevent the creation of two
restore.env files in the same directory. Do not select the Last Backup Set
checkbox, which triggers hard recovery. Instead, run eseutil /cc in each
temporary directory that contains restored transaction log files.
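The per-storage-group temporary directory rule above can be sketched as follows. The directory names and storage group labels are hypothetical; the point is that each restored backup set gets its own directory, so each restore.env file lands alone:

```python
import os
import tempfile

def make_restore_dirs(storage_groups, base_dir):
    """Create one temporary log directory per storage group so that each
    restored backup set keeps its own restore.env file (two sets restored
    into the same directory would overwrite each other's restore.env)."""
    paths = {}
    for sg in storage_groups:
        path = os.path.join(base_dir, f"restore_{sg}")
        os.makedirs(path, exist_ok=True)
        paths[sg] = path
    return paths

with tempfile.TemporaryDirectory() as base:
    dirs = make_restore_dirs(["SG1", "SG2"], base)
    # Hard recovery (eseutil /cc) is then run once per directory.
    print(sorted(os.path.basename(p) for p in dirs.values()))
    # prints ['restore_SG1', 'restore_SG2']
```

With the logs separated this way, you can run eseutil /cc against each directory independently rather than triggering hard recovery through the Last Backup Set checkbox.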

Recovering a Database to a Different Server

It is easy to accidentally delete the wrong mailbox, for example, if the names of
two people are similar. If you need to recover a deleted mailbox, or a deleted
item from a mailbox, and if you have not configured your Exchange 2000
Server to retain these types of items, then you may need to recover the database
to a different server. To recover a database to a different server than where the
database was originally running, the database display name and the storage
group display name must be identical to the original display names. In addition,
the restore server must exist in a different forest and must be configured
identically to the original server running Exchange 2000. The organization
names and administrative group names on the restore server must match those
on the original server.
Recovering data from the new server requires you to:
 Create an empty database on the restore server and restore over it using the
backup set from your production server.
 Create a user in the restore forest and connect that user to the mailbox that
you need to restore.
 Log on to the restore domain as the new user and copy the mailbox data to a
.pst file.
 Either log on to the production domain as the original user and copy the
information from the .pst file to the original user’s mailbox, or give the .pst
file to the original user and direct them to recover their data.

Recovering Mailboxes

You can recover a deleted mailbox by reconnecting it to a new user account.
You can recover a damaged mailbox by restoring it from a backup.

Recovering Individual Items


Because Exchange 2000 performs both backup tasks and restore tasks at the
physical page level rather than at the mailbox level, you cannot easily restore
the individual messages in a mailbox from a backup. If you want to allow users
to retrieve messages from the Deleted Items folder in Outlook, you can do so by
enabling the Recover Deleted Items feature on the server running
Exchange 2000.

Recovering Mailboxes After the
Retention Period Has Expired

When the mailbox retention option is enabled, by default the retention period is
set to 30 days. If a mailbox is deleted, and if restoration is requested within the
30-day retention period, then you can recover and reconnect that mailbox
without restoring the entire database because of the GUID associated with each
mailbox.
Every mailbox has a GUID that never changes. Users in Active Directory also
have an attribute called msExchMailboxGuid. When a user is connected to a
mailbox, their msExchMailboxGuid attribute matches the GUID associated
with the mailbox in the store.
When you delete a user account from Active Directory, although their
Exchange mailbox is not deleted, there is no longer a user account in Active
Directory with a GUID that points to that mailbox. The next time the Cleanup
Wizard or Information Store (IS) maintenance runs, the mailbox is marked as
disconnected.
Reconnecting a mailbox takes the msExchMailboxGUID value that is on the
mailbox in the store and associates it with the user account that you specify.

The Cleanup Wizard reads the msExchMailboxGUID associated with each of
the mailboxes in the store and searches Active Directory for a user account with
that GUID. If the wizard cannot find a matching user account, then it marks the
mailbox as “disconnected.”
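The matching pass that the Cleanup Wizard performs can be sketched as follows. The dictionary shapes here are hypothetical stand-ins for the store and Active Directory; only the GUID-matching logic reflects the description above:

```python
def find_disconnected_mailboxes(store_mailboxes, ad_users):
    """Return the names of store mailboxes whose GUID has no matching
    msExchMailboxGuid on any Active Directory user account; the Cleanup
    Wizard marks such mailboxes as disconnected."""
    known_guids = {user["msExchMailboxGuid"] for user in ad_users}
    return [mb["name"] for mb in store_mailboxes
            if mb["guid"] not in known_guids]

mailboxes = [{"name": "alice", "guid": "g-1"}, {"name": "bob", "guid": "g-2"}]
users = [{"msExchMailboxGuid": "g-1"}]  # bob's account was deleted
print(find_disconnected_mailboxes(mailboxes, users))  # ['bob']
```

Reconnection is the inverse operation: stamping a disconnected mailbox's GUID onto the msExchMailboxGuid attribute of the user account that you specify.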
If the retention period has expired, you can recover the mailbox by performing
the following tasks:
 Install the recovery server in a different Active Directory forest than that in
which the original server is located, because only one Exchange 2000
organization can exist in any one Active Directory forest.
You are not required to match the name of the recovery forest to that of the
original forest. Recovery servers running Exchange 2000 can exist in the
same physical network as the original organization without interfering either
with Active Directory or with the Exchange 2000 organization.
 Install Exchange 2000 on the recovery server by using the same
organization name that was used in the original organization.
 Recover the database to an administrative group in which the
legacyExchangeDN values match the legacyExchangeDN values in the
administrative group in which the database was originally located.
 Name the restore storage group and the restore logical database so that their
names match the original storage group and logical database names.
 Create a .pst file, and move all data that you need to recover into the .pst
file. Open the .pst file on the production server, and move the data back to
the appropriate location.

Creating a Disaster Recovery Plan

An effective disaster recovery plan documents standard operating procedures, is
known and understood by all administrators, and is periodically tested in a lab
environment to verify its accuracy.
When you develop a disaster recovery plan for your Exchange 2000 data, you
must address several system prerequisites and logistic considerations.

Prerequisites
The first steps in creating a disaster recovery plan are to make sure that you
have performed the following:
 Installed the correct tape drivers on each server that you want to back up.
 Supplied each server that you want to back up with an appropriate number
of backup tape sets. Depending on the amount of data that you intend to
back up, each tape set may require multiple tapes.

 Configured each server that you want to back up not to reboot automatically,
to prevent the MEMORY.DMP file from being overwritten by another,
immediate system malfunction.
 Configured each server that you want to back up to write an event to the
system log, to send an administrative alert, to write all debugging
information to %SystemRoot%\MEMORY.DMP, and to overwrite any
existing file.

Logistics
In addition to addressing all of the prerequisites in the preceding list, you must
also settle the following logistic issues:
 Maintain a copy of your backup procedures, of your configuration
information, and of all appropriate repair disks in the same room with each
server that you might need to back up. Configuration information should
include details about your operating system and about your server running
Exchange 2000, in addition to any hardware-specific data, such as
information about your Redundant Array of Independent Disks (RAID)
configuration and disk partitioning.
 Make sure that you have enough capacity on your hard disk or disks to
restore both the database and the log files. Remember that a full weekly
backup plus one week of transaction log files might be more than your
server can store. This depends partly on how many log files are generated
during each week. For example, if your server generates 2,000 log files each
week, then this will amount to 10 GB of log file space, in addition to the
space required by the database itself.
 Consider the location of your tape drives. If a tape drive is located on a
server, consider the effect that the additional load will have on the other
services on that server. Depending on the number of available tape drives,
tape backups of most servers may be preferable to backing up data across
the network.
 Remember that circular logging automatically overwrites transaction log
files after the data that those files contain has been fully committed to the
database. Although circular logging reduces disk storage space
requirements, when it is enabled you cannot perform either differential or
incremental backups, and you cannot recover to the point of failure.
 Plan to back up mailbox stores as often as possible. Ideally, you should back
up mailbox stores once each business day.
 Plan to replicate or back up critical public folders. Ideally, you should either
replicate these folders at least once per business day, or back up the public
folder store once each business day.
 Plan to keep a copy of your data backups at an off-site location.
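The log-volume arithmetic above can be checked directly. The only assumption here is the per-file size: Exchange 2000 ESE transaction log files are a fixed 5 MB each.

```python
LOG_FILE_MB = 5  # Exchange 2000 ESE transaction logs are a fixed 5 MB each

def weekly_log_space_gb(logs_per_week: int) -> float:
    """Disk space consumed by one week of transaction logs, in GB."""
    return logs_per_week * LOG_FILE_MB / 1024

# The example from the text: 2,000 log files generated in one week
print(round(weekly_log_space_gb(2000), 1))  # 9.8, i.e. roughly 10 GB
```

Add this figure to the size of the restored databases when sizing the disk that will receive a full weekly backup plus a week of logs.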

Appendix
Contents
 Overview..........................................................................................................2
 Introduction to Exchange 2000 Clustering......................................................2
Architecture of Server Clustering....................................................................3
Advantages of Server Clustering ....................................................................3
Identifying Business Needs for Using Exchange 2000 Clustering.................3
 Reviewing Key Concepts of Microsoft Windows 2000 Clustering................5
Clustering Components....................................................................................5
Cluster Communications..................................................................................7
Failover and Failback.......................................................................................8
Cluster Administrator.......................................................................................9
 Examining Key Concepts of Exchange 2000 Clustering................................9
Multimedia: Exchange 2000 Clustering..........................................................9
Hardware and Software Requirements..........................................................10
Exchange 2000 Virtual Servers.....................................................................11
 Exchange 2000 Clustering Best Practices......................................................13
Examining Available Clustering Models.......................................................13
Choosing an Appropriate Clustering Model..................................................14
Design Considerations...................................................................................16
 Designing an Appropriate Data Storage Strategy..........................................18
Examining Available Data Storage Models..................................................18
Choosing an Appropriate Data Storage Model..............................................19
Planning Considerations................................................................................19
 Storing Exchange Data.............................................................................21
Performance Monitoring................................................................................24
Exchange 2000 Server Cluster Failover Performance...................................27
Backup and Recovery....................................................................................28
Backing Up Cluster Servers............................................................29
Recovering Cluster Servers............................................................................30
Recovering Databases....................................................................................33
Recovering Mailboxes...................................................................................36
Creating a Disaster Recovery Plan................................................................38
Appendix........................................................................................................40
Cluster Networking Requirements...................42
General Requirements.........................................................................................42
Geographically Dispersed Clusters.....................................................................43
Cluster Networking Best Practices...................45
Hardware Planning Recommendations..........................................................45
Network Interface Controller Configuration Recommendations..................45
Cluster Service Configuration Recommendations........................................47
Procedures for Implementing Best Practices......................................................48
Configuring Network Interface Controllers Prior to Configuring Cluster
Service............................................................................................................48
Configuring Cluster Network Properties after Configuring Cluster Service.....49
Windows 2000...............................................................................................49

Windows .NET Server...................................................................................49
References.....................................................51
White Papers........................................................................................................51

Cluster Networking Requirements

This section describes the requirements that a server cluster places on the
network infrastructure. These requirements must be met for the server cluster
solution to function correctly.

General Requirements
This section describes the requirements for any server cluster deployment.

• The complete hardware configuration for a cluster must be selected
from the Cluster Hardware Compatibility List (HCL). The network
interface controllers (NICs) along with any other components used in
certified cluster configurations must have the Windows logo and
appear on the Microsoft Hardware Compatibility List. Note: A cluster
built from logoed components that does not appear on the cluster HCL
is NOT a certified configuration.
(Windows 2000, .NET server)

• Two or more independent networks must connect the nodes of a cluster
in order to avoid a single point of failure. The use of two local area
networks (LANs) is typical. A cluster whose nodes are connected by
only one network is not a supported configuration.
(Windows 2000, .NET server)

• Each cluster network must fail independently of all other cluster
networks. That is, two cluster networks must not have a component in
common that can cause both to fail simultaneously. For example, the
use of a multi-port NIC to attach a node to two cluster networks would
not meet this requirement in most cases because the ports are not
independent. Likewise, two networks that share a switch could also
have a single point of failure. The simplest way to ensure that your
cluster meets this requirement is to use physically independent
components to construct cluster networks.
(Windows 2000, .NET server)

• All of the adapters used to attach nodes to the same cluster network
must use the same communication settings – e.g. the same Speed,
Duplex Mode, Flow Control, and Media Type. If the adapters are
connected to a switch, the port settings of the switch must match those
of the adapters.
(Windows 2000, .NET server)

• Each cluster network must be configured as a single IP subnet whose
subnet number is distinct from those of other cluster networks. For
example, a cluster could use two networks configured with the
following subnet addresses: 10.1.x.x and 10.2.x.x with mask
255.255.0.0. Addresses may be assigned to the nodes dynamically by
DHCP, but manual configuration with static addresses is recommended
(see section Cluster Networking Best Practices). The use of Automatic
Private IP Addressing (APIPA) to configure cluster networks is not
supported. APIPA is not designed for use with computers that are
attached to multiple networks.
(Windows 2000, .NET server)
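The subnet-distinctness rule can be verified with Python's standard ipaddress module. The two subnets below are the example addresses from the text, written in CIDR form:

```python
import ipaddress

# The two example cluster networks from the text, in CIDR notation
private_net = ipaddress.ip_network("10.1.0.0/16")  # 10.1.x.x, mask 255.255.0.0
public_net = ipaddress.ip_network("10.2.0.0/16")   # 10.2.x.x, mask 255.255.0.0

# Each cluster network must be a single subnet distinct from the others
print(private_net.overlaps(public_net))  # False: the subnets are distinct
```

If overlaps() returned True for any pair of cluster networks, the configuration would violate this requirement.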

• At least two of the cluster networks must be configured to support
internal communication between cluster nodes in order to avoid a
single point of failure. That is, the roles of these networks must be
configured as either Internal Cluster Communications Only or All
Communications in Cluster Service. Typically, one of these networks
is a private interconnect dedicated to internal cluster communication
(see section Cluster Networking Best Practices).
(Windows 2000, .NET server)

• The use of NIC teaming on all cluster networks concurrently is not
supported. At least one of the cluster networks that is enabled for
internal communication between cluster nodes must not be teamed.
Typically, the unteamed network is a private interconnect dedicated to
this type of communication. The use of NIC teaming on other cluster
networks is acceptable; however, if communication problems occur on
a teamed network, Microsoft Product Support Services may require
that teaming be disabled. If this action resolves the problem or issue,
then you must seek further assistance from the manufacturer of the
teaming solution.
(Windows 2000, .NET server)

• The nodes of a cluster must belong to a single domain. The domain
configuration must meet the following requirements in order to avoid a
single point of failure in the authentication process:

o The domain must have at least two domain controllers.
o If DNS is used to resolve names in the domain, then at least
two DNS Servers must be deployed as well. The DNS servers
should support dynamic updates.
o Each domain controller and cluster node must be configured
with a primary and at least one secondary DNS Server. If the
domain controllers are also DNS Servers, then each should
point to itself for primary DNS resolution and to the other
DNS Servers for secondary resolution.
o At least two domain controllers must be configured to be
Global Catalog Servers.

(Windows 2000, .NET server)

Geographically Dispersed Clusters

Geographically dispersed clusters have the following additional requirements:

• The nodes in a cluster may be on different physical networks; however,
the private and public network connections between cluster nodes must
appear as a single, non-routed LAN using technologies such as virtual
LANs (VLANs).
(Windows 2000, .NET server)

• The round-trip communication latency between any pair of cluster
nodes must be no more than 500 milliseconds.
(Windows 2000, .NET server)

• As with LANs, each VLAN must fail independently of all other cluster
networks.
(Windows 2000, .NET server)

• Due to the complexity of geographically-separated clusters, you need
to involve the hardware manufacturer or hardware vendor in any issue.
Often, there is third-party software and drivers that are required for the
clusters to function. Microsoft Product Support Services may not be
aware of how these components interact with Windows Clustering.
(Windows 2000, .NET server)

Cluster Networking Best Practices

This section describes network best practices for deploying a server cluster.

Hardware Planning
Recommendations
• Use identical NICs in all cluster nodes; that is, each adapter should be
the same make, model, and firmware version.
(Windows 2000, .NET server)

• Reserve one network exclusively for internal communication between
cluster nodes. This is the private network. Use other networks for
communication with clients. These are public networks. Do not use
NIC teaming on the private network.
(Windows 2000, .NET server)

Network Interface Controller
Configuration Recommendations
• Manually select the speed and duplex mode of each cluster NIC. Do
not use automatic detection. Some adapters drop packets while
automatically negotiating the settings of the network. All adapters on a
network must be configured to use the same speed and duplex mode. If
the adapters are connected to a switch, ensure that the port settings of
the switch match those of the adapters.
(Windows 2000, .NET server)

• Use static IP addresses for all nodes on the private network. Choose the
addresses from one of the following ranges:
• 10.0.0.0 - 10.255.255.255 (Class A network)
• 172.16.0.0 - 172.31.255.255 (Class B network)
• 192.168.0.0 - 192.168.255.255 (Class C network)
(Windows 2000, .NET server)
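A quick way to confirm that a chosen private-network address falls inside one of these ranges is Python's standard ipaddress module. The sample addresses below are hypothetical:

```python
import ipaddress

# The three private ranges listed above, in CIDR notation
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 - 192.168.255.255
]

def is_private(address: str) -> bool:
    """True if the address falls inside one of the listed private ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_private("10.10.10.1"))   # True: suitable for a private interconnect
print(is_private("131.107.1.1"))  # False: a public address
```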

• Use static IP addresses for all nodes on all public networks. The use of
dynamic configuration via DHCP is not recommended. Failure to
renew a lease could disrupt cluster operation.
(Windows 2000, .NET server)

• Do not configure DNS servers, WINS servers, or a default gateway on
the private NICs.
(Windows 2000, .NET server)

• You should configure WINS or DNS servers on the public NICs. If
network name resources will be deployed on the public network, then
the DNS servers should support dynamic updates; otherwise, name-to-
IP address mappings will not be promptly updated during failover.
(Windows 2000, .NET server)

• Do configure a default gateway on the public NICs, if cluster nodes use
those NICs to communicate with clients or services on remote subnets.
Be aware that in clusters with multiple public networks, configuring
nodes with a default gateway on more than one network can cause
routing problems.
(Windows 2000, .NET server)

• On each cluster node, set the network connection order to be:
o Public network(s) – highest priority
o Private network
o Remote access connections – lowest priority
(Windows 2000, .NET server)

• Change the default name for each network connection to clearly
identify the use of each network. For example, you might change the
name of the private network connection from “Local Area Connection
(x)” to “Private Cluster Network”.
(Windows 2000, .NET server)

• The private LAN should be isolated. Only nodes that are part of the
cluster should be connected to the private subnet. Where there are
several clusters, using the same subnet for the private network for all of
the clusters is reasonable. You should not, however, put other network
infrastructure, such as domain controllers, WINS servers, and DHCP servers,
on the private subnet.
(Windows 2000, .NET server)

• To create an isolated network segment, you may use a switch capable
of creating VLAN segments, a hub, or in the case of a two-node server
cluster, you may use a crossover cable.
(Windows 2000, .NET server)

• You should disable the default media sense policy for TCP/IP to ensure
that, if cables are disconnected or media sense is lost, the TCP/IP
configuration and corresponding cluster network configuration are not
torn down. Add the following registry value to each node:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\
Tcpip\Parameters
Value Name: DisableDHCPMediaSense
Data Type: REG_DWORD
Data: 1
(Windows 2000)
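The same registry change can be captured in a .reg file and imported on each node; the file name is arbitrary, and the key and value are exactly those listed above:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
"DisableDHCPMediaSense"=dword:00000001
```

Importing the file (for example, by double-clicking it or running regedit /s) applies the value; a restart of the node is required for the TCP/IP stack to pick it up.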

Cluster Service Configuration
Recommendations
• Set the private network role to Internal Cluster Communications Only.
Verify that the role for each public network is set to “All
Communications” (this is the default value).
(Windows 2000, .NET server)

Procedures for Implementing Best
Practices
Configuring Network Interface
Controllers Prior to
Configuring Cluster Service
• Configure the speed of each NIC as follows:
o Open Control Panel. Open Network Connections. Right-click on
the appropriate connection object and select Properties. Click
Configure, and then Advanced.
o Set the desired network speed using the drop-down list.
o Ensure that other settings, such as Duplex Mode, are also the same
for all adapters on a network.
• Configure the Internet Protocol settings of the private NIC as follows:
o Return to Network Connections. Open the Properties of the
appropriate connection object.
o Ensure that the Internet Protocol (TCP/IP) check box is selected.
o Highlight Internet Protocol and select Properties.
o Click the radio-button for Use the following IP address and enter a
static address.
o Ensure that there is no default gateway configured for the private
network.
o Verify that there are no values defined in the Use the following
DNS server addresses box. Click Advanced. On the DNS tab,
verify that there are no values defined. Make sure that the Register
this connection's addresses in DNS and Use this connection's DNS
suffix in DNS registration check boxes are cleared. Note that if the
cluster node is a DNS server, then IP address 127.0.0.1 will appear
in the list and should remain there.
• Configure the network connection order as follows:
o Return to Network Connections. Select Advanced. Select
Advanced Settings. In the Connections box, order the available
network connections as follows:
 Public network(s)
 Private network
 Remote access connections
• Change the default names of the network connections as follows:
o Return to Network Connections. Right click on a network
connection object. Select Rename. Edit the name value.
o The name used for the connection object that represents a network,
such as the private network, should be the same on all nodes. If the
names of the connection objects are not the same, Cluster Service
will choose one and change the others to match it.

Configuring Cluster Network Properties
after Configuring Cluster Service
Windows 2000
When you install the cluster software, the Configuring Cluster Networks dialog
box is presented for each network (in arbitrary order). For the public network,
make sure that the name and IP address match the public network interface.
Check the box "Enable this network for cluster use" and select the option "All
communications (mixed network)". For the private network, make sure that the
name and IP address match the private network interface. Check the box
"Enable this network for cluster use" and select the option "Internal cluster
communications only".

The default configuration during installation is to configure your public
network adapter for "All Communication" and the private (heartbeat) network
adapter for "Internal Cluster Communications." Microsoft recommends that you
keep this default configuration. For your cluster to install and function properly,
you must configure at least one of the networks for "Internal Cluster
Communications" or "All Communications."

Windows .NET Server
The Windows .NET server cluster configuration wizard does not provide a way
to change the network settings during configuration. The default setting for all
networks is to enable “All Communications”. This will ensure that the cluster
can operate correctly. To conform to best practices, you should make one of the
networks into a private network and make the private network the highest
priority for internal cluster communications as follows:

• Set the private network role to be “Internal Cluster Communications” as
follows:
o In Cluster Administrator, double-click the cluster name. You will
see a folder named “Cluster Configuration”.
o Double-click the Cluster Configuration folder, and then double-click
the “Networks” folder to see all the available cluster networks.
o Select the network that you are configuring for private cluster
communications, and choose Properties.
o In the properties for the network, you will see roles such as “Client
access only”. For the private network, make sure that the “Enable
this network for cluster use” checkbox is checked and that the
“Internal Communications Only” role is selected.
• Configure the private network to be the highest priority for internal cluster
communication as follows:
o In Cluster Administrator, select the cluster and choose Properties.
o From the Network Priority tab, verify that the private network is
listed at the top.
o If it is not, use the Move Up button to increase its priority.

IPSec
Although it is possible to use Internet Protocol security (IPSec) for applications
that can failover in a server cluster, IPSec was not designed for failover
situations and we recommend that you do NOT use IPSec for applications in a
server cluster.

The primary issue is that Internet Key Exchange (IKE) Security Associations
(SAs) are not transferred from one server to the other if a failover occurs
because they are stored in a local database on each node.

In a connection that is protected by IPSec, an IKE SA is created in phase-I
negotiations. Two IPSec SAs are created in phase II. A time-out value is
associated with the IKE and IPSec SAs. If master key Perfect Forward Secrecy
is not used, the IPSec SAs are created by using key material from the IKE SAs.
If this is the case, the client must wait for the default time-out or lifetime
period for the inbound IPSec SA to expire and then wait for the time-out or
lifetime period that is associated with the IKE SA.

The default time-out for the Security Association Idle Timer is 5 minutes. In
the event of a failover, clients using IPSec will not be able to reestablish
connections until at least 5 minutes after all resources are back online.

Although IPSec is not optimally designed for a clustered environment, it may
be used if your business need for secure connectivity outweighs client
downtime in the event of a failover.

NetBIOS
In Windows .NET Server, the cluster service does not require NetBIOS;
however, a number of services are affected if NetBIOS is disabled. You should
be aware of the following:

• By default, when a cluster is configured, NetBIOS is enabled on the
cluster IP Address resource. Once the cluster is created you should
disable NetBIOS by unchecking the check box on the parameters page
of the Cluster IP Address resource property sheet.
• When you create additional IP Address resources you should uncheck
the NetBIOS checkbox.
• With NetBIOS disabled, you will not be able to use the “Browse”
function in Cluster Administrator when opening a connection to a
cluster. Cluster Administrator uses NetBIOS to enumerate all clusters
in a domain.
• File and Print services are disabled: no virtual names are added as
redirector endpoints.
• Cluster Administrator does not work if a cluster name is specified.
Cluster Administrator calls GetNodeClusterState which uses the
remote registry APIs which, in turn, use named pipes based on the
virtual name.

References

White Papers
• Deploying Microsoft Exchange 2000 Server Service Pack 2 Clusters
http://www.microsoft.com/exchange/techinfo/deployment/2000/E2KSP2_Cluster.asp

• Best Practices for Deploying Full-Text Indexing
http://www.microsoft.com/Exchange/techinfo/deployment/2000/BestIndexing.asp

• Windows 2000 Clustering: Performing a Rolling Upgrade
http://www.microsoft.com/technet/win2000/win2ksrv/technote/rollupgr.asp

• Storage Solutions for Microsoft Exchange 2000 Server
http://www.microsoft.com/Exchange/techinfo/deployment/2000/E2KStorage.asp

• Microsoft Exchange 2000 Front-End Server and SMTP Gateway Hardware
Scalability Guide
http://www.microsoft.com/exchange/techinfo/administration/2000/E2k_Fescalability.asp

• Microsoft Exchange 2000 Server Back-End Mailbox Scalability
http://www.microsoft.com/exchange/techinfo/administration/2000/be_scalability.asp

• Exchange 2000 Server LoadSim Tool
http://www.microsoft.com/exchange/downloads/2000/loadsim.asp

• Exchange 2000 Server ESP Tool
http://www.microsoft.com/Exchange/downloads/2000/ESP.asp

• Exchange 2000 Server Capacity and Topology Calculator
http://www.microsoft.com/exchange/techinfo/planning/2000/exchangecalculator.asp

• Introducing Windows 2000 Clustering Technologies
http://www.microsoft.com/windows2000/techinfo/howitworks/cluster/introcluster.asp

• Exploring Windows Clustering Technologies
http://www.microsoft.com/windows2000/technologies/clustering/default.asp
