
Windows Server 2008 SP2 and Windows Server 2008 R2 SP1 on HP Integrity Servers


Failover Cluster Installation and Configuration Guide

HP Part Number: T8704-96012


Published: April 2011
© Copyright 2011 Hewlett-Packard Development Company, L.P.
Legal Notices
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft®, Windows®, and Windows NT® are trademarks of Microsoft Corporation in the U.S. and other countries.

Intel® and Itanium® are registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java® is a U.S. trademark of Sun Microsystems, Inc.

UNIX® is a registered trademark of The Open Group.


Table of Contents
About This Document.........................................................................................................7
Intended Audience.................................................................................................................................7
New and Changed Information in This Edition.....................................................................................7
Document Organization.........................................................................................................................7
Typographic Conventions......................................................................................................................7
Related Information................................................................................................................................8
Publishing History..................................................................................................................................8
HP Encourages Your Comments............................................................................................................9

1 Introduction...................................................................................................................11
Clustering Overview.............................................................................................................................11
Cluster Terminology.............................................................................................................................12
Nodes...............................................................................................................................................12
Cluster Service.................................................................................................................................12
Shared Disks....................................................................................................................................12
Resources.........................................................................................................................................12
Resource Dependencies...................................................................................................................13
Services and Applications...............................................................................................................13
Quorums..........................................................................................................................................13
Heartbeats........................................................................................................................................16
Virtual Servers.................................................................................................................................16
Failover............................................................................................................................................16
Failback............................................................................................................................................16

2 Installing and Configuring the Cluster.......................................................................17


An Overview of the Installation and Configuration Process...............................................................17
Gathering Required Installation Information.......................................................................................19
Installing the Cluster.............................................................................................................................20
Additional Configuration Topics..........................................................................................................28
NIC Teaming in Clustered Environments.......................................................................................28
Troubleshooting the Cluster.................................................................................................................29
What to Do if Validation Tests Fail..................................................................................................29
Validation Issues for Multi-site or Geographically Dispersed Failover Clusters............................29
Troubleshooting...............................................................................................................................29
Additional Clustering Tasks.................................................................................................................30
Upgrading Individual Nodes in the Cluster...................................................................................30
Evicting a Node from the Cluster....................................................................................................30
Destroying a Cluster........................................................................................................................31

List of Figures
1-1 Disk Only example........................................................................................................................14
1-2 Node Majority example.................................................................................................................14
1-3 Node and File Share Majority example.........................................................................................15
1-4 Node and Disk Majority example.................................................................................................15
2-1 Example cluster hardware cabling scheme (2 node cluster).........................................................18
2-2 Server Manager window...............................................................................................................20
2-3 Add Features Wizard window......................................................................................................21
2-4 Failover Cluster Management window.........................................................................................22
2-5 Validate Configuration Wizard window.......................................................................................23
2-6 Validate Configuration Wizard window.......................................................................................23
2-7 Validation Wizard result symbols.................................................................................................24
2-8 Create Cluster Wizard window.....................................................................................................25
2-9 Failover Cluster Management window.........................................................................................25
2-10 Configure Service menu................................................................................................................26
2-11 High Availability wizard...............................................................................................................27
2-12 High Availability wizard...............................................................................................................27
2-13 High Availability wizard...............................................................................................................28
2-14 Failover Cluster Management window.........................................................................................31
2-15 Failover Cluster Management window.........................................................................................31

List of Tables
2-1 Installation and Configuration Input............................................................................................19

About This Document
This document describes how to install and configure Microsoft Failover Clusters on HP Integrity
servers running Microsoft Windows Server 2008 with Service Pack 2 (SP2) or Microsoft Windows
Server 2008 R2 SP1.
The document printing date and part number indicate the document’s current edition. The
printing date changes when a new edition is printed. Minor changes may be made at reprint
without changing the printing date. The document part number changes when extensive changes
are made.
Document updates may be issued between editions to correct errors or document product changes.
To ensure that you receive the updated or new editions, you should subscribe to the appropriate
product support service. See your HP sales representative for details.
To find the latest version of this document, or other documents supporting Windows Server
2008 R2 SP1 and Windows Server 2008 SP2 on HP Integrity Servers, see:
• http://www.hp.com/go/windows-on-integrity-docs (to locate documents by operating system)
• http://www.hp.com/go/integrity_servers-docs (to locate documents by server model number)

Intended Audience
This document is intended for system administrators and HP support personnel responsible for
installing, configuring, and managing Microsoft Failover Cluster solutions using HP Integrity
servers.
This document is not a tutorial.

New and Changed Information in This Edition


This document includes the following changes since its last release:
• About This Document: added Service Pack 1 (SP1) support for Windows Server 2008 R2
• Publishing History: multiple changes to table content
• Introduction: in first Note, added Service Pack 1 (SP1) for Windows Server 2008 R2
• An Overview of the Installation and Configuration Process: deleted sentence and URL from Step
11

Document Organization
This document is organized as follows:

“Introduction” (page 11) Describes cluster concepts and terminology.


“Installing and Configuring the Cluster” (page 17) Describes how to install and configure clusters.

Typographic Conventions
This document uses the following typographical conventions:
WARNING A warning calls attention to important information that, if not understood
or followed, will result in personal injury or nonrecoverable system
problems.
CAUTION A caution calls attention to important information that, if not understood
or followed, will result in data loss, data corruption, or damage to
hardware or software.
IMPORTANT This alert provides essential information to explain a concept or to
complete a task.
NOTE A note contains additional information to emphasize or supplement
important points of the main text.
KeyCap The name of a keyboard key or graphical interface item (such as buttons,
tabs, and menu items). Note that Return and Enter both refer to the
same key.
Computer output Text displayed by the computer.
User input Commands and other text that you type.
Command A command name or qualified command phrase.
Ctrl+x A key sequence. A sequence such as Ctrl+x indicates that you must hold
down the key labeled Ctrl while you press another key or mouse button.

Related Information
You can find more information about clustering with HP Integrity servers, server management,
and software in the following locations:
• For a collection of links to various overviews, white papers, and configuration documents
supporting failover clustering on Windows Server 2008:
http://www.microsoft.com/downloads/details.aspx?familyid=75566F16-627D-4DD3-97CB-83909D3C722B&displaylang=en
At the time of this publication, the following documents were available:
— Microsoft High Availability Strategy White Paper
— Overview of Failover Clustering with Windows Server 2008
— Windows Server 2008 Failover Clustering Architecture Overview
— Windows Server 2008 Failover Clustering Datasheet
— Windows Server 2008 Multi Site Clustering

Publishing History
The document part number and publication date indicate the document’s current edition. The
publication date will change when a new edition is printed. Minor changes may be made at
reprint without changing the publication date. The document part number will change when
extensive changes are made. Document updates may be issued between editions to correct errors
or document product changes. To ensure that you receive the updated or new editions, you
should subscribe to the appropriate product support service. See your HP sales representative
for details.

Manufacturing Part Number: T8704-96012
Supported Operating Systems: Microsoft Windows Server 2008 with Service Pack 2 (SP2) for Itanium-based Systems; Microsoft Windows Server 2008 R2 with Service Pack 1 (SP1) for Itanium Edition
Supported Smart Setup Version: Version 7.1
Supported Products (Servers): BL860c, BL870c, BL860c i2, BL870c i2, BL890c i2, rx2800 i2, rx2660, rx3600, rx6600, rx7640, rx8640, Superdome sx2000
Publication Date: April 2011

HP Encourages Your Comments


HP encourages your comments concerning this document. We are committed to providing
documentation that meets your needs. Send any errors found, suggestions for improvement, or
compliments to:
docsfeedback@hp.com
Please include the document title, manufacturing part number, and any comment, error found,
or suggestion for improvement you have concerning this document.

1 Introduction
In Windows Server® 2008, the improvements to failover clusters (formerly known as server
clusters) include simplified creation and configuration, greater security, and enhanced stability.
Cluster setup and management are much easier; security and networking in clusters have been
improved, as has the way a failover cluster communicates with its storage systems.

NOTE: Throughout this document, “Windows Server 2008” refers to both the “Windows Server
2008 with Service Pack 2 (SP2)” and “Windows Server 2008 R2 with Service Pack 1 (SP1)” versions
of the operating system, unless specifically noted otherwise.
Other new features in failover clustering include:
• A new validation wizard verifies that your system, storage, and network configurations are
suitable for creating a cluster.
• Support for GUID partition table (GPT) disks in cluster storage. Unlike master boot record
(MBR) disks, GPT disks can have partitions larger than two terabytes and provide built-in
redundancy in the way partition information is stored.
• Improvements to interfaces for working with shared folders, simplifying their configuration
and management.
• Improvements to management interfaces.
See the following documents for basic, introductory information about clustered solutions for
Windows Server 2008:
• For a summary of failover clustering features and functionality:
Windows Server 2008 SP2: http://technet.microsoft.com/en-us/library/cc770625(WS.10).aspx
Windows Server 2008 R2: http://technet.microsoft.com/en-us/library/dd621586(WS.10).aspx
• To see a list of frequently asked questions about the Failover Cluster Configuration Program:
http://www.microsoft.com/windowsserver2008/en/us/clustering-faq.aspx
• For a collection of high-level Help topics regarding the configuration and management of
failover clusters:
http://technet2.microsoft.com/windowsserver2008/en/library/6c5b0145-dee7-47b1-b29c-4e52b146ee341033.mspx

Clustering Overview
Clustering in Windows Server 2008 has been radically redesigned to simplify and streamline
cluster creation and administration. Rather than worrying about groups and dependencies,
administrators can create an entire cluster in one seamless step via a wizard interface. All you
have to do is supply a name for the cluster and the servers to be included, and the wizard takes
care of the rest. You do not have to be a cluster specialist or have in-depth knowledge
of failover clusters to successfully create and administer Windows Server 2008 failover clusters.
The goal of Windows Server 2008 failover clustering is to make it possible for the non-specialist
to create a failover cluster that works. Organizations using previous versions of failover clustering
often had staff dedicated to installation and management of failover clusters. This significantly
increased the total cost of ownership for failover cluster services. With the introduction of
Windows Server 2008 failover clusters, even an IT generalist without any special training in
failover cluster services will be able to create a server cluster and configure the cluster to host
redundant services, and the configuration will work. This means a lower total cost of ownership
for you.
You will not need an advanced degree to get failover clusters working. The main reason for this
change is that the new administrative interface does the heavy lifting for you. In previous versions
of failover clustering, you had to learn an unintuitive, cluster-centric vocabulary and then try to
figure out what those words really meant. There is no need to learn the intricacies of cluster
vocabulary with Windows Server 2008 failover clustering. Instead, configuration is task based.
You are asked if you want to create a highly available file server, Dynamic Host Configuration
Protocol (DHCP) server, Windows Internet Name Service (WINS) server, or other type of server
and then the wizard walks you through the process.

Cluster Terminology
A working knowledge of clustering begins with the definition of some common terms. The
following terms are used throughout this document.

Nodes
Individual servers or members of a cluster are referred to as nodes or systems (the terms are
used interchangeably). A node can be an active or inactive member of a cluster, depending on
whether or not it is currently online and in communication with the other cluster nodes. An
active node can act as host to one or more cluster groups.

Cluster Service
Cluster service refers to the collection of clustering software on each node that manages all
cluster-specific activity.

Shared Disks
Shared disks are devices (normally hard disk drives) that the cluster nodes are attached to by a
shared bus. Applications, file shares, and other resources to be managed by the cluster are stored
on the shared disks.

Resources
Resources are physical or logical entities (such as file shares) managed by the cluster software.
Resources can provide a service to clients or be an integral part of the cluster. Examples of
resources are physical hardware devices such as disk drives, or logical items such as IP addresses,
network names, applications, and services. Resources are the basic unit of management by the
cluster service. A resource can only run on a single node in a cluster at a time, and is online on
a node when it is providing its service on that node.
At any given time, a resource can exhibit only one of the following states:
• Offline
• Offline pending
• Online
• Online pending
• Failed
When a resource is offline, it is unavailable for use by a client or another resource. When a
resource is online, it is available for use. The initial state of any resource is offline. When a resource
is in one of the pending states, it is in the process of being brought online or taken offline.
If the resource cannot be brought online or taken offline within a specified amount of time, it is
set to the failed state. You can specify the amount of time that the cluster service waits before
failing the resource by setting its pending timeout value in the Failover Cluster Management tool.
Resource state changes can occur either manually (when you use the Failover Cluster Management
tool to make a state transition) or automatically (during the failover process). When a service
or application fails over, the state of each resource is altered according to its dependencies
on the other resources in that service or application.
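For administrators who prefer the command line, resource states and the pending timeout described above can also be inspected and adjusted with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2 SP1 (the module is not available on Windows Server 2008 SP2, which uses the cluster.exe utility). The following is a minimal sketch; the resource name used here is hypothetical.

    Import-Module FailoverClusters

    # List every cluster resource with its current state (Online, Offline, Failed, and so on).
    Get-ClusterResource | Format-Table Name, State, OwnerGroup, OwnerNode

    # Give one resource three minutes (the value is in milliseconds) to come online or go
    # offline before the cluster service marks it as Failed.
    $res = Get-ClusterResource -Name "FS1 File Server"
    $res.PendingTimeout = 180000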

Resource Dependencies
A dependency is a reliance between two resources that makes it necessary for both resources to
run on the same node (for example, a Network Name resource depending on an IP address).
The only dependency relationships that cluster service recognizes are relationships between
resources. Cluster service cannot be told, for example, that a resource depends on a Windows
Server 2008 service; the resource can only be dependent on a resource representing that service.
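As an illustration of the dependency rules above, the following hedged sketch uses the FailoverClusters PowerShell module (Windows Server 2008 R2 SP1) to view and add a dependency; all resource names are hypothetical examples.

    Import-Module FailoverClusters

    # Show what a Network Name resource currently depends on (typically an IP Address resource).
    Get-ClusterResourceDependency -Resource "FS1 Network Name"

    # Make a file share resource depend on its disk, so the disk is brought online
    # before the share and both run on the same node.
    Add-ClusterResourceDependency -Resource "FS1 File Share" -Provider "Cluster Disk 2"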

Services and Applications


Services and applications are managed as single units for configuration and recovery purposes.
If a resource depends on another resource, both resources must be a member of the same service
or application. For example, for a file share resource, the service or application containing the file
share must also contain the disk resource and the network resources (such as the IP address and
NetBIOS name) that clients use to access the share. All resources within a service or
application must be online on the same node in the cluster.

NOTE: During failover, entire services and applications are moved from one node to another
node in the cluster. A single resource cannot fail over from one node to another independently
of its service or application.

Quorums
The Windows Server 2008 failover clustering quorum model is entirely new and represents a
blend of the earlier shared disk and majority node set models. In Windows Server 2008 failover
clustering there are now four ways to establish a quorum.
The following is a list of the different quorum types and their characteristics:
• Disk Only – This is the traditional MSCS quorum model, where a shared quorum disk must
be online and nodes must be able to communicate with that disk. In this configuration, the
disk is the master. The nodes have no votes, and the cluster stays up even when only one
node can talk to the disk.

Figure 1-1 Disk Only example

• Node Majority (similar to the earlier Majority Node Set model) – This type of quorum is
optimal for clusters having an odd number of nodes. In this configuration, only the nodes
have votes; the shared storage does not have a vote. A majority of votes is needed to operate
the cluster.

Figure 1-2 Node Majority example

• Node and File Share Majority – This type of quorum is optimal for clusters having an even
number of nodes when a shared witness disk is not an option. Other characteristics include
the following:
— each node and the file share “witness” gets a vote
— it does not require a shared disk to reach a quorum
— the file share has no special requirements
— the file share should be located at a third site, making this type of quorum the best
solution for geographically dispersed clusters

Figure 1-3 Node and File Share Majority example

• Node and Disk Majority – This type of quorum is optimal for clusters having an even
number of nodes. Each node and the witness disk gets a vote, and it requires that each node
can communicate with the disk. This cluster can survive the loss of any one vote.

Figure 1-4 Node and Disk Majority example

The concept of quorum in Windows Server 2008 moves away from the requirement for a shared
storage resource. Quorum now refers to a number of votes that must constitute a majority.
Depending on the quorum model, the nodes and a witness (disk or file share) each get a vote.
This helps eliminate failure points in the old model, where it was assumed that the disk would
always be available; if the disk failed, the cluster failed.
In Windows Server 2008 failover clustering the disk resource that gets a vote is no longer referred
to as a quorum disk; now it is called the witness disk. With the new quorum models, the cluster
can come online even if the witness disk resource is not available.
The No Majority: Disk Only model behaves similarly to the old quorum disk model: if the
quorum disk fails, the cluster cannot come online, so the disk remains a single point of failure.
The Node Majority model behaves similarly to the Majority Node Set model. This model requires
three or more nodes, and there is no dependence on witness-disk availability. The disadvantage
of this model is that it is not suitable for a two-node cluster, because a majority of nodes is no
longer possible once one node fails.
The Node and File Share Majority and the Node and Disk Majority models are similar. In the
Node and Disk Majority model, both the nodes and the witness disk are allowed to vote, and the
cluster comes online as long as a majority of votes is reached, regardless of the status of the
disk resource. In the Node and File Share Majority model, a file share witness replaces the witness
disk as the tie-breaking vote; because the file share can be located at a third site, this model is an
excellent solution for geographically dispersed multi-site clusters. In the Node and Disk Majority
quorum model, the disk resource is a shared disk known as the witness disk.
Failover cluster administrators can select the quorum model of choice, depending on the
requirements of the clustered resource. The quorum model should be selected after the cluster
is first created and prior to putting the cluster into production.
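In addition to the Failover Cluster Management tool, the quorum configuration can be viewed and changed from the command line. The following is a minimal sketch assuming the FailoverClusters PowerShell module available with Windows Server 2008 R2 SP1; the disk and share names are hypothetical.

    Import-Module FailoverClusters

    # Show the current quorum model and witness resource (if any).
    Get-ClusterQuorum

    # Switch a two-node cluster to Node and Disk Majority, using a shared disk as the witness.
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

    # Alternative for multi-site clusters that have no shared witness disk:
    # Set-ClusterQuorum -NodeAndFileShareMajority "\\witness-server\ClusterWitness"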

Heartbeats
Heartbeats are network packets periodically broadcast by each node over the private cluster
network. Heartbeats inform other nodes of a single system's health, configuration, and network
connection status. When a node's heartbeat messages are not received by the other nodes as
expected, the cluster service interprets this as a node failure, and failover begins.

Virtual Servers
Groups that contain an IP address resource and a network name resource (along with other
resources) are published to clients on the network under a unique server name. Because these
groups appear as individual servers to clients, they are called virtual servers. Users access
applications or services on a virtual server the same way they access applications or services on
a physical server. They do not need to know that they are connecting to a cluster and have no
knowledge of which node they are connected to.

Failover
Failover is the process of moving a group of resources from one node to another in the case of a
failure. For example, in a cluster where Microsoft Internet Information Server (IIS) is running on
node A and node A fails, IIS fails over to node B of the cluster.

Failback
Failback is the process of returning a resource or group of resources to the node on which it was
running before it failed over. For example, when node A comes back online, IIS can fail back
from node B to node A.
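Failover and failback can also be triggered manually, which is useful when testing a new cluster. The following is a minimal sketch using the FailoverClusters PowerShell module (Windows Server 2008 R2 SP1); the group and node names are hypothetical.

    Import-Module FailoverClusters

    # See which node currently owns each service or application (cluster group).
    Get-ClusterGroup | Format-Table Name, OwnerNode, State

    # Move a group to another node, simulating a planned failover.
    Move-ClusterGroup -Name "FileServer1" -Node "NodeB"

    # Later, move it back to the original node (failback).
    Move-ClusterGroup -Name "FileServer1" -Node "NodeA"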

2 Installing and Configuring the Cluster
This chapter provides installation and configuration directions for clustered systems using HP
Integrity servers and Microsoft Windows Server 2008, IA64 Edition.

An Overview of the Installation and Configuration Process


To install and configure your cluster, you must complete the following steps:
1. Verify your hardware. All hardware components that comprise a cluster configuration must
have earned a Microsoft logo for Windows Server 2008 and be listed in the Windows Server
Catalog. However, complete Windows Server 2008 cluster solutions are not listed in the
Windows Server Catalog; instead, the complete configuration must pass the cluster validation
tests (see “Troubleshooting the Cluster” (page 29)).
For more information about the Microsoft Windows Server Catalog, see:
http://www.windowsservercatalog.net/
First, you will need to select the operating system and storage platform for your clustering
solution. Then you will need to verify that you have two or more supported HP Integrity
servers, supported Fibre Channel or SAS adapters, two or more supported network adapters,
two or more supported Fibre Channel host bus adapters (HBAs), two supported Fibre
Channel switches, and one or more supported shared storage enclosures. Also verify that
you have the required drivers for these components.
For more information about the Windows logo program, see:
http://www.microsoft.com/whdc/winlogo/hwrequirements.mspx
For more information about Microsoft's support policy for Windows Server 2008 failover
clusters, see:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;943984
For more information about Microsoft's support policy for server clusters, the Hardware
Compatibility List, and the Windows Server Catalog, see:
http://support.microsoft.com/kb/309395
2. (This step applies only to systems where the Windows Server 2008 operating system is NOT preloaded,
per the purchase agreement.)
Use the Microsoft Windows Server 2008, IA64 Edition CD to install the OS on each of the
nodes that will make up the clustered system. For more information about this step, see the
appropriate “Installation (Smart Setup) Guide, Windows Server 2008” document at:
http://docs.hp.com/en/hw.html#Windows%2064-bit%20on%20HP%20Integrity%20Servers
3. Use the Smart Setup CD to install the Support Pack on each node. This installs or updates
the system firmware and operating system drivers. Insert the Smart Setup CD, click the
Support Pack tab, and follow the onscreen instructions.
4. Use the Smart Update CD (if shipped with your system) on each node to install any Microsoft
hot fix updates or security patches that have been published for the operating system.
5. Locate your HP Storage Enclosure configuration software CD.
6. Locate your HP Storage Enclosure Controller firmware, and verify you have the latest
supported version installed.
7. Locate your HP StorageWorks MultiPath for Windows software.



NOTE: You must use MultiPath software if you have redundant paths connected to your
storage. Installing more than one HBA per node provides multiple connections between the
cluster nodes and your shared storage (see Figure 2-1). Multiple HBAs, along with MultiPath
software, are highly recommended because they provide continuous access to your storage
system and eliminate single points of failure.

8. Locate your HP Fibre Channel switch firmware, and verify that you have the latest supported
version installed.
9. Verify that you have sufficient administrative rights to install the OS and other software
onto each node.
10. Verify that all of the required hardware is properly installed and cabled (see Figure 2-1).

NOTE: Figure 2-1 is an example only. It might not represent the actual cabling required
by your system.

Figure 2-1 Example cluster hardware cabling scheme (2 node cluster)

11. Determine the input parameters required to install your clustered system and record them
in the table in “Gathering Required Installation Information” (page 19).
12. Go to “Installing the Cluster” (page 20) for installation instructions.
13. See “Additional Configuration Topics” (page 28) for links to Microsoft documentation
regarding cluster configuration.



14. See “Troubleshooting the Cluster” (page 29) for information about troubleshooting problems
with your failover cluster.

Gathering Required Installation Information


Use Table 2-1 to record the input parameters you will need to install and configure the cluster.
Record a value next to each description.
Table 2-1 Installation and Configuration Input

• Node names (Microsoft Windows Server 2008 supports up to eight nodes per cluster):
  Node 1:        Node 2:
  Node 3:        Node 4:
  Node 5:        Node 6:
  Node 7:        Node 8:

• Domain name, DNS IP address, and subnet mask for the domain controller:
  Domain name:
  DNS IP:
  Domain controller subnet mask:

• Cluster management network name and IP address:
  Cluster name:
  Cluster IP:

• Public network connection IP addresses and team IP address for each node:
  Node 1 Public-1:    Node 1 Public-2:    Node 1 team:
  Node 2 Public-1:    Node 2 Public-2:    Node 2 team:
  Node 3 Public-1:    Node 3 Public-2:    Node 3 team:
  Node 4 Public-1:    Node 4 Public-2:    Node 4 team:
  Node 5 Public-1:    Node 5 Public-2:    Node 5 team:
  Node 6 Public-1:    Node 6 Public-2:    Node 6 team:
  Node 7 Public-1:    Node 7 Public-2:    Node 7 team:
  Node 8 Public-1:    Node 8 Public-2:    Node 8 team:

• Private network connection (cluster heartbeat) subnet mask and IP address for each node:
  Subnet mask:
  Node 1:        Node 2:
  Node 3:        Node 4:
  Node 5:        Node 6:
  Node 7:        Node 8:



Installing the Cluster
To install and configure failover clustering, complete the following steps:
1. Right-click on My Computer and select Manage.
2. In the Server Manager window, select Features from the list and click on Add Features.

Figure 2-2 Server Manager window

3. In the Add Features Wizard window, select the following features:


• Failover Clustering
• Multipath IO (if you are planning to use MPIO, which is recommended)



Figure 2-3 Add Features Wizard window

4. Click Next. Confirm your selected features and click Install to continue. Confirm that the
installation succeeded and click Close.
5. Validate the cluster configuration using the Failover Cluster Management tool.
a. Ensure that all servers in your cluster are powered on and connected to the shared
storage.
b. Click Start→Programs→Administrative Tools→Failover Cluster Management to
run the Failover Cluster Management tool.
c. Select Validate a Configuration… to run the validation wizard.



Figure 2-4 Failover Cluster Management window

d. When prompted to select the servers you want to add, type the system host name
of each cluster node and then click the Add button (or click Browse to search the
network for it). When you have finished adding all nodes, click Next to continue.



Figure 2-5 Validate Configuration Wizard window

e. In the next screen, select which tests to run for validation (selecting “Run all tests” is
recommended, especially for the first validation attempt). Then click Next.

Figure 2-6 Validate Configuration Wizard window



f. After prompting you to confirm the tests you selected, the wizard runs the tests. A
Summary Report screen then displays the results and indicates whether all tests
completed successfully. All tests must pass with either a green check mark or, in some
cases, a yellow triangle (warning). The following figure shows the summary symbols and
explains their meaning:

Figure 2-7 Validation Wizard result symbols

g. To find problem areas (red X or yellow exclamation marks), click an individual test in
the part of the report that summarizes the test results and review the details. Also review
the summary statement (by clicking View Report) for information about whether the
cluster is considered a supported configuration.
h. If you need to view Help topics that will help you understand the results, click More
about cluster validation tests.
To view the logged results of the tests after you close the wizard, see
SystemRoot\Cluster\Reports\Validation Report date and time.html,
where SystemRoot is the folder in which the operating system is installed (for example,
C:\Windows).
To view Help topics about cluster validation after you close the wizard, in Failover
Cluster Management, click Help→Help Topics. Then click the Contents tab, expand
the contents for the failover cluster Help, and click Validating a Failover Cluster
Configuration.
i. After taking action to correct any problems, rerun the wizard as needed to confirm that
your configuration passes the tests.
For Microsoft discussions of various cluster validation topics, see:
• http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx
• http://technet.microsoft.com/en-us/library/cc770723(WS.10).aspx



6. Create the cluster.
a. From the Failover Cluster Management tool, select Create a Cluster.
b. Enter a Cluster Name. Select only the public network (with a check mark), and then
assign a unique IP address for the cluster. Finally, click Next to create the cluster.

Figure 2-8 Create Cluster Wizard window

c. After the cluster is created, make sure that the Public and Private networks are available,
and that all shared storage disks are visible in the Failover Cluster Management tool.

Figure 2-9 Failover Cluster Management window



d. To add groups for shared storage, right-click on Services and Applications and select
Configure a Service or Application…

Figure 2-10 Configure Service menu



e. Select File Server as the service you want to configure for high availability. Then click
Next.

Figure 2-11 High Availability wizard

f. Enter a name for the file server, assign it an IP address, and click Next.

Figure 2-12 High Availability wizard



g. Select the disk drive you want to add to the file server and click Next.

Figure 2-13 High Availability wizard

This completes the cluster installation process.
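The procedure above can also be scripted. The following hedged sketch shows the equivalent steps in PowerShell on Windows Server 2008 R2 SP1 (using the ServerManager and FailoverClusters modules); the FailoverClusters module is not available on Windows Server 2008 SP2, where the graphical tools or the cluster.exe utility are used instead. All node names, addresses, and disk names below are examples only.

    # Run on each node: install the Failover Clustering and Multipath I/O features (steps 1-4).
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering, Multipath-IO

    # Run once, from any node: validate the configuration (step 5)...
    Import-Module FailoverClusters
    Test-Cluster -Node "Node1", "Node2"

    # ...then create the cluster with its management name and IP address (steps 6a-6c).
    New-Cluster -Name "Cluster1" -Node "Node1", "Node2" -StaticAddress "192.0.2.10"

    # Add a highly available file server (steps 6d-6g), using one of the shared disks.
    Add-ClusterFileServerRole -Name "FileServer1" -Storage "Cluster Disk 1" -StaticAddress "192.0.2.11"

Running Test-Cluster before New-Cluster mirrors the order required for a supported configuration.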

Additional Configuration Topics


See the following documents for more information on configuration issues, or how to implement
specific types of clustered solutions in Windows Server 2008:
• For a step-by-step guide to configuring a two-node file server failover cluster:
http://technet.microsoft.com/en-us/library/cc731844(WS.10).aspx
• For a step-by-step guide to configuring a two-node print server failover cluster:
http://technet.microsoft.com/en-us/library/cc771509(WS.10).aspx
• For information on how to configure the quorum in a failover cluster:
http://technet.microsoft.com/en-us/library/cc770620(WS.10).aspx
• For information on how to configure accounts in Active Directory:
http://technet.microsoft.com/en-us/library/cc731002(WS.10).aspx

NIC Teaming in Clustered Environments


One method of avoiding single points of failure in the cluster's network infrastructure is to
connect the cluster nodes' network adapters to multiple, distinct networks. Another method is
to deploy redundant switches and routers, and team those adapters, which provides the following
benefits:



• The NIC and switch redundancy layer is transparent to the IP layer.
• It can use standby, redundant team members to load balance your network traffic and
improve performance for transmitted and received packets on the individual cluster node.
• It can use advanced redundancy mechanisms to improve the detection of failures in your
network infrastructure and to respond to them proactively. For example, cluster nodes
continuously test their connectivity with each other, but they cannot detect path failures
when there is an external switch upstream. Active Path Failover is an advanced teaming
feature that detects such failures and fails over to a NIC that has a path to an Echo Node
device (an external switch upstream).
If you are going to implement NIC teaming in your cluster networks, you should complete the
following steps:
1. Plan your network infrastructure according to the cluster demands, taking into account NIC
teaming configuration, redundant switches, routers, and so on.
2. Create the teams planned in the previous step for every cluster node.
3. Validate your cluster configuration.
4. Create your cluster.
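After the cluster is created, one way to confirm that the teamed adapters and the distinct public and private networks were picked up as intended is to list the cluster networks. A minimal sketch, assuming the FailoverClusters PowerShell module on Windows Server 2008 R2 SP1:

    Import-Module FailoverClusters

    # List the networks the cluster discovered and whether they carry cluster (heartbeat)
    # traffic, client traffic, or both.
    Get-ClusterNetwork | Format-Table Name, State, Role

    # Show which adapter (for example, a team) backs each network on each node.
    Get-ClusterNetworkInterface | Format-Table Node, Network, Name, State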
For more information about NIC teaming issues in clustered environments, see the following
document:
http://support.microsoft.com/kb/254101

Troubleshooting the Cluster


What to Do if Validation Tests Fail
In most cases, if any tests in the cluster validation wizard fail, then Microsoft does not consider
the solution to be supported. There are exceptions to this rule, such as the case with multi-site
(geographically dispersed) clusters where there is no shared storage. In this scenario the expected
result of the validation wizard is that the storage tests will fail. This is still a supported solution
if the remainder of the tests complete successfully.
The type of test that fails is a guideline to the corrective action to take. For example, if the storage
test "List all disks" fails, and subsequent storage tests do not run (because these would also fail),
contact the storage vendor to troubleshoot. Similarly, if a network test related to IP addresses
fails, consult with your network infrastructure team. Most of the warnings or errors should result
in working with internal teams or with a specific hardware vendor.
After the issues have been addressed and resolved, it is necessary to rerun the cluster validation
wizard. It is required (in order to be considered a supported configuration) that all tests are run
and completed successfully without failures.

Validation Issues for Multi-site or Geographically Dispersed Failover Clusters


Failover cluster solutions that do not have a common shared disk and instead leverage data
replication between nodes might not pass the cluster validation "storage" tests. This is a common
configuration in cluster solutions where nodes are stretched across geographic regions. If a cluster
solution does not require external storage to fail over from one node to another, it does not need
to pass the "storage" tests to be a fully supported solution.
For more information on multi-site or geographically dispersed clusters, see the following white
paper:
http://go.microsoft.com/fwlink/?LinkId=112125

Troubleshooting
See the following documents for more information about troubleshooting errors and interpreting
system event descriptions in clusters:



• For information about system events related to the cluster as a whole:
http://technet.microsoft.com/en-us/library/cc756214(WS.10).aspx
• For information about system events related to individual nodes in the cluster:
http://technet.microsoft.com/en-us/library/cc773566(WS.10).aspx
• For information about system events related to networking issues:
http://technet.microsoft.com/en-us/library/cc773427(WS.10).aspx
• For information about system events related to storage issues:
http://technet.microsoft.com/en-us/library/cc756210(WS.10).aspx
• For information about system events related to clustered services and applications:
http://technet.microsoft.com/en-us/library/cc773501(WS.10).aspx
• For information about system events related to a cluster witness disk or witness file share
(applies only to clusters with an even number of nodes):
http://technet.microsoft.com/en-us/library/cc773510(WS.10).aspx

Additional Clustering Tasks


Upgrading Individual Nodes in the Cluster
After your initial installation and configuration, you can upgrade the software and drivers
installed on each node in your cluster and add the latest system updates and security fixes. This
task must be done regularly to keep your Integrity servers up-to-date and secure.
With clustered systems, you can do maintenance even when users are online. Wait until a
convenient, off-peak time when one of the nodes in the cluster can be taken offline for maintenance
and its workload distributed among the remaining nodes. Before the upgrade, however, you
must evaluate the entire cluster to verify that the remaining nodes can handle the increased
workload.
Pick the node you want to upgrade, then use the Failover Cluster Management tool to move all
the Services and Applications onto one or more of the remaining nodes. You can also use scripts
to move resources. Once all the resources have been failed over to the other nodes, the selected
node is ready to upgrade. For more information about how to upgrade your Integrity servers
with the latest drivers and hot fixes, see the latest Smart Setup Guide for Integrity servers at:
http://docs.hp.com/en/hw.html#Windows%2064-bit%20on%20HP%20Integrity%20Servers
Once the upgrade to the first node is complete, reboot it if necessary and move the resources
back to it. As soon as possible, repeat this process to upgrade the other nodes in the cluster. This
minimizes the amount of time the nodes are operating with different versions of software or
drivers.
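As noted above, you can use scripts to move resources before taking a node offline. The following is a minimal sketch using the FailoverClusters PowerShell module (Windows Server 2008 R2 SP1); the node and group names are hypothetical.

    Import-Module FailoverClusters

    # Move every service and application currently owned by Node1 over to Node2.
    Get-ClusterGroup | Where-Object { $_.OwnerNode.Name -eq "Node1" } |
        Move-ClusterGroup -Node "Node2"

    # Pause Node1 so nothing fails back to it while firmware, drivers, and hot fixes are applied.
    Suspend-ClusterNode -Name "Node1"

    # ...perform the upgrade and reboot if necessary, then resume the node...
    Resume-ClusterNode -Name "Node1"

    # ...and move its workload back.
    Move-ClusterGroup -Name "FileServer1" -Node "Node1"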

Evicting a Node from the Cluster


To evict a node from the cluster, complete the following steps:
1. Click Start→Programs→Administrative Tools→Failover Cluster Management to run
the Failover Cluster Management tool.



2. Right-click on the node name and select More Actions→Evict.

Figure 2-14 Failover Cluster Management window
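On Windows Server 2008 R2 SP1, the same eviction can be performed from PowerShell with the FailoverClusters module (the node name below is hypothetical):

    Import-Module FailoverClusters

    # Evict the node: it is removed from the cluster membership, but its operating system keeps running.
    Remove-ClusterNode -Name "Node3"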

Destroying a Cluster
To destroy a cluster, complete the following steps:
1. Click Start→Programs→Administrative Tools→Failover Cluster Management to run
the Failover Cluster Management tool.
2. Right-click on the cluster name and select More Actions→Destroy Cluster....

Figure 2-15 Failover Cluster Management window
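On Windows Server 2008 R2 SP1, a cluster can also be destroyed from PowerShell with the FailoverClusters module; Remove-Cluster normally asks for confirmation because the operation cannot be undone (the cluster name below is hypothetical):

    Import-Module FailoverClusters

    # Destroy the cluster configuration on all of its nodes.
    Get-Cluster -Name "Cluster1" | Remove-Cluster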

