CONTENTS

INTRODUCTION
  Audience
  Scope
  Related documents
REQUIREMENTS
  Prerequisites
  Hardware requirements
  Software requirements
CREATING iSCSI LUNS
CONNECTING TO THE iSCSI LUNS
FORMATTING THE iSCSI LUNS
CREATING A FAILOVER CLUSTER
  Adding the failover cluster feature
  Validating the failover cluster configuration
  Creating a failover cluster
CREATING A HIGHLY AVAILABLE VIRTUAL MACHINE
  Virtual machine creation and operating system installation
  Making a virtual machine highly available
TESTING PLANNED FAILOVER
TESTING UNPLANNED FAILOVER
CONCLUSION
AUGUST 2011
INTRODUCTION
In 2008, Microsoft released Hyper-V, its first bare-metal hypervisor-based technology, built into Windows Server 2008. With the Hyper-V architecture, hardware-assisted, x64-based systems can run independent virtual environments with different operating systems and resource requirements within the same physical server.

In 2009, the release of Windows Server 2008 R2 introduced more advanced features for Hyper-V, including Live Migration and Cluster Shared Volumes (CSV). These features work in a Windows failover clustering environment and can additionally leverage iSCSI logical unit numbers (LUNs) as a storage option for creating virtual machines and virtual disks. The inclusion of the Microsoft iSCSI Software Initiator in Windows Server 2008 provides ubiquitous SAN connectivity for customers using existing Ethernet infrastructure.

This cost-effective, highly scalable virtualization platform offers advanced resource management capabilities. Hyper-V minimizes Total Cost of Ownership (TCO) for your environment by increasing resource utilization, decreasing the number of servers and all associated costs, and maximizing server manageability. Using shared storage with a Hyper-V infrastructure offers the additional benefits of higher availability, simple server migration, and improved recovery.

The Iomega StorCenter px series network storage array offers versatile storage provisioning, advanced protocol capabilities, expandability, and affordability in an easy-to-use product ideal for small businesses, workgroups, and departments. Based on enterprise-class EMC storage technology, the StorCenter px series provides multiple gigabit Ethernet connections, easy file sharing, iSCSI block access, flexible RAID configurations for optimized data protection, and storage pools for application flexibility and expandability to match your budget.
The Iomega StorCenter px series can present iSCSI LUNs to Microsoft Hyper-V Server as well as to a failover cluster for creating virtual machines and virtual disks, and the px series is Microsoft Windows Server 2008 and 2008 R2 (Hyper-V) certified for iSCSI. This white paper describes the requirements and procedures for installing, deploying, and configuring the failover clustering feature of Microsoft Hyper-V on the Iomega StorCenter px series using iSCSI storage. Detailed step-by-step instructions with screenshots are included for illustration.

AUDIENCE

Information contained in this white paper is intended for Iomega customers, partners, and service personnel involved in planning, architecting, or administering a Microsoft Hyper-V failover clustering environment with an Iomega StorCenter px series as the storage device. Readers are expected to have experience with Microsoft Hyper-V Server and an Iomega StorCenter network storage device running EMC Lifeline software.

SCOPE

This document summarizes the experiences and methods followed while installing, deploying, and configuring Microsoft Hyper-V failover clustering on the Iomega StorCenter px series. The objectives of this document are to:
- Provide reference material to be used while performing a similar installation, deployment, and configuration.
- Provide details on possible areas of risk and the best practices identified while installing, deploying, and configuring the described components.
White Paper
RELATED DOCUMENTS The documents Using Iomega StorCenter ix12-300r/px12-350r with Windows Server 2008 R2 Hyper-V over iSCSI and EMC Virtual Infrastructure for SMB Enabled by Iomega StorCenter ix12-300r Network Storage and Microsoft Hyper-V, available at http://www.iomega.com, provide additional relevant information about configuring and deploying Microsoft Hyper-V Server with an Iomega StorCenter network storage device. More technical information about Microsoft Hyper-V is available at http://technet.microsoft.com.
REQUIREMENTS
This section includes the prerequisites and requirements for a successful deployment.

PREREQUISITES

The following are the prerequisites for a successful deployment:
- The shared storage must be compatible with Microsoft Windows Server 2008 R2.
- The storage should contain at least two separate volumes (LUNs), configured at the hardware level, for a two-node failover cluster. The clustered volumes should not be exposed to servers that are not in the cluster. One volume will function as the witness disk (described later in this document). The other volume will contain the files shared between the cluster nodes; it serves as the shared storage on which you will create virtual machines and virtual hard disks.
- For iSCSI, each clustered server must have one or more network adapters or host bus adapters dedicated to the cluster storage. The network you use for iSCSI should not be used for other network communication.
- In all clustered servers, the network adapters you use to connect to the iSCSI storage should be identical, and we recommend Gigabit Ethernet or higher.

HARDWARE REQUIREMENTS

Table 1 lists the identified hardware requirements:
Table 1. Hardware requirements

Hardware                                      Quantity   Configuration
Iomega StorCenter px12-350r network storage   One        Intel Core 2 Duo E8400 3.0 GHz CPU, 4GB RAM,
                                                         12 x 7200 rpm SATA-II disks, Intel Pro/1000 quad-port NIC
Server (Hyper-V cluster node)                 Two
DNS server                                    One
Domain controller                             One
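As a quick illustration of the storage and network prerequisites above, the following sketch (my own helper, not an Iomega or Microsoft API) checks a planned two-node cluster layout: two LUNs with distinct roles, exactly two servers, dedicated and identical iSCSI adapters.

```python
# Illustrative sketch (hypothetical helper): checks a planned two-node
# cluster layout against the storage prerequisites described above.

def check_cluster_prereqs(luns, nodes):
    """Return a list of problems; an empty list means the layout looks OK.

    luns  -- list of dicts like {"name": ..., "role": "witness" | "shared"}
    nodes -- list of dicts like {"iscsi_nic": ..., "lan_nic": ...}
    """
    problems = []
    roles = {lun["role"] for lun in luns}
    if len(luns) < 2 or not {"witness", "shared"} <= roles:
        problems.append("need at least two LUNs: one witness, one shared")
    if len(nodes) != 2:
        problems.append("a two-node cluster needs exactly two servers")
    for node in nodes:
        # The iSCSI network must be dedicated, not shared with the LAN.
        if node["iscsi_nic"] == node["lan_nic"]:
            problems.append("iSCSI NIC must be dedicated, not shared with LAN")
    # The iSCSI adapters should be identical on all nodes.
    if len({node["iscsi_nic"] for node in nodes}) > 1:
        problems.append("iSCSI adapters should be identical on all nodes")
    return problems
```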
CREATING iSCSI LUNS

This section discusses the steps for creating iSCSI LUNs on an Iomega StorCenter px series storage device.
1. Go to Storage > iSCSI to create iSCSI LUNs.
2. Click On to turn on iSCSI if it is not already enabled.
3. Click Settings to configure iSNS and/or mutual CHAP if needed.
If multiple Storage Pools exist on the Iomega storage device, you need to select the Storage Pool in which the LUN will be created; otherwise, no selection is available. Enter the size of the LUN in GB. The size cannot exceed the free space of the pool.
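The size rule above can be sketched as a small validation step. This is a hypothetical helper of my own, not part of the Lifeline software:

```python
# Hypothetical helper: validates a requested LUN size against the
# selected storage pool's free space, as described above.

def validate_lun_size(requested_gb, pool_free_gb):
    """Return the size to use, or raise ValueError if the pool is too small."""
    if requested_gb <= 0:
        raise ValueError("LUN size must be a positive number of GB")
    if requested_gb > pool_free_gb:
        raise ValueError(
            f"requested {requested_gb} GB exceeds pool free space "
            f"({pool_free_gb} GB)")
    return requested_gb
```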
5. Click Create to create the LUN. When a LUN is first created, everyone has read and write access to it by default, which means that anyone on your network can connect to the LUN and read and write its contents; the LUN is open to all users and not secure.
6. To secure the LUN, click Access Permissions.
8. After clicking Apply, the read/write permission for everyone is automatically removed, and the LUN is restricted to the specified users only.
9. Repeat the above procedure to create another LUN. In this white paper, LUN WITNESS is used as the cluster witness (also known as quorum) disk, and LUN CLUSTER is used as the cluster shared storage.
CONNECTING TO THE iSCSI LUNS

4. Click the Targets tab. This lists the iSCSI qualified target names discovered through the target portal.
5. Select the appropriate target, and click Connect to set up the iSCSI connection. Check the Add this connection to the list of Favorite Targets option so that the server automatically attempts to restore the connection every time the server restarts. Also, check the Enable multi-path option to allow multipathing to the target.
6. Click the Advanced button to configure more connection settings. a. Select Microsoft iSCSI Initiator from the Local Adapter drop-down list. b. Choose IP addresses from the Initiator IP and Target portal IP drop-down lists. c. Check Enable CHAP log on, and enter CHAP information for authentication to the secured LUN. The CHAP name is a user name that has been granted read/write access to the LUN on the Iomega device, and the target secret is the user password. If the user password is less than 12 characters long, you need to pad it with * to make it 12 characters long.
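The padding rule in step 6c can be sketched as a small helper. This is my own illustrative function (the 12-character minimum follows the Microsoft iSCSI initiator's requirement for CHAP secrets, as described above):

```python
# Illustrative sketch of the CHAP secret padding rule described above:
# a Lifeline user password shorter than 12 characters is padded with '*'
# so it meets the initiator's minimum secret length.

def chap_secret(password, min_len=12, pad_char="*"):
    """Pad a short password so it is usable as a CHAP target secret."""
    if len(password) >= min_len:
        return password
    return password + pad_char * (min_len - len(password))
```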
7. After the connection is established, the status of the target changes from Inactive to Connected.
8. Repeat the above process to connect to the other iSCSI LUN, which will be used to create virtual machines and virtual hard disks. Also repeat the process on the other cluster node so that both nodes are connected to both LUNs.
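The "repeat on every node" requirement amounts to a full connection matrix: each node must hold a session to each LUN. A toy check (not a real iSCSI API) makes this explicit:

```python
# Illustrative sketch: every cluster node must be connected to every
# clustered LUN before the disks can be used by the failover cluster.

def missing_connections(nodes, luns, connected):
    """connected is a set of (node, lun) pairs; return the pairs still missing."""
    return [(n, l) for n in nodes for l in luns if (n, l) not in connected]
```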
FORMATTING THE iSCSI LUNS

2. Right-click the new disk to create a simple volume using the wizard. Assign a drive letter, provide a volume label, and perform a quick NTFS format.
3. Repeat the above process to format the other iSCSI drive. Both drives appear in Disk Management and are available for use.
4. Repeat the above process to format the iSCSI drives on the other cluster node, and make sure the drive letters match on both nodes.
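The drive-letter requirement in step 4 can be expressed as a simple consistency check. This is an illustrative sketch with made-up mappings, not a Windows API:

```python
# Illustrative sketch: verify that each iSCSI volume received the same
# drive letter on both cluster nodes, as step 4 above requires.

def mismatched_letters(node_a, node_b):
    """Each argument maps volume label -> drive letter; return mismatched labels."""
    return sorted(
        label
        for label in set(node_a) | set(node_b)
        if node_a.get(label) != node_b.get(label)
    )
```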
CREATING A FAILOVER CLUSTER

ADDING THE FAILOVER CLUSTER FEATURE

1. Open Server Manager, click Features in the left pane, and then click Add Features on the right.
2. In the Add Features wizard, select the checkbox for Failover Clustering.
3. Click Next and then Install. Follow the instructions in the wizard to complete the installation. The wizard reports an installation status.
4. Repeat the process for the second cluster node. VALIDATING THE FAILOVER CLUSTER CONFIGURATION 1. On either cluster node, go to Start > Administrative Tools > Failover Cluster Manager to open the failover cluster snap-in.
2. Click Validate a Configuration in the center pane under Management. Follow the instructions in the wizard to specify the two servers.
3. Run all tests to fully validate the cluster before creating a cluster.
6. The Summary page appears after the tests are run. While still on the Summary page, click View Report and read the test results. Correct any errors before proceeding to create the cluster. To view the results of the tests after you close the wizard, see SystemRoot\Cluster\Reports\Validation Report date and time.htm, where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).
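Since the wizard writes a new timestamped report on each run, a small sketch like the following (my own helper, built only on the folder convention above) can locate the most recent one:

```python
# Illustrative sketch: find the newest "Validation Report*.htm" under the
# cluster Reports folder described above.

import glob
import os


def latest_validation_report(system_root=r"C:\Windows"):
    """Return the path of the newest validation report, or None if there is none."""
    pattern = os.path.join(system_root, "Cluster", "Reports",
                           "Validation Report*.htm")
    reports = glob.glob(pattern)
    if not reports:
        return None
    # The report with the latest modification time is the most recent run.
    return max(reports, key=os.path.getmtime)
```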
CREATING A FAILOVER CLUSTER 1. Right-click Failover Cluster Manager, and select Create a Cluster. This will start the cluster creation wizard. 2. Type the name of each cluster node in the Enter server name textbox, and click Add to add them one at a time to the list of Selected servers.
3. Enter the Cluster Name. If DHCP is not utilized in the environment to automatically configure the administrative access point for the cluster, you need to specify an IP address to be used as the cluster access point.
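When choosing a static address for the cluster access point, it must be a usable host address on the nodes' subnet and not already taken. A minimal sketch of that sanity check, using Python's standard ipaddress module (all addresses below are made-up examples):

```python
# Illustrative sketch: sanity-check a static cluster access point address
# when DHCP is not available, as described in step 3 above.

import ipaddress


def valid_cluster_ip(cluster_ip, node_network, in_use):
    """True if cluster_ip is a usable host on node_network and not already taken."""
    net = ipaddress.ip_network(node_network)
    ip = ipaddress.ip_address(cluster_ip)
    # hosts() excludes the network and broadcast addresses.
    return ip in net.hosts() and cluster_ip not in in_use
```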
5. Proceed to cluster creation. A status bar is displayed during creation. Upon completion, a summary page displays all the cluster information.
7. Click Enable Cluster Shared Volumes in the center pane under Configure. On a failover cluster that uses Cluster Shared Volumes, multiple clustered virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage.
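The idea behind CSV — VHDs for many clustered VMs living under one common path backed by a single LUN — can be illustrated with a small path helper (my own sketch; the paths are examples of the convention, not output of a cluster API):

```python
# Illustrative sketch: map a clustered VHD path to the Cluster Shared
# Volume that backs it. VHDs for VMs on different nodes can share the
# same Volume# (and therefore the same LUN).

from pathlib import PureWindowsPath


def csv_volume(vhd_path):
    """Return the ClusterStorage volume a VHD file lives on, or None."""
    parts = PureWindowsPath(vhd_path).parts
    # parts look like ('C:\\', 'ClusterStorage', 'Volume1', 'vm1.vhd')
    if len(parts) >= 3 and parts[1].lower() == "clusterstorage":
        return parts[2]
    return None
```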
9. Click Add storage in the right pane to add a cluster shared volume. The cluster manager updates the volume information accordingly upon completion.
CREATING A HIGHLY AVAILABLE VIRTUAL MACHINE

VIRTUAL MACHINE CREATION AND OPERATING SYSTEM INSTALLATION

3. Check Store the virtual machine in a different location, and click Browse to select a cluster shared volume as the storage location. A cluster shared volume is typically named C:\ClusterStorage\Volume# on the cluster nodes.
5. Configure a virtual network for the virtual machine using the network adapter created for the virtual machine.
6. Specify storage for the virtual machine so that an operating system can be installed.
7. Decide when and how to install an operating system on the virtual machine.
8. Review the summary of the virtual machine and proceed to creation.
9. Connect the operating system installation media appropriately and start the virtual machine to install the operating system.
10. Reconfigure the automatic start action for the virtual machine. In general, automatic actions allow users to automatically manage the state of the virtual machine when the Hyper-V Virtual Machine Management service starts or stops. However, when a virtual machine is made highly available, its state should be controlled through the cluster service.
a. Right-click the virtual machine and select Manage virtual machine to open the Hyper-V Manager. b. In Hyper-V Manager, right-click the virtual machine and select Settings. c. Click Automatic Start Action in the left pane, and choose Nothing in the right pane.
Figure 37. Modify Automatic Start Action settings for a virtual machine
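The decision in step 10 can be stated as a tiny rule: a clustered VM gets Nothing, so that the cluster service controls its state. The following is my own illustrative sketch (the string values mirror the Hyper-V settings dialog; the standalone-VM default is an assumption for contrast):

```python
# Illustrative sketch of the guidance above: which Automatic Start Action
# to configure for a virtual machine.

def recommended_start_action(is_clustered, was_running=False):
    """Return the Automatic Start Action setting to configure for a VM."""
    if is_clustered:
        # The cluster service, not Hyper-V, should manage VM state.
        return "Nothing"
    # For a standalone VM, restarting what was running is a common choice
    # (assumption for illustration, not a requirement from this paper).
    return "Restart if running" if was_running else "Nothing"
```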
MAKING A VIRTUAL MACHINE HIGHLY AVAILABLE

Virtual machines that are created using the Failover Cluster Manager are automatically made highly available in the cluster. If a clustered server fails while running a virtual machine, another node in the cluster automatically resumes it (a process known as failover). However, if a virtual machine is created using another Microsoft management console, such as Hyper-V Manager or System Center Virtual Machine Manager, it must be made highly available manually.

1. Right-click Services and applications in the left pane of the Failover Cluster Manager, and select Configure a Service or Application.
2. The High Availability Wizard will open. Click Next. 3. On the Select Service or Application page, select Virtual Machine from the list.
4. On the Select Virtual Machine page, check the name of the virtual machine to make highly available. The virtual machine must be offline before it can be made highly available.
5. Confirm the selection and proceed to making the virtual machine highly available. 6. To verify that the virtual machine is now highly available, you can check in either one of two places in the console tree: a. Expand Services and Applications. The virtual machine should be listed there.
b. Expand Nodes. Select the node on which you created the virtual machine.
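The failover behavior described in this section — when the owning node fails, a surviving node resumes the highly available virtual machine — can be modeled with a toy sketch (my own, not a real cluster API):

```python
# Toy model of failover for a highly available VM: if the node that owns
# the VM fails, ownership moves to a surviving node.

class ToyCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)    # healthy nodes
        self.owner = self.nodes[0]  # node currently running the VM

    def fail_node(self, node):
        """Simulate a node failure; return the VM's current owner afterwards."""
        self.nodes.remove(node)
        if not self.nodes:
            raise RuntimeError("no surviving node to fail over to")
        if self.owner == node:
            # Failover: a surviving node becomes the new current owner.
            self.owner = self.nodes[0]
        return self.owner
```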
TESTING PLANNED FAILOVER

4. The failover process starts by saving the virtual machine state. The Current Owner remains the same during this process.
5. The failover process then restores the virtual machine on the other cluster node, and the Current Owner changes to the other node.
TESTING UNPLANNED FAILOVER

3. Confirm stopping the cluster service on the node.
4. The virtual machine is moved to the other node automatically. The virtual machine is now in a failed state but can be brought online on the other node.
CONCLUSION
Microsoft Hyper-V dramatically improves the efficiency and availability of resources and applications in organizations of any size. Hyper-V customers enjoy a reduced TCO by lowering both capital and operational costs and improving operational efficiency and flexibility. This is especially important to small businesses, which normally have a very limited IT budget.

The Iomega StorCenter px series network storage array is a high-performance, easy-to-use, and highly reliable storage device specifically designed to meet the storage challenges that small businesses face daily. The device supports the iSCSI protocol, the predominant way for Microsoft Hyper-V Server to use IP storage. Customers' total infrastructure costs are further reduced by using an existing Ethernet infrastructure.

The Iomega StorCenter px series network storage array is a certified iSCSI storage array for Windows Server 2008 R2. It provides a reliable and proven storage solution for small businesses that plan to deploy Microsoft Hyper-V in a failover cluster.
2011 Iomega Corporation. All rights reserved. Iomega, StorCenter, and the stylized i logo are either registered trademarks or trademarks of Iomega Corporation in the United States and/or other countries. EMC and Lifeline are registered trademarks of EMC Corporation in the U.S. and/or other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Certain other product names, brand names, and company names may be trademarks or designations of their respective owners. Iomega's specific customer support policies (including fees for services) and procedures change as technology and market conditions dictate. Product in photos may vary slightly from product in package. Product capacities are specified in gigabytes (GB), where 1GB = 1,000,000,000 bytes. To obtain information about Iomega's current policies please visit Iomega at www.iomega.com or call 1-888-4iomega (1-888-446-6342). WINWP-0811-01