
VMware vSphere best practices for IBM SAN Volume Controller and IBM Storwize V7000 disk family

Rawley Burbridge
IBM Systems and Technology Group ISV Enablement
September 2012

© Copyright IBM Corporation, 2012

Table of Contents
Abstract
Introduction
    Guidance and assumptions
Introduction to VMware vSphere
    Infrastructure services
    Application services
    VMware vCenter Server
VMware storage-centric features
    VMFS version 5.0
    Storage VMotion and Storage Dynamic Resource Scheduler (DRS)
    Storage I/O Control
Storage and connectivity best practices
    Overview of VMware Pluggable Storage Architecture
        Storage Array Type Plug-in
        Path Selection Plug-in
    VMware ESXi host PSA best practices
        Default behavior: Fixed PSP
        Recommendation: Round Robin PSP
    VMware ESXi host Fibre Channel and iSCSI connectivity best practices
        Fibre Channel connectivity
        iSCSI connectivity
    General storage best practices for VMware
        Physical storage sizing best practices
        Volume and datastore sizing
        Thin provisioning with VMware
        Using Easy Tier with VMware
        Using IBM Real-Time Compression with VMware
    VMware storage integrations
        vStorage APIs for Array Integration
        IBM Storage Management Console for VMware vCenter
        VMware vStorage APIs for Data Protection
Summary
Resources
Trademarks and special notices


Abstract
The purpose of this paper is to provide insight into the value proposition of the IBM System Storage SAN Volume Controller and the IBM Storwize V7000 disk family for VMware environments, and to provide best practice configurations.

Introduction
The many benefits that server virtualization provides have led to its explosive adoption in today's data center. Server virtualization with VMware vSphere has been successful in helping customers use hardware more efficiently, increase application agility and availability, and decrease management and other costs.

The IBM System Storage SAN Volume Controller is an enterprise-class storage virtualization system that enables a single point of control for aggregated storage resources. SAN Volume Controller consolidates the capacity from different storage systems, both IBM and non-IBM branded, while enabling common copy functions and nondisruptive data movement, and improving performance and availability. The IBM Storwize V7000 disk family has inherited the SAN Volume Controller software base, and as such offers many of the same features and functions. The common software base of the SAN Volume Controller and the IBM Storwize V7000 disk family allows IBM to conveniently offer the same VMware support, integrations, and plug-ins.

This white paper focuses on, and provides best practices for, the following key components:

IBM System Storage SAN Volume Controller
IBM System Storage SAN Volume Controller combines hardware and software into an integrated, modular solution that forms a highly scalable cluster. SAN Volume Controller allows customers to manage all of the storage in their IT infrastructure from a single point of control and also to increase the utilization, flexibility, and availability of storage resources. For additional information about IBM SAN Volume Controller, refer to the following URL:
ibm.com/systems/storage/software/virtualization/svc/index.html

IBM Storwize V7000 and Storwize V7000 Unified systems
The IBM Storwize V7000 system provides block storage enhanced with enterprise-class features to midrange customer environments. With built-in storage virtualization, replication capabilities, and key VMware storage integrations, the Storwize V7000 system is a great fit for VMware deployments. The inclusion of IBM Real-time Compression further enhances the strong feature set of this product. The IBM Storwize V7000 Unified system builds upon the Storwize V7000 block storage capabilities by also providing support for file workloads and file-specific features such as IBM Active Cloud Engine. For additional information about the IBM Storwize V7000 system, refer to the following URL:
ibm.com/systems/storage/disk/storwize_v7000/index.html


VMware vSphere 5.0
VMware vSphere 5.0 (at the time of this publication) is the latest version of a market-leading virtualization platform. vSphere 5.0 provides server virtualization capabilities and rich resource management. For additional information about VMware vSphere 5.0, refer to the following URL:
www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html

Guidance and assumptions


The intent of this paper is to provide architectural, deployment, and management guidelines for customers who are planning or have already decided to implement VMware on the IBM SAN Volume Controller or the Storwize V7000 disk family. It provides a brief overview of VMware technology concepts, key architecture considerations, and deployment guidelines for implementing VMware. This paper does not provide detailed performance numbers or advanced high availability and disaster recovery techniques, and it is not intended to be any type of formal certification.

For detailed information regarding hardware capability and supported configurations, refer to the VMware Hardware Compatibility List (HCL) and IBM System Storage Interoperation Center (SSIC) websites.

VMware HCL URL: http://www.vmware.com/resources/compatibility/search.php
IBM SSIC URL: ibm.com/systems/support/storage/ssic/interoperability.wss

This paper assumes that readers have essential knowledge in the following areas:
- VMware vCenter Server
- ESXi installation
- Virtual Machine File System (VMFS) and raw device mapping (RDM)
- VMware Storage VMotion, High Availability (HA), and Distributed Resource Scheduler (DRS)

Introduction to VMware vSphere


VMware vSphere is a virtualization platform capable of transforming a traditional data center and industry-standard hardware into a shared mainframe-like environment. Hardware resources can be pooled together to run varying workloads and applications with different service-level needs and performance requirements. VMware vSphere is the enabling technology to build a private or public cloud infrastructure. The components of VMware vSphere fall into three categories: Infrastructure services, application services, and the VMware vCenter Server. Figure 1 shows a representation of the VMware vSphere platform.


Figure 1: VMware vSphere platform

Infrastructure services
Infrastructure services perform the virtualization of server hardware, storage, and network resources. The services within the infrastructure services category are the foundation of the VMware vSphere platform.

Application services
The components categorized as application services address availability, security, and scalability concerns for all applications running on the vSphere platform, regardless of the complexity of the application.

VMware vCenter Server


VMware vCenter Server, formerly known as VMware VirtualCenter, provides the foundation for the management of the vSphere platform. VMware vCenter Server provides centralized management of configurations and aggregated performance statistics for clusters, hosts, virtual machines, storage, and guest operating systems. VMware vCenter Server scales to provide management of large enterprises, granting administrators the ability to manage more than 1,000 hosts and up to 10,000 virtual machines from a single console. VMware vCenter Server is also an extensible management platform. The open plug-in architecture allows VMware and its partners to directly integrate with vCenter Server, extending the capabilities of the vCenter platform and adding functionality.


Figure 2 shows the main pillars of functionality provided by VMware vCenter Server.

Figure 2: Pillars of VMware vCenter Server

VMware storage-centric features


Since its inception, VMware has pushed for advancements in storage usage for virtualized environments. VMware uses a purpose-built, virtualization-friendly, clustered file system that is enhanced with storage-centric features and functionality aimed at easing the management and maximizing the performance of the storage infrastructure used by virtualized environments. VMware has also led the industry in working with partners to create integrations between storage systems and VMware. The following sections outline some of the storage features provided by VMware.

VMFS version 5.0


VMware VMFS is a purpose-built file system for storing virtual machine files on Fibre Channel (FC) and iSCSI-attached storage. It is a clustered file system, meaning that multiple vSphere hosts can read and write to the same storage location concurrently. vSphere hosts can be added to or removed from a VMFS volume without any impact. The file system uses on-disk file locking to ensure that multiple vSphere hosts do not access the same file at the same time; for example, this ensures that a virtual machine is powered on by only one vSphere host. VMware VMFS has undergone many changes and enhancements since the inception of VMFS version 1 with ESX Server version 1. The latest version of VMFS (at the time of this publication), version 5.0, includes many enhancements to increase the scalability and performance of VMFS. Table 1 compares VMFS-5 with the previous version, VMFS-3.


Feature | VMFS-3 | VMFS-5
64 terabyte (TB) VMFS volumes | Yes (requires 32 extents) | Yes (single extent)
Support for more files | 30,720 | 130,689
Support for 64 TB physical raw device mappings | No | Yes
Unified block size (1 MB) | No | Yes
Atomic Test and Set (ATS) usage, the vStorage APIs for Array Integration (VAAI) locking mechanism | Limited | Unlimited
Sub-blocks for space efficiency | 64 KB (maximum approximately 3 k) | 8 KB (maximum approximately 30 k)
Small file support | No | 1 KB

Table 1: Comparing VMware VMFS-3 with VMFS-5

VMware provides a nondisruptive upgrade path between the various versions of VMFS.
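For example, in vSphere 5.0 a VMFS-3 datastore can be upgraded to VMFS-5 while the virtual machines on it keep running. A minimal command-line sketch, assuming a hypothetical datastore label (the upgrade can also be started from the vSphere client):

# Upgrade the mounted VMFS-3 datastore "Datastore01" to VMFS-5 in place;
# the operation is online and nondisruptive to running virtual machines
esxcli storage vmfs upgrade --volume-label=Datastore01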

Storage VMotion and Storage Dynamic Resource Scheduler (DRS)


VMware Storage VMotion was added with experimental support in VMware ESX 3.5 and made its official debut in vSphere 4.0. Storage VMotion provides the capability to migrate a running virtual machine between two VMFS volumes without any service interruption. VMware administrators have long had the VMotion capability, which migrates a running virtual machine between two vSphere hosts. Storage VMotion introduced the same functionality and use cases for migrating virtual machines between storage systems.

VMware has built upon the Storage VMotion functionality with a new feature in vSphere 5.0, Storage DRS. Storage DRS creates the following use cases for using Storage VMotion:

Initial virtual machine placement
When creating a virtual machine, users can now select a VMFS datastore cluster object rather than an individual VMFS datastore for placement. Storage DRS chooses the appropriate VMFS datastore on which to place the virtual machine based on space utilization and I/O load. Figure 3 provides an example of initial placement.


Figure 3: VMware Storage DRS initial placement

Load balancing
Storage DRS continuously monitors VMFS datastore space usage and latency. Configurable presets can trigger Storage DRS to issue migration recommendations when response time or space utilization thresholds have been exceeded. Storage VMotion is used to migrate virtual machines to bring the VMFS datastores back into balance. Figure 4 provides an example of Storage DRS load balancing.

Figure 4: VMware Storage DRS load balancing

VMFS datastore maintenance mode
The last use case of Storage DRS automates the evacuation of virtual machines from a VMFS datastore that needs to undergo maintenance. Previously, each virtual machine had to be migrated manually from the datastore. Datastore maintenance mode allows the administrator to place the datastore in maintenance mode, and Storage DRS migrates all of the virtual machines off the datastore.

Storage I/O Control


The Storage I/O Control feature eliminates the "noisy neighbor" problem that can exist when many workloads (virtual machines) access the same resource (VMFS datastore). Storage I/O Control allows administrators to set share ratings on virtual machines to ensure that virtual machines get the required amount of I/O performance from the storage. The share rating works across all vSphere hosts that are accessing a VMFS datastore. Virtual machine disk access is manipulated by controlling the host queue slots. Figure 5 shows two examples of virtual machines accessing a VMFS datastore. Without Storage I/O Control enforcing share priority, a non-production data mining virtual machine can monopolize disk resources, impacting the production virtual machines.

Figure 5: VMware Storage I/O Control

Storage and connectivity best practices


IBM System Storage SAN Volume Controller and the Storwize V7000 disk family use the same software base and host connectivity options. This common code base allows IBM to provide consistent functionality and management across multiple products. It also means that, from a VMware ESXi host perspective, the storage products that share this code base appear as the same storage type and have consistent best practices. The following sections outline the best practices for VMware with SAN Volume Controller and the Storwize V7000 disk family.

Overview of VMware Pluggable Storage Architecture


VMware vSphere 4.0 introduced a new storage architecture called Pluggable Storage Architecture (PSA). The purpose was to leverage third-party storage vendor multipath software capabilities through a modular architecture that allows partners to write a plug-in for their specific array capabilities. These modules can communicate with the intelligence running in the storage system to determine the best path selection or to coordinate proper failover behavior. Figure 6 provides a diagram of the VMware PSA; the modules and their purpose are highlighted in the following sections.


Figure 6: VMware Pluggable Storage Architecture diagram

Storage Array Type Plug-in


The Storage Array Type Plug-in (SATP) is a module that can be written by storage partners for their specific storage systems. A SATP provides the VMware ESXi hypervisor (vmkernel) with intelligence about the storage system, including characteristics of the storage system and any specific operations required to detect path state and initiate failovers. The IBM SAN Volume Controller and Storwize V7000 disk family products use the SATP called VMW_SATP_SVC. When a volume is provisioned from SAN Volume Controller or a Storwize V7000 disk family product and mapped to a vSphere 4.0 or newer ESXi host, the volume is automatically assigned this SATP. The SATP configured on a volume can be viewed from the Manage Paths window, as displayed in Figure 7.

Figure 7: Properties of Storwize V7000 assigned volume SATP example

In addition to providing intelligence about the storage system, the SATP also contains a default setting for which Path Selection Plug-in (PSP) is used by the ESXi host for each storage volume.
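As a quick sketch (using the ESXi 5.0 esxcli syntax covered later in this paper), the loaded SATPs and their default PSPs can be listed from the command line:

# List all loaded SATPs and their default PSPs; VMW_SATP_SVC is the entry used
# for SAN Volume Controller and Storwize V7000 disk family volumes
esxcli storage nmp satp list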


Path Selection Plug-in


The PSP is the multipath policy used by the VMware ESXi host to access the storage volume. VMware Native Multipathing (NMP) offers three different PSPs, which storage partners can choose to use based on the characteristics of the storage system. The three PSPs are:

Most Recently Used
When the Most Recently Used (MRU) policy is used, the ESXi host selects and begins using the first working path discovered at boot, or when a new volume is mapped to the host. If the active path fails, the ESXi host switches to an alternative path and continues to use it regardless of whether the original path is restored. VMware uses the name VMW_PSP_MRU for the MRU policy.

Fixed
When the Fixed policy is used, the ESXi host selects and begins using the first working path discovered at boot or when a new volume is mapped to the host, and also marks that path as preferred. If the active preferred path fails, the ESXi host switches to an alternative path. The ESXi host automatically reverts to the preferred path when it is restored. VMware uses the name VMW_PSP_FIXED for the Fixed policy.

Round Robin
When the Round Robin policy is used, the ESXi host selects and begins using the first working path discovered at boot or when a new volume is mapped to the host. By default, 1,000 I/O requests are sent down the path before the next working path is selected. The ESXi host continues this cycle through all available paths. Failed paths are excluded from the selection until they are restored. VMware uses the name VMW_PSP_RR for the Round Robin policy.

The PSP configured on a volume can be viewed from the Manage Paths window as displayed in Figure 8.

Figure 8: Properties of Storwize V7000 assigned volume PSP example
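The assigned SATP and PSP can also be inspected from the ESXi 5.0 command line, for example:

# List every NMP-claimed device with its SATP and PSP; add --device to
# inspect a single volume (the naa identifier here is hypothetical)
esxcli storage nmp device list
esxcli storage nmp device list --device naa.6005076802808102b000000000000011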


VMware ESXi host PSA best practices


The IBM SAN Volume Controller and the Storwize V7000 disk family products are all supported with any of the three offered PSPs (VMW_PSP_MRU, VMW_PSP_FIXED, and VMW_PSP_RR). However, IBM has recommended best practices regarding which PSP customers should use in their environments.

Default behavior: Fixed PSP


Volumes provisioned from the IBM SAN Volume Controller and the Storwize V7000 disk family products are assigned the SATP VMW_SATP_SVC, which uses a default PSP of VMW_PSP_FIXED. As previously mentioned, the Fixed PSP selects the first discovered working path as the preferred path. Customers need to be aware that if the Fixed PSP is used, the preferred paths used by the volumes must be evenly distributed so that the active paths remain balanced. The preferred path can be modified from the Manage Paths window. Right-click the path that you want to set as the new preferred path, and then select the Preferred option, as shown in Figure 9. This is a nondisruptive change on active VMFS datastores.

Figure 9: Modifying the preferred path
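The same change can be scripted from the ESXi 5.0 command line; the device and path identifiers below are hypothetical examples:

# Mark one path as preferred for a volume using the Fixed PSP; distribute the
# preferred paths of different volumes across different paths and nodes
esxcli storage nmp psp fixed deviceconfig set \
    --device naa.6005076802808102b000000000000011 \
    --path vmhba2:C0:T0:L1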

Managing the preferred paths across hosts and VMFS datastores can become an unnecessary burden to administrators, which is why IBM recommends modifying the default PSP behavior on vSphere 4.0 through vSphere 5.0 hosts.

Recommendation: Round Robin PSP


Using the Round Robin PSP ensures that all paths are equally used by all of the volumes provisioned from IBM SAN Volume Controller or the Storwize V7000 disk family products. The default behavior can be changed in the following ways:

Modify volume by volume
This method can be performed on active VMFS datastores nondisruptively. However, it does not affect new volumes assigned to an ESXi host. The PSP can be modified from the Manage Paths window.


From the Path Selection list, select Round Robin (VMware) and click Change, as shown in Figure 10.

Figure 10: Modifying the PSP for an individual volume

IBM Storage Management Console for VMware vCenter
The IBM Storage Management Console for VMware vCenter version 2.6.0 includes the ability to set multipath policy enforcement. This setting can enforce the Round Robin policy on all new volumes provisioned through the management console. This selection option is displayed in Figure 11.

Figure 11: Multipath policy enforcement with management console

Modify all volumes
This method modifies the default PSP to Round Robin. All newly discovered volumes use the new default behavior; however, existing volumes are not modified until the ESXi host rediscovers all of the volumes, as is done during a reboot. The default behavior can be modified with the following vSphere CLI commands:

ESX/ESXi 4.x:
esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_SVC

ESXi 5.0:
esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_SVC

This method is recommended by IBM for simplicity and global enforcement.
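To align volumes that were already discovered without waiting for a reboot, the PSP can also be set on a per-device basis in ESXi 5.0; the device identifier below is hypothetical:

# Switch an existing volume to Round Robin without rebooting the host
esxcli storage nmp device set \
    --device naa.6005076802808102b000000000000011 \
    --psp VMW_PSP_RR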

VMware ESXi host Fibre Channel and iSCSI connectivity best practices
IBM SAN Volume Controller and the Storwize V7000 disk family support the FC and iSCSI protocols for block storage connectivity with VMware ESXi. Each protocol has its own best practices, which are covered in the following sections.

Fibre Channel connectivity


IBM SAN Volume Controller and the Storwize V7000 disk family can support up to 256 hosts and 512 distinct configured host worldwide port names (WWPNs) per I/O group. Access from a host to a SAN Volume Controller cluster or a Storwize V7000 disk family system is defined by means of switch zoning. A VMware ESXi host switch zone must contain only ESXi systems; host operating system types must not be mixed within switch zones. VMware ESXi hosts follow the same zoning best practices as other operating system types. You can find further details on zoning in the Implementing the IBM System Storage SAN Volume Controller V6.3 IBM Redbooks publication at the following URL: ibm.com/redbooks/redbooks/pdfs/sg247933.pdf

A maximum of eight paths from a host to SAN Volume Controller or the Storwize V7000 disk family is supported. However, IBM recommends that a maximum of four paths, two to each node, be used. The VMware storage maximums, as shown in Figure 12, allow a maximum of 256 logical unit numbers (LUNs) and 1,024 paths per ESXi server. Following the IBM recommendation of four paths per volume means that 256 LUNs consume exactly 4 x 256 = 1,024 paths, so the maximum number of LUNs and paths can be used for each ESXi host accessing SAN Volume Controller or the Storwize V7000 disk family.


Figure 12: VMware storage maximums

VMware ESXi hosts must use the generic host object type for SAN Volume Controller and the Storwize V7000 disk family. Because VMware is generally configured to allow multiple hosts to access the same clustered volumes, two approaches can be used to ensure that consistency is maintained when creating host objects and volume access.

Single ESXi host per storage host object


The first approach is to place the WWPNs of each VMware ESXi host in its own storage host object. Figure 13 provides an example of this type of setup. Two VMware ESXi hosts are configured, each with its own storage host object. These two VMware ESXi hosts share the same storage and are part of a VMware cluster.

Figure 13: Unique storage host object per ESXi host


The advantage of this approach is that the storage host definitions are very clear to create and maintain. The disadvantage, however, is that when volume mappings are created, they must be created for each storage host object. It is also important, but not required, to use the same SCSI LUN numbers across VMware ESXi hosts, and this is more difficult to maintain with the single-ESXi-host-per-storage-object method.

VMware ESXi cluster per storage host object


An alternative way to manage storage host objects, in instances where VMware ESXi hosts are in the same cluster, is to create a single storage host object for the cluster and place all of the VMware ESXi host WWPNs in that storage host object. Figure 14 provides an example of the previously used VMware ESXi hosts being placed in a single storage host object.

Figure 14: Single storage host object for multiple ESXi hosts

The advantage of this approach is that volume mapping is simplified because a single mapping is performed for the VMware ESXi cluster rather than on a per-host basis. The disadvantage of this approach is that the storage host definitions are not as clear; if a VMware ESXi host is being retired, the WWPNs for that host must be identified and removed from the storage host object. Both storage host object approaches are valid, and their advantages and disadvantages should be weighed by each customer. IBM recommends choosing a single approach and implementing it on a consistent basis, as illustrated in the sketch that follows.
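As an illustration of the cluster approach, the following SAN Volume Controller or Storwize V7000 CLI sketch creates one host object for a two-host ESXi cluster and maps a volume to it once; the object name, WWPNs, volume name, and SCSI ID are all hypothetical.

# Create one generic host object containing the WWPNs of both ESXi hosts
svctask mkhost -name ESXi_Cluster1 -fcwwpn 2100001B321A2B3C:2100001B321A2B3D:2100001B325E6F7A:2100001B325E6F7B

# Map a volume once for the whole cluster; specifying the SCSI ID keeps
# LUN numbering consistent across the ESXi hosts
svctask mkvdiskhostmap -host ESXi_Cluster1 -scsi 0 VMFS_DS01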

iSCSI connectivity
VMware ESXi includes a software iSCSI initiator that can be used with 1 GbE or 10 GbE Ethernet connections. The VMware ESXi software iSCSI initiator is the only supported way to connect to SAN Volume Controller or the Storwize V7000 disk family. Each VMware ESXi host can have a single iSCSI initiator, and that initiator provides the source iSCSI qualified name (IQN).


A single iSCSI initiator does not limit the number or type of network interface cards (NICs) that can be used for iSCSI storage access. VMware best practice recommends that for each physical NIC that will be used, a matching VMkernel port be created and bound to that physical NIC. The VMkernel port is assigned an IP address, while the physical NIC acts as a virtual switch uplink. Figure 15 provides an example of a VMware ESXi iSCSI initiator configured with two VMkernel IP addresses, bound to two physical NICs.

Figure 15: VMware iSCSI software initiator port binding
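In ESXi 5.0, the same port binding can be configured from the command line; the VMkernel port and software iSCSI adapter names below are assumptions for illustration:

# Bind each iSCSI VMkernel port to the software iSCSI adapter (one line per NIC)
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33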

The VMware ESXi iSCSI initiator supports two types of storage discovery, which are covered in the following sections.

Static iSCSI discovery


With the static discovery method, the target iSCSI ports on the SAN Volume Controller or Storwize V7000 disk family systems are entered manually into the iSCSI configuration. Figure 16 provides an example of this configuration type.


Figure 16: Static iSCSI discovery

With static discovery, each of the source VMkernel IP addresses performs an iSCSI session login to each of the static target port IPs. Each time a login occurs, an iSCSI session is registered on the node on which the login occurred. The following example provides an overview of the sessions created with static discovery.

Source VMkernel IPs:
1.1.1.1
1.1.1.2

SAN Volume Controller or Storwize V7000 disk family system target IPs:
1.1.1.10
1.1.1.11
1.1.1.12
1.1.1.13

The following sessions are created on each node (shown here for the node owning target ports 1.1.1.10 and 1.1.1.11):
Vmk-0 (1.1.1.1) to Port-0 (1.1.1.10)
Vmk-0 (1.1.1.1) to Port-1 (1.1.1.11)
Vmk-1 (1.1.1.2) to Port-0 (1.1.1.10)
Vmk-1 (1.1.1.2) to Port-1 (1.1.1.11)


Figure 17: Sessions created with iSCSI discovery

This static discovery configuration results in four sessions per node.
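A command-line sketch of adding one of these static targets on ESXi 5.0, assuming the software iSCSI adapter is vmhba33 and using a placeholder target IQN; repeat the command for each target port IP:

# Register a static target (one command per target port IP)
esxcli iscsi adapter discovery statictarget add \
    --adapter vmhba33 \
    --address 1.1.1.10:3260 \
    --name iqn.1986-03.com.ibm:2145.cluster1.node1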

Dynamic iSCSI discovery


Dynamic iSCSI discovery simplifies the setup of the iSCSI initiator because only one target IP address must be entered. The VMware ESXi host queries the storage system for the available target IPs, all of which are then used by the iSCSI initiator. Figure 18 provides an example of the dynamic iSCSI discovery configuration within VMware ESXi.


Figure 18: Dynamic iSCSI discovery
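The equivalent ESXi 5.0 command-line sketch for dynamic discovery, again assuming the adapter name vmhba33:

# Add a single send-target address; the host discovers the remaining target ports
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 1.1.1.10:3260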

When using iSCSI connectivity for VMware ESXi along with SAN Volume Controller and the Storwize V7000 disk family systems, the important thing to note is how many iSCSI sessions are created on each node. For storage systems running software older than 6.3.0.1, only one session can be created on each node. This means that the following rules must be followed:
- Maximum of one VMware iSCSI initiator session per node.
- Static discovery only; dynamic discovery is not supported.
- The VMware ESXi host iSCSI initiator can have only one VMkernel IP associated with one physical NIC.

For storage systems running software version 6.3.0.1 or newer, these rules change to the following:
- Maximum of four VMware iSCSI initiator sessions per node.
- Static and dynamic discovery are both supported.
- The VMware ESXi host iSCSI initiator can have up to two VMkernel IPs associated with two physical NICs.

The configuration presented in Figure 17 provides redundancy for the VMware ESXi host because two physical NICs are actively used. It also uses all available ports on both nodes of the storage system. Regardless of whether static or dynamic discovery is used, this configuration stays within the guideline of four sessions per node.
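The resulting sessions can be counted from the ESXi 5.0 command line to confirm that the per-node limits are respected; vmhba33 is again an assumed adapter name:

# List the active iSCSI sessions established by the software initiator
esxcli iscsi session list --adapter vmhba33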


General storage best practices for VMware


Storage for virtualized workloads requires considerations that might not be applicable to other workload types. Virtualized workloads often contain a heterogeneous mixture of applications, sometimes with disparate storage requirements. The following sections outline VMware storage sizing best practices on SAN Volume Controller and the Storwize V7000 disk family, and also detail how to use thin provisioning, IBM System Storage Easy Tier, and compression with VMware virtualized workloads.

Physical storage sizing best practices


There are two basic attributes to consider when sizing storage: storage capacity and storage performance. Storage performance does not scale with drive size; larger drives generally provide more capacity with less performance per terabyte, while smaller drives generally provide less capacity with more performance. For example, to satisfy a storage capacity requirement of 1 TB, either of the following disk configurations can be deployed:

Two 600 GB 10,000 rpm SAS drives
This configuration satisfies the storage capacity requirement and provides roughly 300 input/output operations per second (IOPS) of storage performance (roughly 150 IOPS per drive).

Eight 146 GB 15,000 rpm SAS drives
This configuration satisfies the storage capacity requirement and provides roughly 1,400 IOPS of storage performance (roughly 175 IOPS per drive).

In both cases, the storage capacity requirement is satisfied; however, the storage performance offered by each disk configuration is very different. To understand the storage capacity and performance requirements of a VMware virtualized workload, it is important to understand which applications and workloads are running inside the virtual machines. Twelve virtual machines running Microsoft Windows 2008 do not generate a significant amount of storage load. However, if those 12 virtual machines are also running Microsoft SQL Server, the storage performance requirement can be very high.

The SAN Volume Controller and the Storwize V7000 disk family systems enable volumes to use a large number of disk spindles by striping data across all of the spindles contained in a storage pool. A storage pool can contain a single managed disk (MDisk) or multiple MDisks. The IBM Redbooks publication titled SAN Volume Controller Best Practices and Performance Guidelines provides performance best practices for configuring MDisks and storage pools. It can be found at the following website: ibm.com/redbooks/redbooks.nsf/RedbookAbstracts/sg247521.html?OpenDocument

The heterogeneous storage workload created by VMware benefits from the volume striping performed by SAN Volume Controller and the Storwize V7000 disk family. It is still important to ensure that the storage performance provided by the storage pool satisfies the workload; however, because most VMware workloads vary in their times of peak demand, volume striping and the larger number of available spindles enable greater consolidation.

Volume and datastore sizing


With VMware vSphere 5.0, the maximum VMFS datastore size for a single extent was raised to 64 TB. That means a single storage volume of 64 TB can be provisioned, mapped to a VMware ESXi host, and formatted as a VMFS datastore. Support for large volumes has been made possible by eliminating the legacy SCSI-2 locking that VMware used to maintain VMFS integrity. More information about this is available in the VMware storage integrations section.

The advantage of large volumes to VMware administrators is simplified management. As long as a storage pool has the storage capacity and performance available to satisfy the requirements, the maximum VMFS datastore size of 64 TB can be used. However, operations such as IBM FlashCopy, Metro Mirror or Global Mirror, or Volume Mirroring are impacted by volume size; for example, the initial replication of a 64 TB volume can take a significant amount of time.

VMware vSphere 5.0 offers the ability to create datastore clusters, which are a logical grouping of VMFS datastores into a single management object. This means that smaller, more manageable VMFS volumes can be grouped into a single management object for VMware administrators. The IBM recommendation for volume and datastore sizing with VMware vSphere 5.0 and SAN Volume Controller and the Storwize V7000 disk family is to use volumes sized between 1 TB and 10 TB, and to group these together into VMware datastore clusters.

Thin provisioning with VMware


Thin provisioning is included on the SAN Volume Controller and Storwize V7000 disk family systems and can be seamlessly implemented with VMware. Thin volumes can be created and provisioned to VMware, or volumes can be nondisruptively converted to thin with Volume Mirroring. VMware vSphere also includes the ability to create thin-provisioned virtual machine disks, or to convert virtual machine disks during a Storage VMotion operation.

Normally, to realize the maximum benefit of thin provisioning, thin virtual disk files must be placed on thinly provisioned storage volumes. This ensures that only the space used by virtual machines is consumed in the VMFS datastore and on the storage volume. This makes for a complicated configuration, because capacity can be over-provisioned and must be monitored at both the VMFS datastore and storage pool levels.

SAN Volume Controller and the Storwize V7000 disk family systems simplify thin provisioning through a feature called Zero Detect. Regardless of which VMware virtual machine disk type is being used (zeroed thick, eager-zeroed thick, or thin), the SAN Volume Controller or Storwize V7000 disk family system detects the zero blocks and does not allocate space for them. This means that a VMware eager-zeroed thick virtual disk consumes no more space on the storage system than a VMware thin-provisioned virtual disk. The IBM recommendation is to implement and monitor thin provisioning on the storage system, and to deploy the zeroed thick or eager-zeroed thick disk types for virtual machines.
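As a sketch of this recommendation, an existing virtual disk can be converted to the eager-zeroed thick format with the vmkfstools utility on the ESXi host; the datastore path and file names are hypothetical.

# Clone a virtual disk to eager-zeroed thick format; with Zero Detect, the
# zeroed blocks consume no physical capacity on the storage system
vmkfstools -i /vmfs/volumes/Datastore01/vm1/vm1.vmdk \
    /vmfs/volumes/Datastore01/vm1/vm1-ezt.vmdk -d eagerzeroedthick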

Using Easy Tier with VMware


SAN Volume Controller and the Storwize V7000 disk family have a classification of storage pool called a hybrid pool. A storage pool is considered a hybrid pool when it contains a mixture of solid-state drives (SSDs) and standard spinning disks. The main advantage of a hybrid storage pool is that IBM Easy Tier can be enabled. Easy Tier enables effective use of SSD storage by monitoring the I/O characteristics of a virtualized volume or storage pool and migrating the frequently accessed portions of data to the higher-performing SSDs. Easy Tier reduces the overall cost and investment of SSDs by effectively managing their usage. Easy Tier works seamlessly with respect to the applications accessing the storage, including VMware and the virtualized workloads running on it. There are also no special configurations that must be implemented for an application to benefit from Easy Tier.

Using IBM Real-Time Compression with VMware


IBM Real-time Compression is seamlessly implemented on SAN Volume Controller and the Storwize V7000 disk family by providing a new volume type, the compressed volume. Real-time Compression uses the Random Access Compression Engine (RACE) technology, previously available in the IBM Real-time Compression Appliance, to compress incoming host writes before they are committed to disk. This results in a significant reduction in the amount of data that must be stored. Real-time Compression is enabled at the storage volume level, so in a vSphere 5.0 environment, all virtual machines and data stored within a VMFS datastore that resides on a compressed volume are compressed. This includes operating system files, installed applications, and any data. Lab testing and real-world measurements have shown that Real-time Compression reduces the storage capacity consumed by VMware virtual machines by up to 70%. Real-time Compression works seamlessly with the VMware ESXi hosts, and therefore, no special VMware configurations or practices are required. For more information about Real-time Compression and VMware, refer to the Using the IBM Storwize V7000 Real-time Compression feature with VMware vSphere 5.0 white paper at: ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_v7000_real_time_compression

VMware storage integrations


VMware has always provided an ecosystem in which partners can integrate their products and provide additional functionality for the virtualized infrastructure. Storage partners have the opportunity to integrate with several VMware application programming interfaces (APIs) to provide additional functionality, enhanced performance, and integrated management. The following sections outline some of these key integration points.

vStorage APIs for Array Integration


The vStorage APIs for Array Integration (VAAI) are a set of APIs available to VMware storage partners that, when leveraged, allow certain VMware functions to be delegated to the storage array, enhancing performance and reducing the load on servers and storage area networks (SANs). Figure 19 provides a high-level overview of the VAAI functions.


Figure 19: vStorage APIs for Array Integration relationship to VMware functions

The implementation of vStorage APIs for Array Integration in vSphere 4.1 introduced three primitives: hardware-accelerated block zero, hardware-assisted locking, and hardware-accelerated full copy. The VMware vSphere 4.1 implementation of VAAI does not use standard SCSI commands to provide instructions to the storage array, so a device driver must be installed on vSphere 4.1 ESX/ESXi hosts. You can find more details about installing the IBM Storage Device Driver for VMware VAAI at the following URL: http://delivery04.dhe.ibm.com/sar/CMA/SDA/02l6n/1/IBM_Storage_DD_for_VMware_VAAI_1.2.0_IG.pdf

The vSphere 5.0 implementation of VAAI moved to standard SCSI commands, so a device driver no longer needs to be installed on vSphere 5.0 ESXi hosts. With both the vSphere 4.1 and 5.0 implementations of VAAI, the SAN Volume Controller or Storwize V7000 disk family system must be running software version 6.2.x or newer. The VAAI primitives can be easily enabled and disabled with the following methods:

Controlling VAAI through the vSphere client


The VAAI primitives for hardware-accelerated block zero and full copy, namely DataMover.HardwareAcceleratedInit and DataMover.HardwareAcceleratedMove respectively, can be enabled and disabled in the vSphere host advanced settings, as shown in Figure 20. The hardware-assisted locking primitive can be controlled by changing the VMFS3.HardwareAcceleratedLocking setting, as shown in Figure 21.


Figure 20: Controlling hardware-accelerated block zero or full copy

Figure 21: Controlling hardware-assisted locking


Controlling VAAI through the command line


The VAAI primitives can also be controlled through the command-line interface by using the esxcfg-advcfg command. To view the status of a setting, use the esxcfg-advcfg command with the -g option:

~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
~ #

To change a setting, use the esxcfg-advcfg command with the -s option:


~ # esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 0
~ # esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
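Whether a given volume actually reports VAAI support can also be checked per device on ESXi 5.0; the device identifier below is hypothetical:

# Report per-device VAAI primitive status (ATS, Clone, Zero, Delete)
esxcli storage core device vaai status get
esxcli storage core device vaai status get --device naa.6005076802808102b000000000000011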

IBM Storage Management Console for VMware vCenter


IBM has taken advantage of the open plug-in architecture of VMware vCenter Server to develop the IBM Storage Management Console for VMware vCenter Server. The IBM Storage Management Console is a software plug-in that integrates into VMware vCenter and enables management of the supported IBM storage systems, including:
- IBM System Storage SAN Volume Controller
- IBM XIV Storage System
- IBM Storwize V7000
- IBM Storwize V7000 Unified
- IBM Scale Out Network Attached Storage (SONAS)

The IBM Storage Management Console for VMware is installed and runs as a Microsoft Windows Server service on the vCenter Server. When a vSphere client connects to the vCenter Server, the running service is detected and the features provided by the Storage Management Console are enabled for the client. Features of the IBM Storage Management Console include:
- Integration of the IBM storage management controls into the VMware vSphere graphical user interface (GUI), with the addition of an IBM storage resource management tool and a dedicated IBM storage management tab
- Full management of the storage volumes, including volume creation, deletion, resizing, renaming, mapping, unmapping, and migration between storage pools
- Detailed storage reporting, such as capacity usage, FlashCopy or snapshot details, and replication status

The graphic in Figure 22 shows the relationships and interaction between the IBM plug-in, VMware vCenter and vSphere, and the IBM storage system.


Figure 22: Relationships and interaction between components

Installation and configuration


You can download the IBM Storage Management Console for VMware vCenter by accessing the IBM Fix Central website (at ibm.com/support/fixcentral/) and searching for updates available for any of the supported IBM storage systems. Download the installation package that is appropriate for the architecture of the vCenter server:
- On x86 architectures: IBM_Storage_Management_Console_For_VMware_vCenter2.5.1-x86.exe
- On x64 architectures: IBM_Storage_Management_Console_For_VMware_vCenter2.5.1-x64.exe

An installation and administrative guide is included in the software download.

VMware vStorage APIs for Data Protection


The vStorage APIs for Data Protection (VADP) are the enabling technology for performing backups of VMware vSphere environments. IBM Tivoli Storage Manager for Virtual Environments integrates with VADP to perform the following types of backup:
- Full, differential, and incremental full virtual machine (image) backup and restore
- File-level backup of virtual machines running supported Microsoft Windows and Linux operating systems

- Data-consistent backups that leverage the Microsoft Volume Shadow Copy Service (VSS) for virtual machines running supported Microsoft Windows operating systems


By using VADP, Tivoli Storage Manager for Virtual Environments can centrally back up virtual machines across multiple vSphere hosts without requiring a backup agent within the virtual machines. The backup operation is offloaded from the vSphere host, allowing the host to run more virtual machines. Figure 23 shows an example of a Tivoli Storage Manager for Virtual Environments architecture.

Figure 23: Tivoli Storage Manager for Virtual Environments architecture

Tivoli Storage Manager for Virtual Environments includes a GUI that can be used from the VMware vSphere Client. The Data Protection for VMware vCenter plug-in is installed as a vCenter Server extension in the Solutions and Applications panel of the vCenter Server system. The Data Protection for VMware vCenter plug-in can be used to complete the following tasks:
- Create and initiate, or schedule, a backup of virtual machines to a Tivoli Storage Manager server
- Restore files or virtual machines from a Tivoli Storage Manager server to the vSphere host or datastore
- View reports of backup, restore, and configuration activities


Figure 24 shows the Getting Started page which is displayed when the plug-in is first opened.

Figure 24: The Getting Started page of Tivoli Data Protection for VMware vCenter plug-in


Summary
The IBM System Storage SAN Volume Controller and the Storwize V7000 disk family systems provide scalability and performance for VMware vSphere environments through the native characteristics of the storage systems, and also through VMware API integrations. This paper outlined configuration best practices for using SAN Volume Controller and the Storwize V7000 disk family, and also included information on efficiency features such as thin provisioning, Easy Tier, and Real-Time Compression that can be seamlessly deployed within a VMware environment.


Resources
The following websites provide useful references to supplement the information contained in this paper:

- IBM Systems on PartnerWorld: ibm.com/partnerworld/systems
- IBM Redbooks: ibm.com/redbooks
- IBM System Storage Interoperation Center (SSIC): ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
- IBM Storwize V7000: ibm.com/storage/storwizev7000
- IBM System Storage SAN Volume Controller: ibm.com/systems/storage/software/virtualization/svc/index.html
- IBM TechDocs Library: ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
- VMware vSphere 5 Documentation Center: pubs.vmware.com/vsphere50/index.jsp?topic=/com.vmware.vsphere.install.doc_50/GUID-7C9A1E23-7FCD-4295-9CB1-C932F2423C63.html


Trademarks and special notices


© Copyright IBM Corporation 2012. References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

