Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.
Table of Contents
Solution Components
Hitachi Virtual Storage Platform
Hitachi Adaptable Modular Storage 2300
Hitachi Dynamic Provisioning Software
Hitachi Dynamic Tiering
Hitachi Universal Volume Manager
VMware vSphere 4
Storage Configuration Best Practices
Redundancy
Zone Configuration
Storage Host Group Configuration
Hitachi Dynamic Provisioning Software Best Practices
Hitachi Dynamic Tiering Software Best Practices
Use Cases for Dynamic Tiering on vSphere
Automatic and Manual Mode
Distributing Computing Resources Using DRS and Dynamic Provisioning Software
Universal Volume Manager
Disk Alignment and RAID Stripe Size
ESX Host Configuration
Multipathing
Queue Depth
ESX Host Metrics to Monitor
SCSI Reservations
VMkernel Advanced Disk Parameters
Scalability Best Practices
Conclusion
Solution Components
This section describes the hardware and software components mentioned in this best practices guide.
Figure 1 shows the two types of racks available for a Virtual Storage Platform.
Figure 1. Control Rack and Disk Expansion Rack
Figure 2 shows the logic boards in a fully populated single-chassis Virtual Storage Platform.
Figure 2
Figure 2 shows the following chassis components (note that a feature is a pair of boards on two separate power domains):
GSW (Grid Switch): PCI Express switch. One or two features (two or four boards) per control unit, each with 24 2GB/sec HiStar-E ports, can be installed.
DCA (Data Cache Adapter): Cache memory. One to four features (two, four, six or eight boards) per control unit, each with up to 32GB of RAM, can be installed.
VSD (Virtual Storage Director): Processor module. One or two features (two or four boards) per control unit can be installed.
FED (Front-End Director): Host port module. One to four features (two, four, six or eight boards) per control unit, each with four or eight 8Gbps Fibre Channel ports, can be installed.
BED (Back-End Director): Disk controller module. One or two features (two or four boards) per control unit, with eight 6Gbps SAS links per board, can be installed. If the back-end director options are not installed (available for the single-chassis configuration only), two additional front-end director options can be used in those chassis slots.
3D Scaling Architecture
The Hitachi Virtual Storage Platform allows for optimal infrastructure growth in all dimensions by scaling up, scaling out and scaling deep.
Scale Up
Scale up to meet increasing demands by dynamically adding processors, connectivity and capacity in a single unit, providing the highest performance for both open and mainframe environments. In the basic single-chassis configuration, the number of logic boards, disk containers and drives is highly scalable. You can start with the minimum set of 10 logic boards and one disk container, then add more boards (up to a total of 28 boards in a single chassis) and disk containers (up to a total of eight disk containers in a single chassis). Disk container types may be intermixed within a chassis.
Scale Out
Scale out to meet multiple demands by dynamically combining multiple units into a single logical system with shared resources, supporting increased demand in virtualized server environments, and ensuring safe multitenancy (that is, the ability to run multiple servers simultaneously without the risk of one server corrupting or modifying another server's data) and quality of service through partitioning of cache and ports. You can double the scalability of the Virtual Storage Platform with a dual-chassis system with up to six racks. The logic box in each chassis is the same, using the same types and numbers of logic boards. Any front-end port can access any back-end RAID group; no division within the storage system exists between the chassis. A dual-chassis Virtual Storage Platform can manage up to 247PB of total storage capacity.
Table 1 lists the capacity differences between a single-chassis and a dual-chassis Virtual Storage Platform storage system.
Table 1. Virtual Storage Platform Chassis Capacity Comparison
The comparison covers the maximum data cache, raw cache bandwidth, solid state drives, 2.5-inch SFF drives, 3.5-inch LFF drives and logical volumes (LDEVs) for each configuration.
Scale Deep
Scale deep to extend storage value by dynamically virtualizing new and existing external storage systems, extending Hitachi Virtual Storage Platform advanced functions to multivendor storage, and offloading less demanding data to external tiers to optimize the availability of your tier one resources. The Virtual Storage Platform provides the virtualization mechanisms that allow other storage systems to be attached to some of its front-end director Fibre Channel ports and to be accessed and managed through hosts that are attached to the host ports on the Virtual Storage Platform. As far as any host is concerned, all virtualized logical units passed through the Virtual Storage Platform appear to be internal logical units of the Virtual Storage Platform. The front-end ports on the Virtual Storage Platform that attach to the external storage system's front-end ports operate in external (SCSI initiator) mode, in which the port acts like a server toward the external storage, rather than the usual SCSI target mode used for ports attached to hosts. For more information about the Hitachi Virtual Storage Platform, see the Hitachi Data Systems web site.
VMware vSphere 4
vSphere 4 is a highly efficient virtualization platform that provides a robust, scalable and reliable infrastructure for the data center. vSphere 4 features like DRS, High Availability (HA) and Fault Tolerance (FT) provide an easy-to-manage platform. Use of vSphere 4's round robin multipathing policy distributes load across multiple host bus adapters (HBAs) and multiple storage ports. Use of DRS with Hitachi Dynamic Provisioning software automatically distributes loads on the ESX hosts and across the storage system's back end.
Figure 3
Redundancy
A scalable, highly available and easy-to-manage storage infrastructure requires redundancy at every level.

To take advantage of ESX's built-in multipathing support, each ESX host needs redundant HBAs. This provides protection against both HBA hardware failures and Fibre Channel link failures. When ESX 4.1 hosts are connected in this fashion to a Hitachi Virtual Storage Platform system, hosts can use a round robin multipathing algorithm where the I/O load is distributed across all available paths. Hitachi Data Systems recommends using a minimum of two HBAs for redundancy. All LUNs must be accessible to both HBAs.

Fabric redundancy is also crucial. If all of your ESX hosts are connected to a single Fibre Channel switch, a failure of that switch can affect every virtual machine stored on the storage system. To prevent a single point of failure in the fabric, deploy at least two Fibre Channel switches or a director class switch. If you use multiple Fibre Channel switches, connect one HBA from each host to one switch and the second HBA to the second switch. If you use a Fibre Channel director, follow the manufacturer's recommendations for connecting multiple HBAs. Hitachi Data Systems recommends using at least two Fibre Channel switches or a Fibre Channel director.

Redundancy is equally important within the storage system itself; without it, a single point of failure can expose all of your virtual machines. The Virtual Storage Platform provides redundancy with two storage clusters, where each storage cluster contains front-end directors. Each front-end director can contain either four or eight Fibre Channel ports. Cluster 1 contains all the odd-numbered Fibre Channel ports and cluster 2 contains all the even-numbered Fibre Channel ports. This protects against Fibre Channel link failures and storage controller failures: if one link fails, another link is available, and if one storage controller fails, another storage controller is available. All LUNs need to be mapped to at least two storage ports, one on each storage cluster.

Figure 4 shows a configuration containing two ESX hosts, a Virtual Storage Platform system and a Hitachi Adaptable Modular Storage 2000 family system connected to a SAN director with full Fibre Channel path redundancy.
Figure 4

Key Best Practice: Connect at least two HBAs from each ESX host to two Fibre Channel ports on the Virtual Storage Platform, one from each storage cluster. For example, connect HBA 1 to Fibre Channel port CL1-A and HBA 2 to Fibre Channel port CL2-A.
Zone Configuration
Zoning divides the physical fabric into logical subsets for enhanced security and data segregation. Incorrect zoning can lead to LUN presentation issues on ESX hosts. Two types of zones are available, each with advantages and disadvantages:

Port zoning: Uses a specific physical port on the Fibre Channel switch. Port zones provide better security and can be easier to troubleshoot than WWN zones, which might be advantageous in a smaller, static environment. The disadvantage is that the ESX host's HBA must always be connected to the specified port; moving an HBA connection results in loss of connectivity and requires rezoning.

WWN zoning: Uses the name server to map an HBA's WWN to a target port's WWN. The advantage is that the ESX host's HBA can be connected to any port on the switch, providing greater flexibility, which might be advantageous in a larger, dynamic environment. However, two disadvantages are reduced security and additional complexity when troubleshooting.

Hitachi Data Systems recommends using single initiator zones, which have one initiator (HBA) with single or multiple targets in a single zone.
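To illustrate what single-initiator zoning looks like in practice, the following minimal sketch builds one zone per HBA, each containing that HBA's WWN plus the storage port WWN it should reach. The host names, WWNs and storage port assignments are hypothetical placeholders; the output is only a naming and membership plan, and the actual zones are created with your switch vendor's tools.

```python
# Minimal sketch: generate a single-initiator zoning plan.
# All host names, WWNs and storage port names are hypothetical examples.

hba_wwns = {
    ("esx01", "hba1"): "10:00:00:00:c9:aa:bb:01",
    ("esx01", "hba2"): "10:00:00:00:c9:aa:bb:02",
    ("esx02", "hba1"): "10:00:00:00:c9:cc:dd:01",
    ("esx02", "hba2"): "10:00:00:00:c9:cc:dd:02",
}

# Storage cluster 1 port (odd) for hba1, storage cluster 2 port (even) for hba2.
storage_ports = {
    "hba1": {"CL1-A": "50:06:0e:80:aa:aa:aa:10"},
    "hba2": {"CL2-A": "50:06:0e:80:aa:aa:aa:20"},
}

def build_zones():
    """Return a dict of zone name -> member WWNs, with exactly one initiator per zone."""
    zones = {}
    for (host, hba), hba_wwn in hba_wwns.items():
        for port_name, port_wwn in storage_ports[hba].items():
            zone_name = f"{host}_{hba}_{port_name.replace('-', '')}"
            zones[zone_name] = [hba_wwn, port_wwn]  # one initiator, one target
    return zones

if __name__ == "__main__":
    for name, members in sorted(build_zones().items()):
        print(name, "->", "; ".join(members))
```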
Table 2 lists an example of ESX hosts with single-initiator zones and multiple paths.
Table 2. Sample Zoning with Multiple Paths per ESX Host
ESX host 1: HBA 1 (Port 1), HBA 2 (Port 1)
ESX host 2: HBA 1 (Port 1), HBA 2 (Port 1)
When connecting external storage to the Virtual Storage Platform, zone the fabric so that the external storage has multiple paths to the Virtual Storage Platform. Table 3 lists an example of external storage with redundant paths.
Table 3. Sample Zoning for External Storage with Redundant Paths
Figure 5
Hitachi Data Systems recommends that any configuration applied to a port in cluster 1 is also applied to the port in the same location in cluster 2. For example, if you create a host group for a host on port CL1-A, also create a host group for that host on port CL2-A. Figure 6 shows the front-end director port names for a Virtual Storage Platform system with two eight-port front-end director pairs installed.
Figure 6
Figure 7
Figure 8
Figure 8 illustrates the following Dynamic Tiering steps:

Allocate new page: New 42MB pages are allocated from the highest tier with space available. By default, SAS and SATA tiers reserve 8 percent of capacity for new page allocation; this can be adjusted from 0 to 50 percent through the command line.

Monitor: Access to the disks is counted and the average I/O per hour (IOPH) per page is entered into a table.

Determine optimal tier: Data is analyzed and a page relocation plan is calculated.

Relocate: Relocation (called reallocation in some commands) is executed one DP-VOL at a time, starting with the lowest-numbered DP-VOL. The next relocation cycle continues from the last DP-VOL completed. If the target tier has insufficient capacity to relocate all the pages requested, those pages are not relocated. Relocation completes when one of the following conditions occurs: all pages that are scheduled to be relocated, and can be, are relocated; the auto cycle time is reached; the pool configuration or parameters are modified; or relocation is canceled by a user.

Repeat: Several cycles might be needed to complete the relocation.
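To get a rough feel for the numbers involved, the sketch below computes how many 42MB pages a DP-VOL consumes and how much capacity the default 8 percent new-page reservation sets aside in a tier. The volume and tier sizes are made-up examples; only the 42MB page size and the 8 percent default come from the description above.

```python
# Minimal sketch of Dynamic Tiering page arithmetic (capacities are example values).
import math

PAGE_MB = 42                 # Dynamic Provisioning/Dynamic Tiering page size
NEW_PAGE_RESERVE = 0.08      # default reservation for new page allocations on SAS/SATA tiers

def pages_for_volume(volume_gb):
    """Number of 42MB pages needed to fully allocate a DP-VOL of the given size."""
    return math.ceil(volume_gb * 1024 / PAGE_MB)

def reserved_capacity_gb(tier_gb, reserve=NEW_PAGE_RESERVE):
    """Capacity a tier holds back for new page allocations."""
    return tier_gb * reserve

if __name__ == "__main__":
    print("100GB DP-VOL uses", pages_for_volume(100), "pages")            # about 2,439 pages
    print("10TB SAS tier reserves", reserved_capacity_gb(10 * 1024), "GB for new pages")
```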
Figure 9 shows the initial distribution of pages on tier 1 and tier 2 before Dynamic Tiering relocated the pages.
Figure 9
Next, Dynamic Tiering manual mode monitoring was started using the command line; it ran for two hours to collect IOPH information. After the monitoring cycle, manual relocation was started and allowed to run to completion, approximately 27 hours. Figure 10 shows the distribution of pages after the relocation process was complete.
Figure 10
The IOPS throughput and goals are shown in Figure 11. The highlighted bars show the VM with improved IOPS.
Figure 11
The response times of the virtual machines are shown in Figure 12. The highlighted bars show the VMs with improved response times.
Figure 12
Figure 13
Figure 14 shows the response times of the virtual machine I/O before and after reallocation.
Figure 14
Figure 15 shows the distribution of allocated space on SATA and SAS after the reallocation was complete. Note that before relocation, no pages were allocated to SAS disks.
Figure 15
Using a single tier of storage made up of LDEVs on SATA RAID-6 (6D+2P) disks only, virtual machine throughput and response times were unacceptable. Response times on all virtual machines were affected by disk contention. Relocation after adding LDEVs on SAS RAID-5 (3D+1P) disks took approximately 14 hours, after which response times and throughput improved.
Figure 16
Figure 17 and Figure 18 show how performance improved with the addition of SAS disks. Figure 17 shows the total IOPS for the ESX hosts before and after the relocation. Note that some ESX I/O overhead was caused by virtual machine log file I/Os, delta file I/O and other ESX host functions.
Figure 17
Figure 18 shows the total ESX guest milliseconds per command. This metric includes all I/O latencies combined.
Figure 18
When using multiple media types, follow these best practices:
- Size the SAS tier for the storage and performance of the most critical applications.
- For applications with lower performance requirements, use SATA disks, which can provide the following benefits: relief for the SAS tier by offloading the I/O, and additional capacity for applications with lower I/O requirements.

You can also add SSD media to the Dynamic Tiering pool. SSD is utilized as tier 1, SAS is utilized as tier 2 and SATA is utilized as tier 3. The addition of SSD to the Dynamic Tiering pool can improve application performance.
For example, if VM1 and VM2 run during business hours in the US, and VM3 and VM4 run during business hours in APAC, set monitoring to 24 hours so that pages used by all VMs are monitored and placed on the correct tier during the subsequent relocation cycle. For another example, if an application performs OLTP during business hours and batch processing during off hours, set the monitoring cycle to business hours when OLTP is running. This maximizes the page allocations for OLTP where response time is more critical.
Figure 19
In Figure 19, each standalone ESX host contains its own CPU and memory resources. When ESX hosts are configured in a DRS cluster, the resources are aggregated into a pool. This allows the resources to be used as a single entity. A virtual machine can run on any ESX host in the DRS cluster, rather than being tied to a single host. DRS manages these resources as a pool, automatically places virtual machines on a host at power-on and continues to monitor resource allocation. DRS uses vMotion to move virtual machines from one host to another when it detects a performance benefit or based on other optimization decisions. Figure 20 shows how Hitachi Dynamic Provisioning software aggregates disks into a Dynamic Provisioning pool.
Figure 20
Hitachi Dynamic Provisioning software aggregates all the allocated disks into a Dynamic Provisioning pool. All DP-VOLs are striped across all disks in the Dynamic Provisioning pool. This allows you to treat all the disks in the Dynamic Provisioning pool as a single entity. A single standard RAID group is bound by the IOPS available from the disks in that RAID group. When LDEVs are placed in a Dynamic Provisioning pool, the DP-VOLs are not tied to a single RAID group; instead, DP-VOLs span all the RAID groups and disks in the Dynamic Provisioning pool.
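The benefit of this wide striping can be estimated with simple arithmetic: a DP-VOL can draw on the back-end IOPS of every disk in the pool, while a volume on a standard RAID group is limited to that group's disks. The per-disk IOPS figure and pool layout in the sketch below are illustrative planning assumptions, not measured values.

```python
# Minimal sketch comparing back-end IOPS available to a volume on one RAID group
# versus a DP-VOL striped across a Dynamic Provisioning pool.
# The per-disk IOPS value is an assumed planning number, not a measurement.

DISK_IOPS = 180            # assumed IOPS per 15K RPM SAS disk
DISKS_PER_RAID_GROUP = 4   # for example, RAID-5 (3D+1P)
RAID_GROUPS_IN_POOL = 8

single_group_iops = DISK_IOPS * DISKS_PER_RAID_GROUP
pool_iops = single_group_iops * RAID_GROUPS_IN_POOL

print(f"Volume on one RAID group: ~{single_group_iops} back-end IOPS")
print(f"DP-VOL across the pool:   ~{pool_iops} back-end IOPS shared by all DP-VOLs")
```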
Figure 21 shows how standalone ESX configurations with standard RAID groups can have unbalanced loads.
Figure 21
With standalone ESX hosts, computing resources might be heavily used by certain virtual machines. These virtual machines might also be underperforming because the host can no longer provide any more resources; the host becomes a limiting factor. Other ESX hosts in the same farm might be moderately used or might be sitting idle. Meanwhile, the disk utilization in a RAID group might be at its performance limit handling I/Os from the virtual machines. Other RAID groups might be moderately used or lightly used. In this scenario, heavy imbalance of resource utilization exists on both the host side and the storage side. Utilization of resources changes throughout the day and only careful monitoring and manual administration can balance these loads. Figure 22 shows how DRS can distribute loads on the ESX hosts.
Figure 22
With ESX hosts configured in a DRS cluster, computing resources are managed as a pool. DRS automatically uses vMotion to move virtual machines to other hosts to evenly balance utilization or for performance benefits. Monitoring and placement of the virtual machines is done automatically by DRS; no manual administration is required. However, when used with standard RAID groups, a heavy imbalance still exists on the storage system. To balance the loads on the storage system, manual monitoring is required, and Storage vMotion can be used to migrate virtual disks to other RAID groups when necessary. Figure 23 shows how DRS with Hitachi Dynamic Provisioning software can distribute loads on the ESX hosts and the RAID groups.
Figure 23
When RAID groups are configured in a Dynamic Provisioning pool, all the disks in the Dynamic Provisioning pool are treated as a single entity. DP-VOLs span multiple RAID groups in the Dynamic Provisioning pool. This distributes the I/O across all the disks in the pool. By using vSphere DRS, Hitachi Dynamic Provisioning software and round robin multipathing together, computing resources and I/O load are automatically distributed for better performance and scalability.
Figure 24
Universal Volume Manager offers two cache mode settings:

Cache Mode = Enable: Processes I/O to external LDEVs exactly the same as internal LDEVs. When a host write occurs, the data is duplexed in cache and an immediate I/O-complete response is sent back to the host.

Cache Mode = Disable: The default; tells the Virtual Storage Platform to hold off on sending an I/O-complete response to the host until the I/O has been committed to the external storage system.
The cache mode setting does not change the cache handling for read I/Os. On a read request, the Virtual Storage Platform examines the cache to see if the data is available; if it is, the data is returned from cache, and if it is not, the data is retrieved from the external storage system. Slower external storage systems can cause cache write pending to rise and affect the throughput of other hosts or LDEVs. Do not use Cache Mode = Enable when high IOPS are expected to the external storage system.

Universal Volume Manager also offers the ability to set the queue depth on the external storage connected to the Virtual Storage Platform. Keep the following in mind when adjusting the queue depth setting:
- The range of queue depth values is 2 to 128, with a default of 8.
- Increasing the queue depth from the default setting of 8 to 32, 64 or 128 can have a positive effect on the response time of OLTP-type applications.
- The maximum queue depth on an external port is 256, so if multiple external storage systems are attached, be sure that the queue depths of all external LUNs on that port do not exceed 256 in total.

In a Universal Volume Manager configuration that uses a Hitachi Adaptable Modular Storage 2000 family storage system connected as external storage, Dynamic Provisioning pools can be created on the Virtual Storage Platform system, on the Adaptable Modular Storage 2000 family system, or on both. Hitachi Data Systems recommends placing the Dynamic Provisioning pool on the Virtual Storage Platform only. Placing a Dynamic Provisioning pool on the Adaptable Modular Storage 2000 family system can result in administrative overhead, poor performance and poor utilization of the thin provisioning feature of Dynamic Provisioning.
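Returning to the 256-command external port limit described above, it is worth checking the budget before raising the per-LU queue depth. The following minimal sketch totals the queue depth of all external LUNs behind one external port; the LUN counts and per-system settings are hypothetical examples.

```python
# Minimal sketch: verify that external LUN queue depths stay within the
# 256-command limit of a Virtual Storage Platform external port.
# The LUN counts per external system are made-up examples.

EXTERNAL_PORT_LIMIT = 256

external_luns = {
    "AMS2300_array_1": {"lun_count": 4, "queue_depth": 32},
    "AMS2300_array_2": {"lun_count": 8, "queue_depth": 8},
}

total = sum(s["lun_count"] * s["queue_depth"] for s in external_luns.values())
print(f"Total queued commands possible on this external port: {total}")
if total > EXTERNAL_PORT_LIMIT:
    print("Over the 256 limit: lower the per-LU queue depth or spread LUNs across ports")
else:
    print("Within the 256 limit")
```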
A Start value of 128 indicates an aligned partition; a Start value of 63 indicates that the partition is not aligned. If the VMFS is not properly aligned, consider migrating the VMs to another LU and recreating the volume. If this is not an option, see VMware's Performance Best Practices for VMware vSphere 4 white paper. With Windows 2008, newly created partitions are properly aligned. New partitions created with previous versions of the Windows operating system are not aligned by default. When a partition that was created on an earlier version of Windows is attached to Windows 2008, it keeps the same partition properties it had when it was created. For more information, see Microsoft's Disk Partition Alignment Best Practices for SQL Server article.
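A quick way to reason about the Start values mentioned above: a partition is aligned when its starting byte offset (start sector times sector size) falls on the stripe or chunk boundary. The sketch below applies that check; the 64KB boundary is an illustrative choice, so substitute the stripe size appropriate to your configuration.

```python
# Minimal sketch: check whether a partition start sector is aligned.
# The 64KB alignment boundary here is an illustrative value; use the
# stripe/chunk size appropriate for your configuration.

SECTOR_BYTES = 512
ALIGN_BYTES = 64 * 1024   # 64KB boundary

def is_aligned(start_sector, align_bytes=ALIGN_BYTES):
    """True if the partition's byte offset falls on the alignment boundary."""
    return (start_sector * SECTOR_BYTES) % align_bytes == 0

print(is_aligned(128))  # True:  Start value 128 (64KB offset) is aligned
print(is_aligned(63))   # False: Start value 63 is the legacy unaligned offset
```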
Figure 25
When an application issues an I/O, the guest operating system's virtual adapter driver passes the I/O to the virtual SCSI adapter in the virtual machine monitor (VMM). The I/O is then passed to the VMkernel. At this point, the I/O can take different routes based on the kind of virtual disk the virtual machine uses:
- If virtual disks on VMFS are used, the I/O passes through the virtual SCSI layer, then through the VMFS layer.
- If NFS is used, the I/O passes through the NFS layer.
- If raw device mapping (RDM) is used, it can be virtual or physical. A virtual RDM passes through the virtual SCSI layer, whereas a physical RDM uses the virtual machine guest operating system's SCSI layer and bypasses the virtual SCSI layer.

The I/O is then issued to the pluggable storage architecture (PSA), which determines the path on which the I/O is sent. The I/O is then passed to the HBA driver queue and on to the Virtual Storage Platform. The testing done to support this best practices guide used VMFS.
Multipathing
ESX 4.1 uses the PSA, which allows the use of VMware Native Multipathing (NMP) and Path Selection Plug-ins (PSPs). As shown in Figure 22, when I/O is issued through the PSA, the NMP calls the PSP assigned to the storage device. The PSP determines the path to which the I/O is sent. If the I/O completes, NMP reports its completion. If the I/O does not complete, the Storage Array Type Plug-in (SATP) is called to interpret the error codes and activate paths if necessary; the PSP then resets the path selection. When the Virtual Storage Platform is used with ESX 4.1, NMP is automatically configured with the following plug-ins:
SATP: Default active-active storage system type (VMW_SATP_DEFAULT_AA)
PSP: Fixed path policy (VMW_PSP_FIXED)
However, the Virtual Storage Platform also supports the round robin multipathing policy (VMW_PSP_RR). Round robin rotates through all available paths, distributing the I/O load across the paths.
Key Best Practice: To maximize the capabilities of the Hitachi Virtual Storage Platform, use the round robin path policy.
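One way to apply the round robin policy on an ESX 4.1 host is through the esxcli NMP namespace. The minimal sketch below wraps the two commands in Python for repeatability; the device identifier is a placeholder, and the exact esxcli syntax should be confirmed against the documentation for your ESX version before use.

```python
# Minimal sketch: set round robin (VMW_PSP_RR) on an ESX 4.1 host by wrapping
# esxcli NMP commands. The NAA device identifier below is a placeholder, and
# the command syntax should be verified against your ESX 4.1 documentation.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Make round robin the default PSP for the active-active SATP used by the VSP.
run(["esxcli", "nmp", "satp", "setdefaultpsp",
     "--satp", "VMW_SATP_DEFAULT_AA", "--psp", "VMW_PSP_RR"])

# Apply round robin to an existing device (placeholder NAA identifier).
run(["esxcli", "nmp", "device", "setpolicy",
     "--device", "naa.60060e8005xxxxxxxxxx", "--psp", "VMW_PSP_RR"])
```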
Queue Depth
Queue depth settings for ESX hosts can be complex and difficult to calculate. If you see a QFULL condition in the VMkernel logs, you can enable the VMkernel adaptive queue depth algorithm. Hitachi Data Systems recommends using this only as a temporary measure until you can calculate and adjust the queue depth at the HBA. For more information, see VMware's Controlling LUN queue depth throttling in VMware ESX/ESXi Knowledge Base article. Unless you see problems in the VMkernel logs as described in that Knowledge Base article, Hitachi Data Systems recommends using a maximum queue depth setting of 32 in most environments. Monitor the ESX hosts to verify that this setting is optimal. For more information on monitoring the ESX host, see Table 4.
Figure 26 shows 16 SAS disk drives in a Dynamic Provisioning pool, with four LUNs created from the pool and presented to two ESX hosts, each with two HBAs.
Figure 26
In very large Dynamic Provisioning pools, the wide striping advantage can be beneficial; however, you must take care when assigning LUNs. With only a few large LUNs in the Dynamic Provisioning pool, this calculation results in a very high queue depth and poor performance.
Key Best Practice: Do not exceed the queue depth of 2,048 for the Fibre Channel port on the Virtual Storage Platform.
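The 2,048 limit can be sanity-checked with the same kind of arithmetic used for LUN queue depths: multiply the hosts, LUNs and per-LUN queue depth that share a storage port. The host and LUN counts in the sketch below are example values only.

```python
# Minimal sketch: estimate the worst-case number of commands that ESX hosts
# can queue against one Virtual Storage Platform front-end port.
# The host, LUN and queue depth values are example numbers.

PORT_QUEUE_LIMIT = 2048

hosts_sharing_port = 4     # ESX hosts with a path through this port
luns_per_host = 16         # LUNs presented to each host on this port
lun_queue_depth = 32       # per-LUN queue depth on the hosts

worst_case = hosts_sharing_port * luns_per_host * lun_queue_depth
print(f"Worst-case outstanding commands on the port: {worst_case}")
if worst_case > PORT_QUEUE_LIMIT:
    print("Exceeds 2,048: reduce the LUN queue depth, LUN count or hosts per port")
```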
The procedure to change queue depths differs depending on the HBA vendor and the type of HBA driver. To change queue depths for Emulex and QLogic drivers, see the VMware Knowledge Base article Changing the Queue Depth for QLogic and Emulex HBAs. When changing LU queue depths, you typically also change the Disk.SchedNumReqOutstanding ESX advanced parameter. However, Hitachi Data Systems recommends setting this parameter to a value no higher than the default of 32 on ESX 4.1. This parameter affects the number of outstanding commands to a target when competing virtual machines exist. Monitoring queue depth is an important part of monitoring your environment and troubleshooting performance problems. When the queue depth is exceeded, I/O is queued in the VMkernel, which can increase I/O latency for the virtual machines. The esxtop or resxtop utilities can be used to monitor queue depth on ESX 4 at the storage adapter, device and virtual machine levels.
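For reference, the queue depth and Disk.SchedNumReqOutstanding changes described above can be scripted. The sketch below shows the general shape for a QLogic HBA on an ESX 4.1 host using esxcfg-module and esxcfg-advcfg; the module and parameter names are the ones commonly documented for ESX 4.x QLogic drivers, so confirm them in the Knowledge Base article for your driver version before running anything, and note that the module setting requires a host reboot to take effect.

```python
# Minimal sketch: set a QLogic LUN queue depth of 32 and keep
# Disk.SchedNumReqOutstanding at its default of 32 on an ESX 4.1 host.
# Module and parameter names are the commonly documented ones for ESX 4.x
# QLogic drivers; verify them for your driver version before use.
# The module setting takes effect only after a host reboot.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Per-LUN queue depth for the QLogic HBA driver.
run(["esxcfg-module", "-s", "ql2xmaxqdepth=32", "qla2xxx"])

# Keep the VMkernel per-target outstanding command limit at the default of 32.
run(["esxcfg-advcfg", "-s", "32", "/Disk/SchedNumReqOutstanding"])
```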
Table 4. ESX Host Metrics to Monitor

AQLEN: Maximum number of ESX VMkernel active commands that the adapter driver is configured to support (storage adapter queue depth).
LQLEN: Maximum number of ESX VMkernel active commands that the LU is allowed to have (LUN queue depth).
ACTV: Number of commands in the ESX VMkernel that are currently active for a LUN.
QUED: Number of commands in the ESX VMkernel that are currently queued for a LUN.
%USD: Percentage of the queue depth used by ESX VMkernel active commands for a LUN.
CMDS/s: Number of commands issued per second for a device.
READS/s: Number of read commands issued per second for a device.
WRITES/s: Number of write commands issued per second for a device.
MBREAD/s: Megabytes read per second for a device.
MBWRTN/s: Megabytes written per second for a device.
DAVG/cmd: Average device latency per command in milliseconds.
KAVG/cmd: Average ESX VMkernel latency per command in milliseconds.
GAVG/cmd: Average guest OS latency per command in milliseconds.
QAVG/cmd: Average queue latency per command in milliseconds.
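Reading these counters together is what makes them useful: GAVG/cmd is approximately DAVG/cmd plus KAVG/cmd, so a high KAVG with a modest DAVG points at queuing in the VMkernel rather than at the storage system. The thresholds in the minimal sketch below are illustrative starting points, not Hitachi-published limits.

```python
# Minimal sketch: interpret esxtop/resxtop latency counters for one LUN.
# Threshold values are illustrative starting points, not published limits.

def diagnose(davg_ms, kavg_ms, qued):
    gavg_ms = davg_ms + kavg_ms   # guest-observed latency is roughly device + kernel latency
    findings = [f"GAVG ~ {gavg_ms:.1f} ms"]
    if davg_ms > 20:
        findings.append("high DAVG: latency is coming from the storage path or array")
    if kavg_ms > 2 or qued > 0:
        findings.append("high KAVG/QUED: I/O is queuing in the VMkernel (check queue depths)")
    if len(findings) == 1:
        findings.append("latency looks healthy")
    return "; ".join(findings)

print(diagnose(davg_ms=5.0, kavg_ms=0.1, qued=0))
print(diagnose(davg_ms=8.0, kavg_ms=6.0, qued=12))
```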
SCSI Reservations
SCSI reservation conflicts can cause I/O performance problems and limit access to storage resources. This can occur when multiple ESX hosts access a shared VMFS volume simultaneously during certain operations. The following operations use SCSI reservations:
- Creating templates
- Creating virtual machines, either new or from a template
- Running vMotion
- Powering on virtual machines
- Growing files for virtual machine snapshots
- Allocating space for thin virtual disks
- Adding extents to VMFS volumes
- Changing the VMFS signature
Many of these operations require VMFS metadata locks. Experiencing a few SCSI reservation conflicts is generally acceptable; however, the best practice is to minimize these conflicts. The following conditions can affect the number of reservation conflicts:
- Number of virtual machines per VMFS volume
- Number of ESX hosts accessing a VMFS volume
- Use of virtual machine snapshots

In addition, follow these best practices to minimize SCSI reservation conflicts:
- Do not run VMware Consolidated Backups (VCBs) on multiple virtual machines in parallel on the same VMFS volume.
- Run operations that require SCSI reservations to the shared VMFS volume serially.
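If you suspect reservation contention, the VMkernel log can be scanned for conflict messages. The log path and search string in the minimal sketch below are assumptions that vary by ESX version and logging configuration; adjust them to match what your hosts actually record.

```python
# Minimal sketch: count SCSI reservation conflict messages in an ESX VMkernel log.
# The log path and search string are assumptions; adjust them for your ESX
# version and logging configuration.
import sys

LOG_PATH = "/var/log/vmkernel"       # assumed ESX 4.x service console log location
PATTERN = "RESERVATION CONFLICT"     # assumed message text; match your logs

def count_conflicts(path=LOG_PATH, pattern=PATTERN):
    hits = 0
    with open(path, errors="ignore") as log:
        for line in log:
            if pattern in line.upper():
                hits += 1
    return hits

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else LOG_PATH
    print(f"{count_conflicts(path)} reservation conflict messages in {path}")
```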
Table 5. VMkernel Advanced Disk Parameters

Disk.BandwidthCap: Limit on disk bandwidth (KB/s) usage.
Disk.ThroughputCap: Limit on disk throughput (IO/s) usage.
Disk.SectorMaxDiff: Distance in sectors within which I/O from a VM is considered sequential. Sequential I/O is given higher priority for the next I/O slot.
Disk.SchedQuantum: Number of consecutive requests from one world (VM). Default: 8.
Disk.SchedNumReqOutstanding: Number of outstanding commands to a target with competing worlds (VMs). Default: 32.
Disk.SchedQControlSeqReqs: Number of consecutive requests from a VM required to raise the outstanding commands to the maximum. Default: 128.
Disk.SchedQControlVMSwitches: Number of switches between commands issued by different VMs required to reduce outstanding commands to SchedNumReqOutstanding.
Disk.DiskMaxIOSize: Maximum disk read/write I/O size before splitting (in KB). Default: 32767.
Conclusion
This white paper describes best practices for deploying vSphere 4 on the Virtual Storage Platform. Following these best practices helps to ensure that your infrastructure is robust, offering high performance, scalability, high availability, ease of management, better resource utilization and increased uptime. Table 6 lists best practices for optimizing the Hitachi Virtual Storage Platform for vSphere 4 environments.
Table 6. Best Practices for Hitachi Virtual Storage Platform
- Use Hitachi Dynamic Provisioning software with DRS to distribute loads on the storage system and ESX hosts.
- Use a minimum of two host HBA ports.
- Use at least two Fibre Channel switches or a director class switch.
- Use at least two Fibre Channel ports on the Virtual Storage Platform, one port for each storage cluster.
- Use single-initiator zones.
- For a standalone ESX host, configure one host group per ESX host on each storage cluster. For clustered ESX hosts, configure one host group per ESX cluster on each storage cluster.
- Create the VM template on a zeroedthick format virtual disk.
- Use the default zeroedthick format virtual disk. Use zeroedthick virtual disks when thin provisioning is required.
- When using Dynamic Provisioning software for cost savings, do not convert a thin format virtual disk to thick using the vCenter GUI's Inflate option.

Dynamic Tiering:
- Use a monitoring cycle long enough to collect IOPH on a typical workload.
- Use manual mode to monitor and relocate for periods of time greater than 24 hours.
- Use manual mode for initial page distribution or when new virtual disks are created and relocation is expected to take many hours.
- Use automatic mode when small changes in page allocation are expected.

Dynamic Provisioning:
- Use the eagerzeroedthick virtual disk format to prevent warm-up anomalies.
- Size the Dynamic Provisioning pools according to the I/O requirements of the virtual disk and application. When larger Dynamic Provisioning pools are not possible, separate sequential and random workloads on different Dynamic Provisioning pools. For applications that use log recovery, separate the logs from the database on different Dynamic Provisioning pools.
- If minimizing the time to create the virtual disk is more important than maximizing initial write performance, use the zeroedthick virtual disk format. If maximizing initial write performance is more important than minimizing the time required to create the virtual disk, use the eagerzeroedthick format.

Scalability:
- Configure for performance first, then capacity.
- Aggregate application I/O requirements, but take care not to exceed the capability of the RAID group.
- Make configuration choices based on I/O workload.
- Distribute workloads to other RAID groups.
Table 7 lists best practices for using external storage with Universal Volume Manager on the Virtual Storage Platform.
Table 7. Best Practices for External Storage with the Hitachi Virtual Storage Platform
- Use Hitachi Dynamic Provisioning software on the Virtual Storage Platform for best performance and space utilization.
- Use a minimum of two Fibre Channel connections from the external storage system to the Virtual Storage Platform.
- Use at least two Fibre Channel switches or a director class switch.
- Set Cache Mode to Disable.
- For random workloads, increase the external LU queue depth from the default of 8 to 32 or 64.
Table 8 lists best practices for optimizing vSphere 4.1 for the Hitachi Virtual Storage Platform.
Table 8. Best Practices for vSphere 4.1
- Use round robin multipathing (VMW_PSP_RR).
- Set the LU queue depth to no more than 32 for SAS drives.
- Set the LU queue depth to no more than 16 for SATA drives.
- Use the default value of 32 for the ESX VMkernel advanced parameter Disk.SchedNumReqOutstanding.
- Reduce the number of virtual machines per VMFS volume.
- Reduce the number of ESX hosts accessing a VMFS volume.
- Minimize the use of virtual machine snapshots.
- Avoid running VMware Consolidated Backups (VCBs) on multiple virtual machines in parallel on the same VMFS volume.
- Run operations that require SCSI reservations to the shared VMFS volume serially.

Scalability:
- Configure for performance first, then capacity.
- Aggregate application I/O requirements, but take care not to exceed the capability of the RAID group.
- Make configuration choices based on I/O workload.
- Do not use different shares or IOPS settings on VMFS datastores that share the same underlying storage resources, because this can produce uneven I/O results.
Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services web site. Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources web site. Click the Product Demos tab for a list of available recorded demonstrations. For more information about Hitachi products and services, contact your sales representative or channel partner or visit the Hitachi Data Systems web site.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks and company names mentioned in this document are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

Hitachi Data Systems Corporation 2011. All Rights Reserved. AS-059-01, January 2011

Corporate Headquarters
750 Central Expressway, Santa Clara, California 95050-2627 USA
www.hds.com

Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com