White paper
Table of contents
Executive summary
Target audience
Related documentation
ThP overview
ThP and its value
When configuring ThP on the XP24000/XP20000, you SHOULD:
When configuring ThP on the XP24000/XP20000, you SHOULD AVOID:
Current ThP limitations and guidelines
ThP Thresholds and Alarms
  The ThP Pool Threshold
  The ThP Volume Threshold
  Setup Email Notification for ThP Alarms
How to control the run-away process to prevent unnecessary pool usage
  Challenges
  Proposed solution
  Benefits
Host File System Considerations
ThP Pool design recommendations
ThP V-VOL design recommendations
Pool protection
  Double disk failure concerns
  Formula for mean time to data loss (MTTDL)
  Backup and recovery
  DMT protection
    DMT Backup Guideline
    Impact of DMT Backup
      Scenario 1: 100% sequential write
      Scenario 2: 100% random write (one I/O per ThP page, worst case)
Oracle 11g and ThP
  Auto-Extend
  Best practice
VMware support using ThP
ThP with External Storage (ES)
ThP usage with partitioning
ThP with Auto LUN
ThP with Business Copy
ThP Snapshot Combination
ThP Continuous Access Combination
ThP shredding
ThP Pool space reclaiming
  Zero unused disk space
  Reclaim unused disk space
ThP V-VOL expansion
  Best practices
Online LUN expansion
  Expanding V-VOL using HP-UX
  Expanding V-VOL using OVMS
Using CVAE CLI to create ThP Pool
Using CVAE CLI to create ThP V-VOLs
Additional ThP related CLI commands
Appendix
  ThP Operation sequence
  ThP Combinations with other Program Products
  ThP Pool Volume specification
  Restrictions on ThP Pool Volume
  ThP Pool specifications
  ThP Volume specifications
  Service information messages (SIM)
  Glossary
For more information
Executive summary
The XP24000/XP20000 introduced a new storage allocation concept called Thin Provisioning (ThP). Thin-provisioned storage allows the array administrator to pre-plan user capacity needs and allocate virtual storage based on the planned future capacity, while physically consuming only the disk space the user is actually accessing. As a result, administrators no longer have to concern themselves with storage that has been allocated but is not currently in use. This white paper provides an overview of the best practices for setting up XP Thin Provisioning (ThP) on the XP24000/XP20000 array. The reader should have previous experience or knowledge regarding provisioning of an XP array and the tools used to provision it, such as Remote Web Console and XP Command View Advanced Edition software. Additionally, the reader should be familiar with replication using XP Business Copy and other program products.
Target audience
The document is intended for customers who are considering the setup of ThP on the XP24000/ XP20000 array and have experience with the general setup and use of the previous XP array generations using the Remote Web Console and XP Command View AE. If the user is interested in using ThP with replication, they should already have experience using XP Business Copy and Raid Manager.
Related documentation
This paper is not intended to replace or substitute for the installation, configuration, and troubleshooting guides. Therefore, in addition to this white paper, please refer to the other documents for this product:
  HP StorageWorks XP Thin Provisioning Installation Guide
  HP StorageWorks XP Thin Provisioning Configuration Guide
  HP StorageWorks XP Thin Provisioning Release Notes
  HP StorageWorks XP Thin Provisioning White Papers
  HP StorageWorks RWC Guide
  HP StorageWorks XP Command View AE
  HP StorageWorks XP24000/XP20000 SNMP Agent Reference Guide
These and other HP documents can be found on the HP Web site: http://www.hp.com/support
ThP overview
System administrators typically provide themselves with much more storage than is needed for various applications because they plan ahead for growth. For instance, an application may require five volumes with 650 GB of total actual data, but based on some analysis or at the request of the department, the system administrator has created a 3 TB volume, allowing for data growth. If a volume is created with 500 GB of space, this space is typically dedicated to that application volume and no other application can use it. However, in many cases the full 500 GB is never used, so the remainder is essentially wasted. This is a major problem with managing storage capacity and is often referred to as stranded storage.
The inefficiencies of traditional storage provisioning can negatively impact capital costs and storage administration resources. The most obvious issue is the amount of storage that goes unused and therefore increases the total cost of ownership. Additionally, since this allocated but unused storage capacity cannot typically be reclaimed for other applications, customers have to buy more storage capacity as their environments grow, increasing cost even further. At some point, customers may actually be required to buy a completely new storage system in addition to the one they have in place.
Results from multiple surveys conducted by several analysts and storage companies targeting enterprise storage have uncovered several limitations of traditional storage provisioning methods. The highlights of the surveys are:
  Over 50% of the customers were aware that they had stranded and unused storage capacity due to inefficient provisioning methods.
  Over half of these customers had between 31% and 50% of their storage stranded and unused. For example, if they had 10 TB of storage capacity, then 3.1 to 5 TB was stranded.
  Almost half of the users had to buy an additional storage system (array) because they could not utilize their stranded storage. Although these customers had unused storage capacity they had already paid for, they needed to buy a new storage system to meet the needs of their business.
  Close to one third of users are planning to buy an additional storage system in the next 12 months because they cannot access their stranded storage.
  Over 75% of users felt that storage provisioning was a time and resource drain on their IT organizations.
[Figure: Traditional provisioning vs. ThP. A 1 TB ThP Pool serving volumes with actual data of 175 GB + 200 GB + 75 GB + 50 GB (650 GB total actual data).]
These are ways in which ThP can reduce the cost of ownership and significantly accelerate return on investment (ROI):
Advantage: Simplified volume design

Description                                                             Notes
Smooth implementation of logical volume system without physical format  The format of the pool volume is required
Logical volume design independent of physical configuration             Dependent on a particular host environment
Actual capacity design independent of logical volume configuration      Required to create a pool with multiple parity groups
This keeps the pool layout optimized from an LDEV space consumption perspective. Automatic load balancing is a slow, low-level background process. Assuming that you have equally sized LDEVs, the remaining 10% of each LDEV will be used along with the newly added LDEVs, maintaining a reasonable pool page allocation performance balance. For better performance predictability, pool life, space manageability, and space allocation dependability. It will provide a full backup of the ThP system area on the SVP.
It will simplify identification and administration. Similarly, the Snapshot volumes should follow the same principle. A ThP pool volume is divided into 42 MB pages as soon as it is added to the pool; any fraction that can't make a full 42 MB page is wasted. Similarly, each ThP V-VOL gets assigned 42 MB pages as necessary, even if the V-VOL needs less than a full 42 MB page. You need a host to receive the ThP alarms when a threshold is exceeded.
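The 42 MB page granularity described above comes down to simple floor/ceiling arithmetic. A minimal sketch (the helper names are mine; only the 42 MB page size comes from this paper):

```python
PAGE_MB = 42  # ThP page size on the XP24000/XP20000, per this paper

def pool_pages(ldev_mb: int) -> int:
    # A pool volume contributes only whole 42 MB pages; any fraction is wasted.
    return ldev_mb // PAGE_MB

def vvol_pages(written_mb: int) -> int:
    # A V-VOL consumes whole 42 MB pages, even for a smaller write region.
    return -(-written_mb // PAGE_MB)  # ceiling division

# A 100 MB pool LDEV yields 2 usable pages (16 MB wasted),
# while writing just 1 MB to a fresh V-VOL still allocates one full page.
print(pool_pages(100), vvol_pages(1))  # -> 2 1
```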
Recommendation: For ThP V-VOLs that were previously used but are no longer needed, it is no longer necessary to perform a V-VOL format before releasing them from the pool, because the pages will be zeroed automatically upon release.
Reason: This ensures that all of the pages returned to the pool free_page_queue contain all-zero data. Volume Shredding will write the shred data pattern first. You could create a Business Copy pair from a brand-new V-VOL (P-VOL) to the existing V-VOL (S-VOL) to effectively write zeros to the S-VOL's pages. Note: SOM 703 is used to skip the processing that clears the data area to zero; it is off by default, and it is highly recommended to leave it OFF.
Mode 703 = ON: The zero-clear processing is skipped. Mode 703 = OFF (default): The zero-clear processing is not skipped.
Since the dynamic mapping table (DMT) used by ThP resides in the fifth shared memory (SM) set, make sure that the first four SM sets are installed beforehand. ThP is a separate application from other program products, and it uses its own fixed SM space.
If you must restore an image (sector-based) backup onto a ThP V-VOL, make sure there is enough space in the pool for the entire volume before starting the restore. Then run Discard Zero Data to reclaim unused space after each V-VOL is restored. Image backups are usually full-volume backups that don't differentiate between data and free space (for example, a 10 GB volume with 1 GB of actual data produces a 10 GB backup image, while a file backup is only a 1 GB backup set). If an image backup is restored onto a ThP V-VOL, the full virtual volume space will be allocated in the ThP pool.
A file backup of a ThP volume may take too long (for example, when the ThP volume contains a large number of small files). There are two ways to mitigate this:
1. When possible, while using ThP volumes and creating files, create fewer, larger files (for example, when operating a DBMS such as Oracle, increase the size of existing files by using a file-extension function, such as Oracle's auto-extend, rather than creating new files for tablespace extension of the DB).
2. Use Snapshot on the ThP volume, and then perform a tape backup from that Snapshot.
This will result in undesirable pool space allocation (OS and tool dependent). Make sure the pool has enough space for the V-VOL before you start a defrag, and discard zero data after the defrag operation. This will result in undesirable space allocation. Discard zero data to reclaim space after low-level formatting.
Don't use init/erase or any low-level formatting that writes zero data to the volume, unless you plan on running Discard Zero Data to reclaim the unused space.
4. Repeat the process, setting the ThP pool alarm threshold at 50%.
5. Measure the time it takes the ThP volumes to consume the extra 10% of the ThP pool.
6. Repeat the process, setting the ThP pool alarm threshold at 60%.
7. Measure the time it takes the ThP volumes to consume the extra 10% of the ThP pool.
The table below represents the expected response of the ThP pool volume when the ThP pool has insufficient space to allocate to the V-VOLs.
Access area            I/O type   Reported content
Page unassigned area   Read       Illegal request
Page unassigned area   Write      Write protect
Page assigned area     Read       Read enable
Page assigned area     Write      Write enable
1. If the ThP volume is used in a test or non-critical environment, set the ThP volume threshold at 5%.
2. If the ThP volume is used in a production environment with critical data, set the ThP volume threshold at 100%.
3. If the ThP volume is used in a production environment with extremely critical data, set the ThP volume threshold at 200% or higher (you will be warned much earlier than in the prior cases and have more time to react).
Please keep in mind that some OS and file system combinations allocate more space than others when creating a new file system. For those types of file systems (regardless of whether it's for production or testing), please set your thresholds at no lower than 50% and apply the above rules. A ThP pool is likely shared by several V-VOLs, so it's preferable that the pool capacity be much larger than a single free V-VOL capacity.
DATE : 01/23/2009  TIME : 12:50:41  Machine : Unknown(Seq.# 10038)
RefCode : 630032  Detail : The TP VOL threshold was exceeded
Challenges
It's not recommended to defragment a file system on a ThP V-VOL, as it may result in unnecessary page allocation. It can be very difficult to predict whether a user may cause a ThP V-VOL page allocation by mistake, and such mistakes can be quite difficult to correct after they occur. There are no safeguards against run-away applications' space allocation, so be careful while using ThP V-VOLs.
Proposed solution
For file systems that can grow their volumes on the fly (for example, Windows and most advanced UNIX and Linux file systems), large volumes can be presented via ThP and consumed over an application's life cycle (for example, a 2 TB ThP V-VOL to be used over the course of the next 24 months):
1. Configure the ThP V-VOL normally for the full 2 TB size.
2. Present the 2 TB V-VOL to the host.
3. Partition the 2 TB volume into four equal 500 GB host partitions.
4. Present the 1st partition to your file system volume group.
5. Let the application use it for the 1st six months.
6. As needed, the server admin can add more partitions or utilize scripting to perform the LVM expansion.

Benefits

1. This file system can never allocate more than 500 GB from the ThP pool unless permitted.
2. The pool consumption becomes predictable over time, resulting in better ThP management.
3. The storage administrator is less involved in the volume's growth, but is still notified in time to react.
[Table: OS and file system support matrix for ThP (HP-UX, Solaris, AIX, and others); the column structure was lost in extraction. The footnote markers 1-6 refer to the notes below.]
1. At FS creation time, the capacity of the pool is consumed up to 100% of the ThP V-VOL capacity.
2. At FS creation time, the capacity of the pool is consumed up to 30% of the ThP V-VOL capacity.
3. ZFS zpool scrub is not recommended because it will force the volume to fully allocate.
4. If VMware eagerzeroedthick formatting is used, run Discard Zero Data from the RWC so the pool reclaims pages that have been zeroed.
5. VMware thin formatting can result in less-than-optimum new page allocation when multiple V-VOLs are used in a single VMFS volume.
6. VMFS does not support online volume expansion. Online volume expansion will work if the volume is presented as a raw device to the guest OS, and if the guest OS supports it.
For example: a pool composed of RAID 1 SSD would be considered the highest-performing tier, and a pool with RAID 1 FC would be considered the safest tier.
3. Consider the amount of front-end bandwidth you will allocate to the V-VOLs bound to the pool. Then choose the HDD type, RAID level, and number of parity groups per pool to match that front-end bandwidth.
4. Divide the parity group into LDEVs equal in size to the data disks in the group. For example, RAID 5 3D+1P would be divided into 3 LDEVs. If the data disk size does not divide evenly into 42 MB pages, create as many LDEVs as possible in 42 MB increments and leave the remaining space in the final LDEV. For example, 3D+1P of 144 GB drives would yield 2 LDEVs of 147,462 MB, and the last LDEV would be 147,444 MB.
5. When adding multiple parity groups to the pool, choose parity groups from different DKA sets to maximize performance.
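The 42 MB LDEV split in step 4 above can be reproduced with a short sketch. Taking the usable size per data disk as 147,456 MB (144 GiB expressed in MB) is my assumption for how the paper's 147,462/147,444 MB figures arise, and the function name is illustrative:

```python
PAGE_MB = 42  # ThP page size

def split_parity_group(data_disks: int, disk_mb: int) -> list[int]:
    """Split a parity group's usable space into one LDEV per data disk,
    sizing all but the last LDEV as whole multiples of the 42 MB page
    and leaving the remainder in the final LDEV."""
    total = data_disks * disk_mb
    per_ldev = -(-disk_mb // PAGE_MB) * PAGE_MB   # round up to a page boundary
    ldevs = [per_ldev] * (data_disks - 1)
    ldevs.append(total - sum(ldevs))              # remainder (with any waste) goes last
    return ldevs

# 3D+1P of 144 GB drives, assuming 147,456 MB usable per data disk:
print(split_parity_group(3, 147_456))  # -> [147462, 147462, 147444]
```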
Item                      Specification
Emulation type            OPEN-V
RAID level                All XP24000/XP20000 supported levels (including parity group concatenation)
HDD type                  All XP24000/XP20000 supported HDD types
Creation                  By LDEV
Capacity of Pool Volume   8 GB to the maximum allowable LDEV size on the array (approximately 3 TB today)
2. Create one ThP V-VOL per V-VOL group so that the V-VOL is guaranteed room for expansion at a later time. System mode 726 set to ON will force users to create one V-VOL per V-VOL group.
3. Do not over-provision the V-VOL unless the customer absolutely demands a larger volume. You can expand the V-VOL online at a later time if needed.
4. The ThP V-VOL is shown with an "X" notation to differentiate it from other XP volumes. Example: ThP V-VOL (00:FF:00 X), Snapshot volume (00:EE:00 V), External storage volume (00:AA:00 #), Regular volume (00:00:00).
Pool protection
Double disk failure concerns
The larger your pool becomes, the greater the risk of the highly unlikely event of a double disk failure in the same parity group. Keep in mind that this risk is equivalent to the risk encountered when using LVM, VxFS, or any other host-based virtualization; Thin Provisioning does not add any additional double disk failure risk. The XP uses Dynamic Sparing to help prevent double disk failure. If you lose a drive in an array group, the spare kicks in and data is restored. Dynamic Sparing is a method of removing a disk drive from service if its read/write errors exceed a certain threshold. On normal read and write operations, the array keeps track of the number of errors that occur. If the error threshold is reached, the system considers that disk drive likely to cause an unrecoverable error and automatically copies the data from that drive to a spare drive. The odds of a complete double drive failure are extremely slim because the rebuild of the spare drive will complete before the second drive fails. If a double drive failure does occur, you will receive a 623XXX alert (XXX is the pool ID) and the pool will go into a Blockade state, and you will have to recover the pool from backup. To avoid double drive failure completely, use RAID 6. Additionally, for your database, place the data files in one pool and the log backups in another.
NOTE: A single drive failure in multiple parity groups within the same pool will not affect the overall performance of the pool.
For RAID 6:
MTTDL = MTBF³ / (N × (G−1) × (G−2) × MTTR²)
where N is the number of drives in a pool, G is the number of drives in a parity group, MTBF (or MTTF) is the drive mean time between failures, and MTTR is the correction copy time. In the case of a 300 GB 15K disk in 7+1 and 256 drives in a pool, MTTDL is 45,553,935.96 hours, or 1,898,080.65 days, or 271,154.37 weeks, or 63,059.16 months. A drive MTBF of 520,833 hours is provided by the manufacturer for calculating a single drive failure. The formula is based upon years of array operations supporting the assertion that disk failures are independent and completely uncorrelated. Aging of drives may influence these results.
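As a cross-check, the formula can be evaluated directly. This is a sketch: the 24-hour correction-copy time below is an illustrative assumption, since the MTTR value that reproduces the paper's 45.5-million-hour figure is not stated:

```python
def mttdl_raid6(mtbf_h: float, n_drives: int, group: int, mttr_h: float) -> float:
    """Mean time to data loss for RAID 6, where data loss requires a
    triple failure: MTTDL = MTBF^3 / (N * (G-1) * (G-2) * MTTR^2)."""
    return mtbf_h ** 3 / (n_drives * (group - 1) * (group - 2) * mttr_h ** 2)

# Manufacturer MTBF of 520,833 h, 256 drives per pool, 7+1 parity groups,
# and an assumed 24 h correction-copy time:
print(f"{mttdl_raid6(520_833, 256, 8, 24):.3e} hours")
```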
DMT protection
The dynamic mapping table (DMT) contains all of the pointers from the V-VOLs to the spindles on the disk. If the DMT is lost, all of the data is effectively lost; therefore, a protection mechanism was added to prevent losing the DMT. The DMT is regularly saved in the reserved area of the ThP Pool, so it is recoverable even after a worst-case power failure. During an orderly power down, the DMT is also saved to the System Disk. With an unexpected power loss, once power comes back: while the batteries are still good, the DMT is intact and all is well; even after the batteries have failed, all is still well, because the DMT saved on the ThP pool HDDs will be automatically restored upon power recovery. The restoration process can add several minutes to the power-up process.
DMT Backup Guideline
Back-up area:
  Size of storage area for back-up: up to 4 GB per pool
  Location of back-up area: head of the pool
  4 GB can back up enough metadata for 10 PB of user data

Impact of DMT Backup
For the following scenarios:
  420 GB ThP V-VOL = 10K ThP pages (42 MB each)
  Assume the host sustains ~2K IOPS to the ThP V-VOL with an 8K I/O size; 42 MB / 8K = 5,250 8K I/Os per page
  Assume a new page allocation causes a slow I/O; an already-allocated page is a normal fast I/O

Scenario 1: 100% sequential write
What should you expect in 100% sequential write?
1. One I/O out of 5,250 will take longer to be accepted.
2. The total number of slow I/Os = 10K for the lifetime of the V-VOL (equal to the total number of ThP pages).
3. If the host consumes the 420 GB space in 6 months, the total number of slow I/Os is only 10,000 out of 55+ million 8K I/Os spread over 6 months; all other I/Os run at the fast I/O rate.
Scenario 2: 100% random write (one I/O per ThP page, worst case)
What should you expect in random write?
1. In the worst case, ThP allocates one page per I/O due to the host's random I/O address pattern.
2. The total number of slow I/Os = 10K for the lifetime of the V-VOL (equal to the total number of ThP pages).
3. With a host I/O rate of 2K IOPS, the 420 GB volume of 10K pages can be fully allocated in less than five seconds at the slow I/O rate.
4. After the ThP V-VOL pages are fully allocated, the remaining 55+ million I/Os will go back up to the normal fast I/O rate.
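The page arithmetic behind both scenarios can be restated in a few lines, using decimal units (42 MB = 42,000 KB), as the per-page figures in this section do; note the paper's "55+ million" total uses binary gigabytes, while decimal units give 52.5 million. The variable names are mine:

```python
PAGE_MB, IO_KB, HOST_IOPS = 42, 8, 2000

vvol_gb = 420
pages = vvol_gb * 1000 // PAGE_MB          # 10,000 ThP pages in the V-VOL
ios_per_page = PAGE_MB * 1000 // IO_KB     # 5,250 8K I/Os fit in one 42 MB page
slow_ios = pages                           # one slow I/O per page allocation, over the V-VOL's lifetime
total_ios = pages * ios_per_page           # 8K I/Os needed to fill the whole volume
worst_case_s = pages / HOST_IOPS           # random write: every page can be allocated this fast

print(pages, ios_per_page, total_ios, worst_case_s)  # -> 10000 5250 52500000 5.0
```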
Best practice
When selecting the optimal auto-extend size, consider the ASM disk group AU (Allocation Unit), the data growth rate, and the ThP page size. It is not recommended to use a ThP V-VOL for swap space or redo-log space, because such volumes will not take advantage of ThP functionality.
Use the closest possible increment to 42 MB. In the example, only 1 GB is available as the smallest increment unit.
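A short sketch shows why an extend increment near a multiple of 42 MB wastes the least pool space. The numbers are illustrative; at worst the over-allocation is one page per extend, until data later fills it:

```python
PAGE_MB = 42  # ThP page size

def pool_cost_mb(extend_mb: int) -> int:
    # Each auto-extend touches new address space; the pool allocates whole 42 MB pages.
    pages = -(-extend_mb // PAGE_MB)  # ceiling division
    return pages * PAGE_MB

# A 1 GB (1024 MB) increment consumes 25 pages = 1050 MB of pool space
# (26 MB over-allocated at worst); a 1008 MB increment (24 x 42 MB) maps exactly.
print(pool_cost_mb(1024), pool_cost_mb(1008))  # -> 1050 1008
```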
Test Environment:
Server:    HP ML370 G5
OS:        ESX 3.0.2
Guest OS:  Windows 2003 R2 Standard 32-bit Edition SP2
XP24K:     LUN0-3, ThP VOL, size 100 GB/LUN
1. If the pool uses external LUNs from multiple arrays and the connection to one array is lost, the entire pool will fail.
2. Try to utilize multiple LUNs from a single external storage array to help ThP performance.
3. Follow all of the internal Thin Provisioning guidelines.
4. The ES ThP implementation inherits the performance and availability of the ES array.
[Figure: ThP V-VOLs on the XP array mapped to external LUNs on the ES array]
[Figure: Multiple ThP VOLs within an SLPR storage partition]
CVAE         Replication Manager Plug-in
Supported    Supported    Ok, but run discard zero data
Normal-VOL *2    ThP-VOL *3
*1 Comment: Given that the pair is in a suspended state and new data is written to the P-VOL (primary volume of the replica), a pairresync restore operation will cause the S-VOL data to overwrite the P-VOL as expected. The remaining difference data that was written to the P-VOL during the suspend state will be set to zero, but the unused space will not be released to free space, so the P-VOL usage % will not decrease to match the S-VOL usage %. The user can manually start Discard Zero Data from the Remote Web Console to return the unused space back to the pool after suspending the pair.
*2 Comment: A pairresync restore operation will not increase the ThP P-VOL used capacity to 100% to match the Normal S-VOL. Instead, the array will only copy the changes from the S-VOL to the P-VOL.
*3 Comment: The V-VOL will fully allocate before you can run Discard Zero Data; therefore, carefully copy to a select number of V-VOLs at a time to prevent filling up the pool, run Discard Zero Data, and then continue with the remaining V-VOLs. However, it is possible that no pages will be entirely zero; therefore, be careful not to overprovision if you are unsure of the content.
[Figure: A Snapshot V-VOL and a ThP V-VOL, each with its own DMT, backed by a Snapshot Pool and ThP Pool(s)] It is OK to copy from many ThP Pools to a single Snapshot Pool.
A ThP Pool and a Snapshot Pool are both required. Actual data is stored in the ThP Pool; metadata and difference data are in the Snapshot Pool. Data copy is performed between both pools. When a Snapshot S-VOL is accessed and the data does not exist in the Snapshot Pool, the DKC reads the data from the ThP Pool. Because you must create both a ThP Pool and a Snapshot Pool, take this into consideration as you design your system: it will reduce the total number of pools available.
[Figure: Continuous Access pairing of a P-VOL and an S-VOL ThP V-VOL, each with its own DMT and pool]
Local and remote ThP Pools are required for ThP-to-ThP pairing. The Continuous Access link is established through the V-VOLs, but data copy is performed between the pools. Local and remote pools do not have to be the same size. All Continuous Access functionality behaves as if the ThP V-VOL were a normal volume. Supported with Continuous Access Sync and Continuous Access Journal. The Continuous Access pair status will change to PSUE when the S-VOL pool fills before the P-VOL pool and a write I/O is attempted on the P-VOL.
ThP shredding
Shredding capability applies to a ThP VOL just as to a normal VOL. The specified data pattern is written to the assigned pages; this logic is not applied to unassigned pages.
GB free; therefore, the ratio is 200% (400 GB/200 GB). If you expand the V-VOL so that its free space grows to 700 GB, the ratio drops to 57% (400 GB/700 GB), which is still OK because the V-VOL threshold is set to 50%.
2. The V-VOL has 300 GB free. The user tries to expand it out to 1200 GB, but the expansion will fail because the ratio drops to 40%, below the threshold.
3. If the V-VOL threshold is set to 250%, and the V-VOL is 500 GB with 200 GB free, the ratio is 200%. The user will not be able to expand because 200% is below 250%.
Basically, you can control whether or not the V-VOL can expand by setting the V-VOL threshold higher than the free capacity ratio.
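The expansion rule illustrated by these examples can be sketched as a ratio check. The exact semantics (whether the comparison is strict, and that the ratio uses post-expansion V-VOL free space) are my reading of the examples, not a documented API:

```python
def can_expand(pool_free_gb: float, vvol_free_gb: float,
               added_gb: float, threshold_pct: float) -> bool:
    """Allow expansion only while pool free capacity divided by the V-VOL's
    post-expansion free capacity stays at or above the V-VOL threshold."""
    ratio_pct = 100 * pool_free_gb / (vvol_free_gb + added_gb)
    return ratio_pct >= threshold_pct

print(can_expand(400, 200, 500, 50))   # example 1: ratio 57% >= 50%  -> True
print(can_expand(400, 300, 700, 50))   # example 2: ratio 40% < 50%   -> False
print(can_expand(400, 200, 0, 250))    # example 3: ratio 200% < 250% -> False
```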
Similarly to CVS with normal LDEVs in a parity group, free space must exist immediately below the V-VOL in the V-VOL group. In the example below, V-VOL group X1-1 has 3 V-VOLs, each 1 GB in size. The group has free space scattered between the V-VOLs because the user deleted V-VOLs to return space to the pool. The user can expand:
1. 00:10:00 an additional 1 GB. If you delete 00:10:02 and 00:10:67, you can grow up to 4 TB. 2. 00:10:02 can grow up to 103 GB. If you delete 00:10:67, you can grow up to 4 TB104 GB. 3. 00:10:67 can grow up to the remaining 4TB105 GB.
Best practices
1. Do not set your V-VOL threshold excessively high unless you are positive that the pool will have enough free capacity.
2. Only create one V-VOL per V-VOL group so that your V-VOL can expand up to the maximum 4 TB size at any time. The number of V-VOL groups will not affect the maximum number of LDEVs; if you want 65K V-VOLs, you can create 65K V-VOL groups with one V-VOL in each.
> vgmodify -v -r vg03
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf

Current Volume Group settings:
    Max LV                    255
    Max PV                     16
    Max PE per PV            4809
    PE Size (Mbytes)            4
    VGRA Size (Kbytes)        656

/dev/rdisk/disk69
Warning: Max_PE_per_PV for the volume group (4809) too small for this PV (9417).
Using only 4809 PEs from this physical volume.
"/dev/rdisk/disk69" size changed from 19701760 to 38576128kb

An update to the Volume Group IS required

New Volume Group settings:
    Max LV                    255
    Max PV                     16
    Max PE per PV            4809
    PE Size (Mbytes)            4
    VGRA Size (Kbytes)        656
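As a quick sanity check on the numbers in this listing, the raw extent counts can be recomputed from the reported PV sizes. This is a sketch assuming the 4 MB PE size shown in the output; the exact on-disk metadata overhead is internal to LVM:

```python
# Rough check of the extent arithmetic in the vgmodify listing above,
# assuming 4 MB physical extents (PE Size from the output; sizes are in kB).

PE_KB = 4 * 1024

old_kb, new_kb = 19701760, 38576128   # PV size before / after LUN expansion
print(old_kb // PE_KB)   # 4810 raw extents fit in the old size
print(new_kb // PE_KB)   # 9418 raw extents fit in the new size
# vgmodify counts 9417 PEs for the grown PV (a little is likely lost to
# on-disk metadata), but Max_PE_per_PV is still 4809, so only 4809 PEs
# from this PV are used, as the warning states.
```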
$ init/limit $1$dga368 thpdev
%INIT-I-DEFCLUSTER, value for /CLUSTER defaulted to 16
$ mount/share $1$dga368
_Label: thpdev
_Log name:
%MOUNT-I-MOUNTED, THPDEV mounted on _$1$DGA368: (CAALMG)
$ sh dev/full $1$dga368

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted,
file-oriented device, shareable, available to cluster, error logging is
enabled.

    Error count                    8    Operations completed          49257
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID  01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            35603072    Sectors per track                32
    Total cylinders            34769    Tracks per cylinder              32
    Logical Volume Size     35603072    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             35388096    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache 3538809
    File ID cache size            64    Blocks in extent cache            0
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

    Volume Status:  ODS-2, subject to mount verification, file high-water
        marking, write-back caching enabled.

$ HORCMINST == "0"
$ raidvchkdsp -g thpos -v aou
Group  PairVol  Port#    TID LU  Seq#   LDEV#  Used(MB)  LU_CAP(MB)  U(%)  T(%)  PID
thpos  DGA368   CL7-A-0    0  2  25004    368       756       17384     1    70    1
$ raidvchkset -g thpos -d DGA368 -vext 2G
$ raidvchkdsp -g thpos -v aou
Group  PairVol  Port#    TID LU  Seq#   LDEV#  Used(MB)  LU_CAP(MB)  U(%)  T(%)  PID
thpos  DGA368   CL7-A-0    0  2  25004    368       756       19432     1    70    1
$ inqraid $1$dga368
$1$DGA368 -> [ST] CL7-A  Ser =  25004  LDEV =  368  [HP ] [OPEN-V      ]
             CA = SMPL  BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
             A-LUN[PoolID 0001]  SSID = 0x0005
$ sh dev $1$dga368/full

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted,
file-oriented device, shareable, available to cluster, error logging is
enabled.

    Error count                    8    Operations completed          49258
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID  01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            35603072    Sectors per track                32
    Total cylinders            34769    Tracks per cylinder              32
    Logical Volume Size     35603072    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             35388096    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache 3538809
    File ID cache size            64    Blocks in extent cache            0
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

    Volume Status:  ODS-2, subject to mount verification, file high-water
        marking, write-back caching enabled.

$ set vol/size=37932453 $1$dga368:
$ sh dev $1$dga368/full

Disk $1$DGA368: (CAALMG), device type HP OPEN-V, is online, mounted,
file-oriented device, shareable, available to cluster, error logging is
enabled.

    Error count                    9    Operations completed          49341
    Owner process                 ""    Owner UIC                 [RAIDMGR]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                1    Default buffer size             512
    Current preferred CPU Id       1    Fastpath                          1
    WWID  01000010:6006-0E80-0561-AC00-0000-61AC-0000-0170
    Total blocks            39797376    Sectors per track                32
    Total cylinders            38865    Tracks per cylinder              32
    Logical Volume Size     37932453    Expansion Size Limit     2147475456
    Allocation class               1

    Volume label            "THPDEV"    Relative volume number            0
    Cluster size                  16    Transaction count                 1
    Free blocks             37717472    Maximum files allowed      16711679
    Extend quantity                5    Mount count                       1
    Mount status             Process    Cache name "_CAALMG$DKC100:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache 3771747
    File ID cache size            64    Blocks in extent cache      2329376
    Quota cache size               0    Maximum buffers in FCP cache   4114
    Volume owner UIC       [RAIDMGR]    Vol Prot  S:RWCD,O:RWCD,G:RWCD,W:RWCD

    Volume Status:  ODS-2, subject to mount verification, file high-water
        marking, write-back caching enabled.
$
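The sizes in these OpenVMS listings are in 512-byte blocks. A quick conversion (a side calculation, not part of the procedure) shows the volume grew from roughly 17 GiB to roughly 18 GiB, and that the reported Expansion Size Limit of 2147475456 blocks is just under 1 TiB:

```python
# Convert the OpenVMS block counts shown above to GiB (1 block = 512 bytes).

BLOCK = 512

def blocks_to_gib(blocks):
    return blocks * BLOCK / 2**30

print(round(blocks_to_gib(35603072), 2))   # 16.98 GiB before set vol/size
print(round(blocks_to_gib(37932453), 2))   # 18.09 GiB after expansion
print(round(blocks_to_gib(2147475456)))    # 1024 GiB expansion size limit
```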
Appendix
ThP Operation sequence
1. Create a ThP pool volume
2. Add pool volumes to the pool
3. Create a ThP pool
4. Create a SYS area
5. Create a ThP volume

1. Monitor the ThP pool free area
2. Add pool volumes to the pool
3. Expand the ThP pool capacity
4. Expand the SYS area
5. Prohibit ThP volume write

Delete the pool ID
Program Products                            Relationship with ThP volume
LUN Security                                Supported
CVS                                         Supported
Cache Residency Manager                     Supported
HP XP Performance Control                   Supported
Parallel Access Volume (PAV)                Supported
Data Exchange HMDE/FAL/FCU                  Similar to a traditional LDEV
DB Validator                                Similar to a traditional LDEV
Volume Retention Manager for Mainframe      Supported
HP XP Disk/Cache Partition
HP XP External Storage
Program Products                            Relationship with ThP volume
Business Continuity Manager                 Supported
Presenting via Remote Web Console           Supported
Threshold
An alert is reported when N% of the unused capacity of the ThP volume cannot be absorbed by the available free pool. A user-configured alert warning can be provided via SNMP trap for each LUN. The threshold is always 80%.

The possible number of volumes that may be defined
Define one V-VOL per V-VOL group so that volume expansion will not be impeded.

ThP VOL capacity
46 MB to 4 TB (1 kB = 1024 B). Create V-VOL sizes as an integral multiple of 42 MB, since pages are allocated in 42 MB increments.

ThP volume emulation type
OPEN-V. It is possible to check the RAID level and pool ID via SCSI Inquiry.
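Following the 42 MB page guideline above, a requested V-VOL size can be rounded up to a whole number of pages so that no partially used page is allocated. This is a sketch; the helper is mine, not an array function:

```python
# Round a requested V-VOL size up to an integral multiple of the 42 MB
# ThP page size described above.

PAGE_MB = 42

def aligned_size_mb(requested_mb):
    pages = -(-requested_mb // PAGE_MB)   # ceiling division
    return pages * PAGE_MB

print(aligned_size_mb(100))   # 126 (three 42 MB pages)
print(aligned_size_mb(84))    # 84  (already page-aligned)
```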
Glossary
The following terms, used throughout this white paper, aid in understanding the innovative solutions provided by HP servers and storage.
Term          Definition
Array Group   4 disk drives in a purchase order
BC            Business Copy
CHA           Channel Host Adapter
CHP           Channel Host Processor
CLI           Command Line Interface
CLPR          Cache Logical Partition
CM            Cache Memory
CSW           Cache Switch
CV AE         Command View Advanced Edition
CVS           Custom Volume Set
DCR           Data Cache LUN Residence
Disk Group    See RAID Group
DKA           Disk Control Adapter
DKC           Disk control frame
DKU           Disk array frame
DR            Disaster recovery
DWL           Duplex Write Limit
FC            Fibre Channel
FC AL         Fibre Channel Arbitrated Loop
SSID          Subsystem ID
GB            Gigabyte
Gb            Gigabit
GUI           Graphical user interface
HA            High availability
HBA           Host bus adapter
HDU           Hard Disk box
IOPS          I/Os per second
LDEV          Logical device
LDKC          Logical Disk Control Frame
LUN           SCSI Logical Unit Number
MCU           Main Control Unit
MP            Micro Processor
MTBF          Mean Time Between Failure
MTTDL         Mean Time To Data Loss
MTTF          Mean Time To Failure
Term          Definition
MTTR          Mean Time to Repair
nPar          HP nPartition (hard partition)
OLTP          Online Transaction Processing
Parity group  See RAID Group
PCR           Partial Cache Residence
PSA           Partition Storage Administrator
P-VOL         Primary Volume
QoS           Quality of Service
RAID Group    The set of disks that make up a RAID set
RCU           Remote Control Unit
RWC           Remote Web Console
SA            Storage administrator
SAN           Storage area network
SLA           Service Level Agreement
SLPR          Storage Management Logical Partition
SM            Shared Memory
SPOF          Single point of failure
SS            Snapshot
S-VOL         Secondary volume
SVP           XP array Service Processor
TB            Terabyte
THP           Thin Provisioning
UPS           Uninterruptible power supply
V-VOL         Virtual Volume
WWN           World Wide Name, a unique 64-bit device identifier in a Fibre Channel storage area network