
How to Upgrade SDD or Migrate SDD to SDDPCM on a Virtual I/O Server System on the AIX Platform

Authors: Limei Shaw, Che Lui Shum

02/ 22/ 2007 Version 8.0

Table of Contents
1. Overview
2. Scope
3. Upgrading SDD from Legacy SDD to new SDD on VIO System
   3.1 Procedures with Data Migration (Scenario 1)
      3.1.1 Prepare for Upgrade and Data Migration
         3.1.1.1 On VIO server
         3.1.1.2 On VIO client
      3.1.2 Upgrade SDD host attachment to v1006 or later and SDD to v1620 or later on VIO server
      3.1.3 Create/export virtual target devices on VIO server with new SDD vpath devices supporting unique_id attribute
      3.1.4 Configure virtual devices backed by new vpath devices (with unique_id attribute) on VIO client
      3.1.5 Migrating data on VIO client
         3.1.5.1 Migrating data on VIO client with LVM Access
         3.1.5.2 Migrating data on VIO client with Direct Access
   3.2 Procedures without Data Migration (Scenario 2)
      3.2.1 Single VIO server
      3.2.2 Multiple VIO servers, with no down time on the client system
4. Migrating SDD to SDDPCM on VIO System
   4.1 Migrating from Legacy SDD to SDDPCM
   4.2 Migrating from new SDD to SDDPCM
Trademarks
Notices

1. Overview
Starting with SDD v1620, SDD supports a new ODM attribute called unique_id, which provides additional functionality for virtual target devices on the Virtual I/O Server (VIOS). Throughout this document, we refer to the two types of SDD as follows:

Legacy SDD - SDD versions prior to v1620, which do not support the unique_id attribute.
New SDD - SDD v1620 or later, which supports the unique_id attribute.

On a VIOS, a vpath device may be in use by a virtual target device. After SDD is upgraded, it is strongly recommended that customers migrate their client data from the existing devices to the newly exported devices, to avoid client data loss. This is because virtual target devices created on SDD vpath devices configured with software prior to v1620 use a legacy addressing scheme for client I/O. The unique_id attribute allows the virtual target device to avoid placing meta-data on the physical volume exported to the client; storing meta-data on the physical volume requires client I/O to be mapped so that it does not destroy the meta-data. The difference in addressing scheme is transparent to the client, but if a virtual target device is lost (for example, a system administrator removes all devices on the VIOS), then any newly created device will use the new scheme and the client data is lost. It is not unlikely that a system administrator following a procedure that is no longer valid will destroy the virtual target devices. There are also virtualization features, soon to be released, that are not supported with the legacy addressing scheme.

Because of these differences in the addressing scheme used by the VIOS, proper procedures are required when migrating from legacy SDD to new SDD in order to ensure safe migration of client data.

2. Scope
This document covers the procedures for the following scenarios on a VIOS system:
- Upgrading from legacy SDD to new SDD when client data migration is required
- Upgrading from legacy SDD to new SDD when client data migration is NOT required
- Migrating from legacy SDD to SDDPCM
- Migrating from new SDD to SDDPCM

3. Upgrading SDD from Legacy SDD to new SDD on VIO System


When upgrading from legacy SDD to new SDD on a VIO system, there are several different VIO server/client virtual device configurations in the virtual I/O environment. The following table indicates which configurations require data migration.

Scenario | SCSI target device on the VIOS backed by             | Data migration required?
1        | Legacy SDD vpath device                              | Yes
2        | Logical volume created with legacy SDD vpath device  | No

Scenario 1: A legacy SDD vpath device is used to create a virtual target device (vtscsi) on the VIOS, which is mapped to a virtual initiator device and accessed as a raw device or via LVM on the client. You need to perform the data migration procedure on the VIO client after upgrading legacy SDD to new SDD on the VIOS.

Scenario 2: A logical volume is used to create a virtual target device (vtscsi) on the VIOS, which is mapped to a virtual initiator device and accessed as a raw device or via LVM on the client. Since the virtual target device is created from a logical volume on the VIOS, no data migration is required.

In summary, you only need to perform data migration if you meet ALL of the following criteria:
- Legacy SDD is currently installed on the VIO server, and
- a virtual target device was created with a legacy SDD vpath device as the backing device, and
- you want to upgrade legacy SDD to new SDD, i.e. version 1.6.2.0 or later.

3.1 Procedures with Data Migration (Scenario 1)


3.1.1 Prepare for Upgrade and Data Migration

3.1.1.1 On VIO server

To migrate data from a virtual device backed by a legacy SDD vpath device to a new SDD vpath device with unique_id, you need at least one spare LUN that is not in use. The LUN capacity should be at least the same as the largest physical volume that requires data migration on the VIO client. You can also allocate one spare LUN for each existing LUN to simplify the data migration process. Otherwise, the data migration should proceed from the largest-capacity virtual disk to the smallest one on the client.
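The capacity bookkeeping this preparation calls for can be sketched as a small dry-run shell loop. The vpath names are placeholder assumptions; on a real VIOS you would execute bootinfo -s directly rather than echo it.

```shell
# Dry-run sketch: record the capacity of each legacy vpath before allocating
# spare LUNs. The run() wrapper only prints the commands so the sketch can be
# inspected safely; drop it to execute for real in the VIOS root shell.
run() { echo "+ $*"; }

for vp in vpath0 vpath1; do      # placeholder vpath names
    run bootinfo -s "$vp"        # reports the capacity in MB
done
```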

The following steps assume you have configured the same number of spare LUNs from the storage as the number of legacy SDD vpath devices in use by virtual target devices. We also assume the spare LUNs can be used to create vpath devices of the same or larger capacity than the legacy SDD vpath devices in use by the virtual target devices. You can get the capacity of a vpath device using the bootinfo -s command. For example:

bootinfo -s vpath10

Use the lsmap -all command in the restricted padmin shell to find out which virtual target devices are dependent on which legacy SDD vpath devices, and note the location code of the virtual adapters in the Physloc field. You will need the information displayed by lsmap throughout this procedure; save it in whatever manner is convenient for later retrieval.

3.1.1.2 On VIO client

Back up the data on the VIO client. If the VIO client is mapped to only one VIOS, then after you back up the data, shut down the VIO client partition by running shutdown -F.

3.1.2 Upgrade SDD host attachment to v1006 or later and SDD to v1620 or later on VIO server

Make sure you have followed the instructions in the Prepare for Upgrade and Data Migration section, and have a backup of all client and VIOS partitions, before proceeding.

Check which SDD host attachment package(s) are installed on your host and upgrade them to the following level(s):

devices.fcp.disk.ibm.rte: 1.0.0.6 or later
ibm2105.rte: 32.6.100.27 or later

Install the host attachment at or later than the versions listed above and reboot the VIO server by running the shutdown -Fr command in the root shell environment. When the VIO server comes back, the system is ready for the SDD upgrade:

i. Escape to the root shell, if you are logged in as padmin, by issuing the oem_setup_env command.

ii. When virtual target devices are in the Available state, all underlying SDD vpath devices are opened. In order to close these backing SDD vpath devices, you need to put the virtual target devices into the Defined state. Unconfigure the virtual devices (vtscsiX) into the Defined state with the following command:

rmdev -l vtscsiX

Attention: Make sure you only put the virtual target devices into the Defined state. Do not use the -d option when you run the rmdev command; that would delete the virtual target devices and cause a loss of client data.

Repeat this command for all vtscsi devices that are backed by legacy SDD vpath devices. If all the virtual target devices on a virtual host adapter are backed by legacy SDD vpath devices, then you can change the state of all of them to Defined with a single command instead of multiple commands:

rmdev -l vhostX -R

iii. Run the datapath query device command to ensure all mapped SDD vpath devices are in the CLOSED state:

datapath query device

iv. Upgrade SDD to version 1.6.2.0 or later by running smitty install.

v. The SDD vpath devices are now configured with a unique_id attribute. Run this command to confirm that each vpath device has the unique_id attribute displayed:

lsattr -El vpathX

vi. Check to make sure that all virtual target devices previously put into the Defined state are now in the Available state:

lsdev -C | grep vtscsi

If not, run the following command in the root shell environment to configure the virtual target device into the Available state:

mkdev -l vtscsiX

Note: If you ran rmdev -l vhostX -R before, then run cfgmgr -l vhostX to configure all the virtual target devices on that virtual adapter.

3.1.3 Create/export virtual target devices on VIO server with new SDD vpath devices supporting unique_id attribute

In the VIO server environment, create new virtual target devices with the new SDD vpath devices allocated for this migration. The new SDD vpath devices should be allocated based on the capacity of the legacy SDD vpath devices already in use by a particular vhost instance; refer to the Prepare for Upgrade and Data Migration section for details. You need to create each new virtual target device on the same vhost instance as the old virtual target device it replaces. You can identify the vhost instance from the Backing Device field of the lsmap command output. Then run the mkdev command to create the new virtual target device.

For example:

mkdev -V vpath24 -p vhost1

If the mkdev command is successful, it displays the name of the new virtual target device as Available.

3.1.4 Configure virtual devices backed by new vpath devices (with unique_id attribute) on VIO client

If you shut down the VIO client before the upgrade, power up the VIO client now to configure the new virtual initiator devices: go to the HMC, right-click the VIO client name, and select Activate. If you did not shut down the VIO client, run cfgmgr to configure the new virtual devices on the VIO client.

3.1.5 Migrating data on VIO client

There are different migration procedures depending on how the virtual initiator devices are accessed on the VIO client. There are two access methods:

LVM access - the application accesses a volume group created with the virtual initiator devices
Direct access - the application accesses the virtual initiator devices directly

3.1.5.1 Migrating data on VIO client with LVM Access

Since the virtual device belongs to a volume group on the VIO client, you can perform the data migration using the replacepv command. This command automatically extends the volume group onto the new destination virtual device and copies the data from the source virtual physical volume to the destination virtual physical volume; when the copy completes, it removes the source virtual physical volume from the volume group. The syntax of the command is:

replacepv <src_hdiskM> <dest_hdiskN>

Here is an example. A non-rootvg volume group is originally created with one physical volume, for instance hdisk4. After the configuration in Section 3.1.4, a new physical volume hdisk10 is Available to the client. To perform data migration on this volume group, run:

replacepv hdisk4 hdisk10

Repeat the replacepv command for each virtual physical volume that requires data migration.
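The repeat-for-each-volume step can be sketched as a loop. The source/destination pairs here are hypothetical; on a real client each pair would come from your saved lsmap information, and the echo wrapper would be dropped to run replacepv for real.

```shell
# Dry-run sketch: migrate each non-rootvg physical volume with replacepv.
# Each entry is "old_disk:new_disk"; run() prints instead of executing.
run() { echo "+ $*"; }

pairs="hdisk4:hdisk10 hdisk5:hdisk11"   # hypothetical source:destination pairs
for p in $pairs; do
    src=${p%%:*}                        # text before the colon
    dst=${p##*:}                        # text after the colon
    run replacepv "$src" "$dst"
done
```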

If the volume group is a rootvg, run the following commands to migrate its data. Before you start, find the current boot disk and write down the new virtual device name that will replace it:

i. bootinfo -b                               /* find out the current boot disk */
ii. replacepv <src_hdiskA> <dest_hdiskB>     /* for the boot disk */
iii. replacepv <src_hdiskC> <dest_hdiskD>    /* for the rest of the physical volumes of rootvg */
iv. ln -f /dev/r<dest_hdiskB> /dev/ipldevice
v. bootlist -m normal -o <dest_hdiskB>
vi. savebase
vii. sync;sync;sync
viii. bosboot -ad /dev/ipldevice
ix. chpv -c <src_hdiskA>
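The rootvg sequence can be sketched as a dry-run script. The hdisk2/hdisk12 pair stands in for <src_hdiskA>/<dest_hdiskB>; every command is echoed rather than executed, so the order can be reviewed before running it on a real client.

```shell
# Dry-run sketch of the rootvg migration sequence. run() only prints each
# command; remove the wrapper to execute on the VIO client.
run() { echo "+ $*"; }

src=hdisk2; dst=hdisk12                 # hypothetical boot-disk pair

run bootinfo -b                         # i.    confirm the current boot disk
run replacepv "$src" "$dst"             # ii.   migrate the boot disk
run replacepv hdisk3 hdisk13            # iii.  remaining rootvg PVs (placeholder pair)
run ln -f /dev/r"$dst" /dev/ipldevice   # iv.   repoint ipldevice at the new disk
run bootlist -m normal -o "$dst"        # v.    update the normal boot list
run savebase                            # vi.   save the ODM base customization
run sync                                # vii.  flush filesystem buffers
run bosboot -ad /dev/ipldevice          # viii. rebuild the boot image
run chpv -c "$src"                      # ix.   clear the old disk's boot record
```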

Remove the older virtual devices from the VIO client, which were originally backed by legacy SDD vpath devices:

rmdev -dl <src_hdiskA>
rmdev -dl <src_hdiskC>

On the VIO server, remove the corresponding virtual target devices, originally backed by legacy SDD vpath devices:

rmdev -dl vtscsiM

3.1.5.2 Migrating data on VIO client with Direct Access

On the VIO client, run the following command to find the capacity of the old virtual device:

bootinfo -s hdiskX

The output of this command is in MB. To convert it into a number of blocks, perform the following calculation:

Total number of blocks = (output from the command above + 1) * 2048 - 2

Run the following command to copy the data from the existing virtual hdisk to a new virtual hdisk backed by a new SDD vpath device, skipping the first 2 blocks:

dd if=/dev/hdiskX of=/dev/hdiskY skip=2 count=<Total number of blocks>
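The block-count conversion is plain arithmetic and can be sketched directly. The 10240 MB size below is a made-up example; on the client it would come from the bootinfo -s output, and hdiskX/hdiskY are the real source and destination disks.

```shell
# Compute the dd count from the bootinfo -s output (reported in MB).
# dd's default unit is a 512-byte block, and 1 MB = 2048 such blocks.
size_mb=10240                              # hypothetical output of: bootinfo -s hdiskX
count=$(( (size_mb + 1) * 2048 - 2 ))      # formula from the procedure above
echo "count=$count"                        # prints: count=20973566

# The actual copy (shown, not executed) skips the 2 meta-data blocks:
echo "dd if=/dev/hdiskX of=/dev/hdiskY skip=2 count=$count"
```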

Remove the virtual device from the VIO client:

rmdev -dl hdiskX

On the VIO server, remove the corresponding virtual target device:

rmdev -dl vtscsiX

3.2 Procedures without Data Migration (Scenario 2)


3.2.1 Single VIO server

i. Back up data on the VIOS and the VIOS clients, then run shutdown -F on all clients that are dependent on the VIOS.
ii. Run rmdev -l vhostX -R on all vhost instances.
iii. Follow the procedures in Section 3.1.2, Upgrade SDD host attachment to v1006 or later and SDD to v1620 or later on VIO server. Ensure the virtual target devices are in the Available state.
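The single-VIOS steps can be sketched as a dry-run script; the client shutdown and the vhost names are placeholders, and every command is echoed rather than executed.

```shell
# Dry-run sketch of the single-VIOS sequence: shut down dependent clients,
# then move every vhost's virtual target devices to the Defined state before
# the SDD upgrade of Section 3.1.2. run() prints instead of executing.
run() { echo "+ $*"; }

# i. on each dependent VIO client (after backing up its data):
run shutdown -F

# ii. in the VIOS root shell, for every vhost instance:
for vh in vhost0 vhost1; do          # placeholder vhost names
    run rmdev -l "$vh" -R
done
```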

3.2.2 Multiple VIO servers, with no down time on the client system

An SDD upgrade on a VIOS requires shutting down the VIO clients, unless the clients are configured with redundant VIO servers: either devices on the client are configured as MPIO multipath virtual devices with paths to multiple VIO servers, or a mirrored volume group is created with mirrored physical volumes (virtual disks) mapped from more than one VIO server.

If your VIO client has a redundant configuration, and you want to keep the VIO client up and running during the SDD Host Attachment and SDD upgrade, you need to make sure the virtual devices on the client are configured as MPIO virtual devices. These MPIO virtual devices must have at least two paths, each mapped from a different VIOS's virtual target device sharing the same LUN. Follow these steps:

1) Back up data on the VIOS and the VIO client.
2) On the VIO client, identify the path(s) associated with the virtual target device(s) from the VIOS on which you are going to perform the SDD upgrade.
a. Run the lspath command to determine which vscsi instances provide paths for a particular physical volume. For example:

lspath -l hdisk5


b. To find the location code of the vscsi instances in use by the physical volume, run the lscfg command. For example:

lscfg -l vscsi0

c. The lsmap command displays the location code of the vhost instance in the Physloc field. The slot numbers of the Virtual SCSI client and server adapters are encoded in the location code, preceded by -V2-C. If the virtual adapter mapping is not known, the slot numbers can be used to determine it through the HMC.

3) Run the chpath -l hdiskX -s disable -p vscsiY command to disable the path(s) associated with a particular physical volume on a virtual adapter mapped from the VIOS on which you are going to upgrade SDD. This stops I/O from routing to these paths. For example:

chpath -l hdisk0 -s disable -p vscsi0

4) Follow the procedures in Section 3.1.2, Upgrade SDD host attachment to v1006 or later and SDD to v1620 or later on VIO server.
5) On the VIO client, run chpath -l hdiskX -s enable -p vscsiY to enable the paths that were previously disabled.
6) Make sure I/O resumes on all the enabled paths.
7) Repeat steps 2 through 5 to perform the SDD upgrade on the other VIOS.
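The disable/enable steps above can be sketched as a pair of loops. The hdisk and vscsi names are placeholders for the paths identified in step 2; run() keeps the sketch inert.

```shell
# Dry-run sketch: quiesce the paths served by the VIOS being upgraded, then
# re-enable them afterwards. run() prints instead of executing.
run() { echo "+ $*"; }

disks="hdisk0 hdisk5"        # placeholder client disks with redundant paths
vscsi=vscsi0                 # placeholder adapter mapped from the target VIOS

for d in $disks; do
    run chpath -l "$d" -s disable -p "$vscsi"   # step 3: stop I/O on these paths
done
# ... perform the SDD host attachment and SDD upgrade on that VIOS (step 4) ...
for d in $disks; do
    run chpath -l "$d" -s enable -p "$vscsi"    # step 5: resume I/O
done
```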


4. Migrating SDD to SDDPCM on VIO System


Migration from SDD to SDDPCM on a VIOS requires manually removing all SDD vpath devices, all SDD-supported storage hdisks, and all virtual target devices on the VIOS. If you cannot shut down the VIO client, for instance because the VIO client has rootvg built on virtual devices or there are critical applications running on the client, then the VIO client must be configured with multiple VIO servers to ensure availability. Before the migration starts, remove the path(s) from the virtual device(s) on the VIO client so that I/O is not routed to the VIOS on which you are going to perform the SDD to SDDPCM migration. After the migration completes on that VIOS, run cfgmgr to bring the path(s) back, then remove the other path(s) of the virtual devices on the client so the migration can be performed on the other VIO server.

4.1 Migrating from Legacy SDD to SDDPCM


If you want to migrate from an SDD version earlier than 1.6.2.0 to SDDPCM, you need to upgrade to SDD version 1.6.2.0 or later first. Then migrate from SDD to SDDPCM by following the procedures in the Migrating from new SDD to SDDPCM section.

4.2 Migrating from new SDD to SDDPCM


1) Back up data on the VIOS and the VIO client.
2) If your VIO client does not have a redundant configuration, shut down the VIO client partition by running shutdown -F. If your VIO client has a redundant configuration and you want to keep the VIO client up and running during the SDD Host Attachment and SDD upgrade, you need to make sure the virtual devices on the client are configured as MPIO virtual devices. These MPIO virtual devices must have at least two paths, each mapped from a different VIOS's virtual target device. Follow these steps:
i. On the VIO client, identify the path(s) associated with the virtual target device(s) from the VIOS on which you are going to perform the SDD upgrade.
a. Run the lspath command to determine which vscsi instances provide paths for a particular physical volume. For example:

lspath -l hdisk5


b. To find the location code of the vscsi instances in use by the physical volume, run the lscfg command. For example:

lscfg -l vscsi0

c. The lsmap command displays the location code of the vhost instance in the Physloc field. The slot numbers of the Virtual SCSI client and server adapters are encoded in the location code, preceded by -V2-C. If the virtual adapter mapping is not known, the slot numbers can be used to determine it through the HMC.

ii. Run the chpath -l hdiskX -s disable -p vscsiY command to disable the path(s) associated with a particular physical volume on a virtual adapter mapped from the VIOS on which you are going to upgrade SDD. This stops I/O from routing to these paths. For example:

chpath -l hdisk0 -s disable -p vscsi0

3) On the VIO server, record the relationship between the storage devices and the virtual target devices by running lsmap -all, and record the unique ID and the virtual target device name.
4) On the VIO server, if you are in the restricted padmin shell, first switch to the root shell environment by issuing the oem_setup_env command.
5) Unconfigure the virtual devices (vtscsiX) into the Defined state with the following command:

rmdev -l vtscsiX

6) Unmount the file systems and vary off the volume groups that are associated with SDD vpath devices.
7) Remove all the SDD vpath devices by running rmdev -dl dpo -R.
8) Stop the SDD server.
9) Remove all the SDD-supported storage hdisk devices by running rmdev -dl hdiskX.
10) Deinstall the SDD package.
11) Deinstall the Host Attachment package(s).
12) Install the SDDPCM Host Attachment package.
13) Install the SDDPCM package.


14) Reboot the VIO server.
15) When the system comes back, check that all the virtual target devices are Available and that the mapping is unchanged by running the lsmap -all command.
16) Vary on the volume groups and mount the file systems if the volume groups are not set to auto varyon.
17) If your VIO client does not have a redundant configuration and you shut down the VIO client partition in an earlier step, you can bring up the client now. If your VIO client has a redundant configuration and you did not shut down the client, perform these procedures on the VIO client to recover the paths:
i. Run chpath -l hdiskX -s enable -p vscsiY to enable the paths that were previously disabled.
ii. Make sure I/O resumes on all the enabled paths.
iii. Repeat steps 2 through 17 to perform the SDD to SDDPCM migration on the other VIOS.
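The VIOS-side device teardown at the core of this procedure (steps 5 through 14) can be sketched as a dry-run script. The device names are placeholders, the SDD server subsystem name (sddsrv) is an assumption, and run() keeps everything inert.

```shell
# Dry-run sketch of the SDD-to-SDDPCM device teardown in the VIOS root shell.
# run() prints each command instead of executing it.
run() { echo "+ $*"; }

for vt in vtscsi0 vtscsi1; do        # placeholder virtual target devices
    run rmdev -l "$vt"               # step 5:  move each vtscsi to Defined
done
run rmdev -dl dpo -R                 # step 7:  remove all SDD vpath devices
run stopsrc -s sddsrv                # step 8:  stop the SDD server (assumed subsystem name)
for hd in hdisk2 hdisk3; do          # placeholder SDD-supported storage hdisks
    run rmdev -dl "$hd"              # step 9:  remove the storage hdisks
done
# steps 10-13: deinstall SDD and its host attachment, install the SDDPCM
# host attachment and SDDPCM (e.g. via installp or smitty), then:
run shutdown -Fr                     # step 14: reboot the VIOS
```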


Trademarks
AIX and IBM are trademarks of the IBM Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Notices
No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Product information and data have been reviewed for accuracy as of the date of initial publication. Product information and data are subject to change without notice. This document could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) described herein at any time without notice. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Consult your local IBM representative or IBM Business Partner for information about the products and services available in your area.

THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The performance data contained herein were obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. The responsibility for use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's or user's ability to evaluate and integrate them into their operating environment. Customers or users attempting to adapt these techniques to their own environments do so at their own risk. IN NO EVENT SHALL IBM BE LIABLE FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO, LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:


IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A.
International Business Machines Corporation 2007 IBM Systems Group 9000 S. Rita Road Tucson, AZ 85744 Printed in the United States of America 02-07 All Rights Reserved

