EMC Corporation
171 South Street Hopkinton, MA 01748-9103 Corporate Headquarters: (508) 435-1000, (800) 424-EMC2 Fax: (508) 435-5374 Service: (800) SVC-4EMC
Trademark Information
EMC, CLARiiON, and Symmetrix are registered trademarks and EMC Control Center, PowerPath, SRDF, and TimeFinder are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.
Tables

- PowerPath Feature Summary on AIX
- PowerPath Feature Summary on Tru64 UNIX
- PowerPath Feature Summary on HP-UX
- PowerPath Installation Worksheet (Solaris Host)
- PowerPath Feature Summary on Solaris
- Native Devices versus emcpower Devices
Preface

As part of its effort to continuously improve and enhance the performance and capabilities of the EMC product line, EMC periodically releases new versions of PowerPath. Therefore, some functions described in this manual may not be supported by all versions of PowerPath or the storage system hardware it supports. For the most up-to-date information on product features, see your product release notes.

If a PowerPath feature does not function properly or as described in this manual, please contact the EMC Customer Support Center for assistance. Refer to Where to Get Help for contact information.

This guide describes how to install and remove PowerPath for UNIX Version 3.0 on each supported UNIX platform. It also describes platform-specific administrative tasks.

Audience and Prerequisites

This guide is intended for storage administrators and other information system professionals responsible for installing and maintaining PowerPath. In addition to understanding PowerPath, administrators should be familiar with the:
- Host operating system where PowerPath runs
- Applications used with PowerPath, such as clustering software
Content Overview

This guide has the following chapters and appendixes:

Part 1, PowerPath on AIX

- Chapter 1, Installing PowerPath on an AIX Host, describes how to install and upgrade PowerPath on AIX.
- Chapter 2, Configuring a PowerPath Boot Device on AIX, describes how to configure a boot device and disable PowerPath on a storage system boot device.
- Chapter 3, PowerPath in an HACMP Cluster, describes how to install PowerPath and HACMP on a new host and integrate PowerPath into an existing HACMP environment.
- Chapter 4, Removing PowerPath from an AIX Host, describes how to uninstall PowerPath on AIX.
- Chapter 5, PowerPath Administration on AIX, discusses AIX issues and administrative tasks.

Part 2, PowerPath on Tru64 UNIX

- Chapter 6, Installing PowerPath on a Tru64 UNIX Host, describes how to install and upgrade PowerPath on Tru64 UNIX.
- Chapter 7, Configuring a PowerPath Boot Device on Tru64 UNIX, discusses configuring a PowerPath boot device on Tru64 UNIX.
- Chapter 8, PowerPath in a Tru64 UNIX Cluster, describes how to plan for and install PowerPath and TruCluster clusters on new hosts.
- Chapter 9, Uninstalling PowerPath on a Tru64 UNIX Host, describes how to uninstall PowerPath on Tru64 in simple and clustered configurations.
- Chapter 10, PowerPath Administration on Tru64 UNIX, discusses Tru64 UNIX issues and administrative tasks.

Part 3, PowerPath on HP-UX

- Chapter 11, Installing PowerPath on an HP-UX Host, describes how to install and upgrade PowerPath on HP-UX.
- Chapter 12, Configuring a PowerPath Boot Device on HP-UX, describes how to configure a PowerPath device as the boot device.
- Chapter 13, PowerPath in an MC/ServiceGuard Cluster, describes how to install MC/ServiceGuard on new hosts, and integrate PowerPath into an existing environment.
- Chapter 14, Removing PowerPath from an HP-UX Host, describes how to uninstall PowerPath on HP-UX.
- Chapter 15, PowerPath Administration on HP-UX, discusses HP-UX issues and administrative tasks.

Part 4, PowerPath on Solaris

- Chapter 16, Installing PowerPath on a Solaris Host, describes how to install and upgrade PowerPath on Solaris.
- Chapter 17, Configuring a PowerPath Boot Device on Solaris, describes how to configure a PowerPath device as the boot device.
- Chapter 18, PowerPath in a Solaris Cluster, describes how to work with Sun Cluster and VERITAS Cluster Server.
- Chapter 19, Removing PowerPath from a Solaris Host, describes how to uninstall PowerPath on Solaris.
- Chapter 20, EMCPOWER Devices and Solaris Applications, describes how to use PowerPath with Solaris applications that use emcpower devices.
- Chapter 21, PowerPath Administration on Solaris, discusses Solaris issues and administrative tasks.

Part 5, Appendixes

- Appendix A, PowerPath Patches, describes how to identify and obtain PowerPath patch releases.
- Appendix B, Customer Support, reviews the EMC process for detecting and resolving software problems, and provides essential questions that you should answer before contacting the EMC Customer Support Center.
Here is the complete set of EMC enterprise storage documentation for PowerPath, all available from EMC Corporation:

- PowerPath Product Guide, EMC P/N 300-000-510
- PowerPath for UNIX Installation and Administration Guide, EMC P/N 300-000-511
- PowerPath for Windows Installation and Administration Guide, EMC P/N 300-000-512
- PowerPath for Novell NetWare Installation Guide, EMC P/N 300-000-513
- PowerPath for Linux Installation Guide, EMC P/N 300-000-514
- EMC Installation Roadmap for FC-Series Storage Systems, EMC P/N 069-001-166
- EMC CLARiiON Host Connectivity Guide, EMC P/N 014-003-106
- EMC Navisphere Manager Version 6.X Administrators Guide, EMC P/N 069-001-125
- Connectrix Enterprise Storage Network System Topology Guide, EMC P/N 300-600-008
- ESN Manager Product Guide, EMC P/N 300-999-210
- Volume Logix Product Guide, EMC P/N 300-999-024
- Symmetrix Fibre Channel Product Guide, Volumes I and II, EMC P/N 200-999-642
- Symmetrix Open Systems Environment Product Guide, Volumes I and II, EMC P/N 200-999-563
- EMC Control Center Symmetrix Manager for UNIX Product Guide, EMC P/N 300-999-234
- EMC Control Center TimeFinder Manager for UNIX Product Guide, EMC P/N 300-999-240
- Symmetrix Enterprise Storage Platform Product Guide, EMC P/N 200-999-556
- Symmetrix High Availability Environment Product Guide, EMC P/N 200-999-566
- The EMC product guide for your storage system model
- The EMC installation guide and vendor documentation for your HBA (Host Bus Adapter)
EMC uses the following conventions for notes, cautions, warnings, and danger notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION: A caution contains information essential to avoid damage to the system or equipment. The caution may apply to hardware or software.

This manual uses the following type style conventions:
AVANT GARDE
- Keystrokes

Palatino, bold
- Dialog box, button, icon, and menu items in text
- Selections you can make from the user interface, including buttons, icons, options, and field names

Palatino, italic
- New terms or unique word usage in text
- Command line arguments when used in text
- Book titles

System prompts and displays and specific filenames or complete paths. For example:

    working root directory [/usr/emc]:
    c:\Program Files\EMC\Symapi\db
Where to Get Help

Obtain technical support by calling your local EMC sales office. For a list of EMC locations, go to this EMC Web site:
http://www.emc.com/contact/
For service, call EMC Customer Service:

USA:        (800) 782-4362 (SVC-4EMC)
Canada:     (800) 543-4782 (543-4SVC)
Worldwide:  (508) 497-7901
Follow the voice menu prompts to open a service call. For additional information about EMC products and services available to customers and partners, refer to the EMC Powerlink Web site:
http://powerlink.emc.com
For information about products and technologies qualified for use with the EMC software described in this manual, go to this EMC Web page:
http://www.emc.com/horizontal/interoperability
Choose the link to EMC Interoperability Support Matrices, then the link to EMC Support Matrix.

Your Comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please e-mail us at techpub_comments@emc.com to let us know your opinion of, or any errors in, this manual.
PART 1
PowerPath on AIX
This section discusses PowerPath on an AIX host.

- Chapter 1, Installing PowerPath on an AIX Host, describes how to install PowerPath and upgrade to a new version.
- Chapter 2, Configuring a PowerPath Boot Device on AIX, describes how to configure a boot device and disable PowerPath on a storage system boot device.
- Chapter 3, PowerPath in an HACMP Cluster, describes how to install PowerPath and HACMP on a new host and integrate PowerPath into an existing HACMP environment.
- Chapter 4, Removing PowerPath from an AIX Host, describes how to uninstall PowerPath.
- Chapter 5, PowerPath Administration on AIX, discusses PowerPath for AIX issues and administrative tasks.
Installing PowerPath on an AIX Host
This chapter describes how to install and upgrade PowerPath on an AIX host. It also discusses upgrading the operating system. The chapter covers the following topics:
- Before You Install
- Installation Procedure
- After You Install
- Configuration Guidelines
- Upgrading PowerPath
- Installing a Patch
- Upgrading AIX
- File Changes Caused by PowerPath Installation
Before You Install

Before you install PowerPath:

- Locate the PowerPath installation CD and, if you are installing PowerPath for the first time, your 24-digit registration number. The registration number is on the License Key Card delivered with the PowerPath media kit. (If you are upgrading from an earlier version of PowerPath, PowerPath will use your old key.)

- Verify that your environment meets the requirements in:

  - Chapter 3, PowerPath Configuration Requirements, in the PowerPath Product Guide. That chapter describes the host-storage system interconnection topologies that PowerPath supports.

  - The Environment and System Requirements section of the EMC PowerPath for UNIX Version 3.0 Release Notes. That section describes minimum hardware and software requirements for the host and supported storage systems. We update the release notes periodically and post them on http://powerlink.emc.com. Before you install PowerPath, check the Powerlink Web site, or contact your EMC customer support representative, for the most current information.

- Ensure that the storage system logical devices are configured for PowerPath support. Refer to the Symmetrix Open Systems Environment Product Guide, Volume I, or the Installation Roadmap for FC-Series Storage Systems.

- Ensure that the CLARiiON SnapView utility admsnap and other CLARiiON host- and storage-system-based software is up to date.

- Vary off all volume groups that use storage system hdisk devices, except the root volume group (rootvg). If a file system or application is using these volume groups, unmount the file system or stop the application before varying off the volume group.
Ensure that any required EMC Symmetrix AIX Licensed Program Products (LPPs) are installed:

LPP                   Description      Required For            Comments
Symmetrix.aix.rte     AIX devices LPP  All configurations
Symmetrix.fcp.rte     IBM driver kit   Fibre-attached devices  You must install at least one of these
Symmetrix.fcscsi.rte  EMC driver kit   Fibre-attached devices  LPPs if you use fibre-attached devices.
                                                               You can choose to install both.
Symmetrix.ha.rte                       Clusters
Ensure that the AIX hdisk devices are configured properly: each logical path that PowerPath will use to access a storage system device must have an hdisk configured for it. If the number of storage system hdisk devices is incorrect, complete the following procedure before installing PowerPath. This procedure configures the hdisks correctly for PowerPath:

1. Make sure all physical device connections are in place.

2. Remove the AIX hdisks corresponding to storage system devices. You can use the following command to remove hdisks corresponding to Symmetrix devices:

   lsdev -CtSYMM* -Fname | xargs -n1 rmdev -dl

   This command cannot delete hdisks that are in use. Those hdisks do not need to be removed, and you can ignore any error messages about them. You must remove hdisks corresponding to CLARiiON devices manually.

3. Once all storage system hdisks are removed, run the /usr/lpp/Symmetrix/bin/emc_cfgmgr script to ensure that hdisks are configured for each path. This script invokes the AIX cfgmgr tool to probe each adapter bus separately. After it has run, there should be a storage system hdisk configured for each device on each path.
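The cleanup-and-reconfigure sequence above can be sketched as a short root script. This is AIX-only, and the verification step at the end is an addition for illustration, not part of the original procedure:

```shell
# Remove Symmetrix hdisks, then rebuild one hdisk per logical path.
# Run as root on the AIX host. In-use hdisks fail to delete; that is harmless.
lsdev -CtSYMM* -Fname | xargs -n1 rmdev -dl

# EMC's cfgmgr wrapper probes each adapter bus separately so every
# path gets its own hdisk.
/usr/lpp/Symmetrix/bin/emc_cfgmgr

# Optional check (an assumption, not in the original procedure):
# each storage-system device should now appear once per path.
lsdev -Cc disk
```

Remember that CLARiiON hdisks must still be removed by hand before running the script.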
Ensure that the ownership and permission attributes of all hdisk devices are correct. PowerPath configuration sets the ownership and permission values of each hdiskpower device to match the values of one of the path devices.

Disable the AIX Automatic Error Log Analysis (diagela). If diagela is enabled (the default) and a permanent hardware resource error is logged, diagela is invoked. Since PowerPath provides its own path testing logic, EMC recommends that you disable diagela. To do so, log in as root and enter:
/usr/lpp/diagnostics/bin/diagela DISABLE
Ensure that the SC_SIMPLE_Q flag is set for applications that use pass-through SCSI commands with devices handling I/O. Such applications must set the SC_SIMPLE_Q flag to indicate command tag queuing. If this flag is not set, the pass-through SCSI commands could fail. The user application is responsible for handling this condition. On AIX 5, if multiple logical partitions (LPARs) on the same host will use PowerPath to access the same storage system volumes, add the following line to the /etc/environment file on each LPAR:
PP_LPAR_KEY_FIX=1
This ensures that device reservations made from one LPAR will prevent access to the same device from a different LPAR.
If you do not plan to use LPARs, you need not add this line to /etc/environment.
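A defensive way to add the setting is to test for it first, so repeated runs do not duplicate the line. This sketch is plain POSIX shell; ENV_FILE points at a scratch copy for illustration, whereas on a real LPAR it would be /etc/environment, edited as root:

```shell
# Append PP_LPAR_KEY_FIX=1 once, without duplicating it on re-runs.
# ENV_FILE is a scratch copy for illustration; on a real LPAR use
# /etc/environment (as root).
ENV_FILE=/tmp/environment.demo
touch "$ENV_FILE"

if ! grep -q '^PP_LPAR_KEY_FIX=' "$ENV_FILE"; then
    echo 'PP_LPAR_KEY_FIX=1' >> "$ENV_FILE"
fi

grep '^PP_LPAR_KEY_FIX' "$ENV_FILE"
```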
Uninstall any version of Navisphere Application Transparent Failover (ATF) that is installed on the host.
Installation Procedure
You can install PowerPath directly from the CD-ROM, using either command line entries or the System Management Interface Tool (SMIT). The following sections describe how to:

- Mount the CD-ROM
- Install PowerPath at the command line
- Install PowerPath on an AIX 4.3 host using SMIT
- Install PowerPath on an AIX 5 host using SMIT
If you are upgrading from an earlier release of PowerPath, see Upgrading PowerPath on page 1-12 before you begin the installation.
Mounting the CD-ROM

To mount the PowerPath installation CD-ROM:

1. Log in as root.

2. Create the directory /cdrom to be the mount point for the CD-ROM. Enter:
mkdir /cdrom
3. Insert the PowerPath installation CD-ROM into the CD-ROM drive. 4. Mount the CD on /cdrom. Enter:
mount -v cdrfs -p -r /dev/cd0 /cdrom
5. Change to the directory containing the version of PowerPath you wish to install. Enter:
cd /cdrom/AIX/directory_name
where directory_name is one of:

- aix5: contains the AIX 5 version of PowerPath
- aix43: contains the AIX 4.3 version of PowerPath
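The mount steps above reduce to three commands, shown here for the AIX 5 directory (AIX-only, run as root):

```shell
# Steps 1-5 combined: mount the PowerPath CD-ROM and change to the
# install directory.
mkdir /cdrom                            # mount point for the CD-ROM
mount -v cdrfs -p -r /dev/cd0 /cdrom    # AIX CD-ROM filesystem, read-only
cd /cdrom/AIX/aix5                      # use aix43 instead for AIX 4.3
```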
Installing from the Command Line
To install PowerPath on an AIX 4.3 or AIX 5 host using command line entries: 1. Mount the CD-ROM in the CD-ROM drive and change to the appropriate directory (refer to Mounting the CD-ROM on page 1-5). 2. Install the software. Enter:
installp -ad install_directory EMCpower

Refer to the man page for optional flags for the installp command.
PowerPath is installed on the host, but you must enter your license key and perform some other administrative tasks before PowerPath can run on the host. Refer to After You Install on page 1-9 for postinstallation information and instructions.
Installing from SMIT on AIX 4.3

The SMIT procedure described in this section assumes you are running the X Window System version of SMIT. You can use the tty version of SMIT, provided you substitute the appropriate tty SMIT procedures.
To install PowerPath using SMIT: 1. Mount the CD-ROM in the CD-ROM drive and change to the appropriate directory (refer to Mounting the CD-ROM on page 1-5). 2. Open SMIT. Enter:
smit
3. On the main SMIT window, click Software Installation and Maintenance.

4. On the Software Installation and Maintenance window, click Install and Update Software.

5. On the Install and Update Software window, click Install and Update Software by Package Name (includes devices and printers). You see the following prompt:
INPUT device / directory for software
6. Enter . to indicate the current directory, and click OK. A Multi-select List opens.

7. Select EMCpower and click OK. Another Multi-select List opens.

8. Select the first line in the list, EMCpower, and click OK. The Install and Update Software dialog box opens.

9. Review the installation options, make any necessary changes, and click OK.

10. When prompted, click OK to confirm that you want to continue. The screen displays information about the installation, ending with:
+------------------------------------------------------+
Summaries:
+------------------------------------------------------+

Installation Summary
--------------------
Name                         Level    Part  Event  Result
---------------------------------------------------------
EMCpower.base                3.0.0.0  USR   APPLY  SUCCESS
EMCpower.multi_path_clariio  3.0.0.0  USR   APPLY  SUCCESS
EMCpower.multi_path          3.0.0.0  USR   APPLY  SUCCESS
EMCpower.consistency_grp     3.0.0.0  USR   APPLY  SUCCESS
11. Close SMIT: select Exit SMIT from the Exit menu.

PowerPath is now installed on the host. You must enter your license key and perform other administrative tasks before PowerPath can run on the host. Refer to After You Install on page 1-9 for postinstallation information and instructions.
Installing from SMIT on AIX 5
The SMIT procedure described in this section assumes you are running the X Window System version of SMIT. You can use the tty version of SMIT, provided you substitute the appropriate tty SMIT procedures.
To install PowerPath using SMIT: 1. Mount the CD-ROM in the CD-ROM drive and change to the appropriate directory (refer to Mounting the CD-ROM on page 1-5). 2. Open SMIT. Enter:
smit
3. On the main SMIT window, click Software Installation and Maintenance.

4. On the Software Installation and Maintenance window, click Install and Update Software.

5. On the Install Software window, click Install and Update Software by Package Name (includes devices and printers).

6. Press F4 to open the Multi-select list of software to install.

7. Select the first line in the list, EMCpower ALL, and press ENTER. The Install and Update Software dialog box opens.

8. Review the installation options, make any necessary changes, and click OK.

9. When prompted, click OK to confirm that you want to continue. The screen displays information about the installation, ending with:
+------------------------------------------------------+
Summaries:
+------------------------------------------------------+

Installation Summary
--------------------
Name                         Level    Part  Event  Result
---------------------------------------------------------
EMCpower.base                3.0.0.0  USR   APPLY  SUCCESS
EMCpower.multi_path_clariio  3.0.0.0  USR   APPLY  SUCCESS
EMCpower.multi_path          3.0.0.0  USR   APPLY  SUCCESS
EMCpower.consistency_grp     3.0.0.0  USR   APPLY  SUCCESS
10. Close SMIT: Select Exit SMIT from the Exit menu.
PowerPath is now installed on the host. You must enter your license key and perform other administrative tasks before PowerPath can run on the host. Refer to After You Install on page 1-9 for postinstallation information and instructions.
After You Install

1. Register PowerPath:

a. Enter:
emcpreg -install
c. Enter the 24-character alphanumeric sequence found on the License Key Card delivered with the PowerPath media kit and press ENTER. If you enter a valid registration key, you see the following output:
Key successfully installed.
Registration key:

If you enter an invalid registration key, the screen displays an error message and prompts you to enter a valid registration key. See the PowerPath Product Guide for a list of possible error messages returned by the emcpreg utility.
d. Press ENTER. You see the following output:
1 key(s) successfully registered.
This command returns a message indicating whether your registration number is valid. See the PowerPath Product Guide for a complete list of powermt messages.

3. Check the EMC support Web site or the PowerPath anonymous FTP site for any patches to PowerPath 3.0, and install any required patches. Refer to Appendix A, PowerPath Patches, for information on identifying, obtaining, and installing patches.

4. Initialize the PowerPath hdiskpower devices and make them available to the host. You can initialize devices using either command line entries or SMIT. To initialize devices at the command line, enter:
powermt config
To initialize devices using SMIT:

b. On the System Management window, select Devices.
c. Select PowerPath Disk.
d. Select Configure All Powerpath Devices.
e. Exit SMIT.

5. Unmount the CD. Enter:
cd /
umount /cdrom
7. Perform other administrative tasks as necessary:

- Vary on volume groups. Vary on any existing volume groups that you varied off before installing PowerPath. Then remount any file systems you unmounted and restart any applications you stopped. You need not reconfigure these volume groups: the installation procedure migrates existing volume groups that use storage system devices from AIX hdisks to PowerPath hdiskpower devices.

  Note, however, that if you failed to vary off a volume group before installing PowerPath, this migration will fail. You will be able to vary off the volume group, but any vary on attempts result in errors. To correct this state, vary off the volume group, unconfigure PowerPath by running rmdev -l hdiskpowerN on each associated hdiskpower device, and then run powermt config. The varyon command should now succeed, and the volume group should be using hdiskpower devices.

- When defining new volume groups, use PowerPath hdiskpower devices, not AIX hdisk devices.

- Reconfigure applications that access AIX hdisks directly. If an application accesses AIX hdisks directly, rather than through a volume group (a DBMS, for example), you must reconfigure that application to use PowerPath hdiskpower devices if you want PowerPath load balancing and path failover functionality. Run powermt display dev=all to determine the correspondence between PowerPath hdiskpower devices and AIX hdisk devices.
You need not reconfigure applications that access hdisks through a volume group.
- When adding new applications to your system that typically would access hdisks directly, configure them to use hdiskpower devices instead.

- If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center.
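The recovery sequence for a volume group that was left varied on during installation might look like this; datavg and the two hdiskpower devices are illustrative names, not output from a real system:

```shell
# Recover a volume group whose migration to hdiskpower devices failed
# because it was varied on during installation. AIX-only sketch;
# datavg, hdiskpower0 and hdiskpower1 are illustrative names.
varyoffvg datavg          # vary off the affected volume group

rmdev -l hdiskpower0      # unconfigure (not delete) each associated
rmdev -l hdiskpower1      # hdiskpower device

powermt config            # rebuild the PowerPath configuration
varyonvg datavg           # should now succeed on hdiskpower devices
```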
Configuration Guidelines
If your PowerPath configuration includes a CLARiiON storage system, use the appropriate management commands to enable PowerPath failover on the storage system. PowerPath cannot see any devices, or paths to devices, on a storage system that is not properly configured for PowerPath.
Upgrading PowerPath
We update the PowerPath Release Notes periodically and post them on http://powerlink.emc.com. Before you upgrade PowerPath, check the Powerlink Web site for the most current information.
You can upgrade PowerPath for AIX without uninstalling a previous version. You must, however, reboot the host: after an upgrade, PowerPath commands such as powermt do not work until the host is rebooted.

Upgrading to PowerPath 3.0 from 2.x converts the existing 12-character license key in the /etc/powerpath_registration file to a 24-character key in the /etc/emcp_registration file. You need not reenter license information.

To upgrade to PowerPath 3.0:

1. Follow the instructions in Installation Procedure on page 1-5.
Do not reenter your registration key.
Installing a Patch
Every patch release is accompanied by a Readme file that describes how to install the patch. Appendix A, PowerPath Patches, describes how to obtain patches and their accompanying Readme files.
Upgrading AIX
You must uninstall PowerPath before you upgrade AIX. After you complete the operating system upgrade, reinstall PowerPath.
File Changes Caused by PowerPath Installation

PowerPath installation adds the following files:

./usr/sbin/pprootdev
./usr/lib/drivers/cgext
./usr/lib/drivers/mpcext
./usr/lib/libcg.so
./usr/lib/libcong.so
./usr/lib/libemcp_mp_rtl.so
./usr/lib/drivers/mpext
./usr/lib/libmp.a
./usr/sbin/emcpreg
./usr/sbin/powermt
./usr/share/man/man1/emcpreg.1
./usr/share/man/man1/powermt.1
./usr/share/man/man1/powerprotect.1
When you install PowerPath on AIX, the PowerPath template for error logging is updated. In addition, /etc/trcfmt is updated with the PowerPath trace format file. PowerPath installation causes the following ODM modifications:

- Additions to PdDv for hdiskpower and powerpath0 devices.
- Additions to PdAt for attributes for hdiskpower and powerpath0 devices.
- Additions to SMIT menus for PowerPath controls.
- Addition of a Config_Rule to configure PowerPath after reboot.
- Updating of PdCn so the lsparent command works for PowerPath devices.
Configuring a PowerPath Boot Device on AIX
This chapter describes how to configure a PowerPath hdiskpower device as the boot device for an AIX host. The chapter covers the following topics:

- Setting Up a PowerPath Boot Device
- Disabling PowerPath on a Storage System Boot Device
Setting Up a PowerPath Boot Device

All the path devices that make up the hdiskpower device must be considered valid boot devices by AIX. The boot device should not be visible to any other host attached to the same storage system. If using a storage system device as a boot device in an HACMP environment (with or without PowerPath), other hosts should not be able to address the boot device.

The host's boot list must contain all hdisks that comprise the hdiskpower device being used as the boot device. Otherwise, the host may fail to boot if one or more paths is disabled while the machine tries to boot. The boot list is a list of hdisks stored in the hardware's NVRAM. At startup, the system searches for an AIX boot image in this list of hdisks. The contents of the list can be modified with the AIX bootlist command if AIX is running. If the system fails to boot, you can change the boot list in either of two ways:

- Boot the system from an installation device (CD or tape) into Maintenance Mode. Select the option to access the root volume group, and then run bootlist from the shell.

- Enter the System Management Services menu when the system starts, and use the Multiboot menu options to change the boot list. This method is faster, but it is more difficult to determine which devices listed in the menu correspond to the desired storage system device.
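Querying and setting the boot list from a running system can be sketched as follows; the hdisk names are illustrative, and the list should contain every hdisk that is a path to the boot device:

```shell
# Show the current normal-mode boot list (AIX-only).
bootlist -m normal -o

# Replace it with all hdisks that are paths to the boot hdiskpower
# device, so the host can still boot if one path is down.
# hdisk names are illustrative.
bootlist -m normal hdisk2 hdisk5 hdisk8 hdisk11
```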
The pprootdev tool changes AIX configuration rules and updates the boot image so the AIX Logical Volume Manager will use hdiskpower devices to vary on the rootvg the next time the system is booted. pprootdev cannot change the state of rootvg on a running system. It does, however, modify ODM data that other tools use to determine what devices rootvg is using. For this reason, certain commands report information that may appear to be incorrect if they are run after pprootdev is run and before a system reboot.

If the system contains sufficient internal storage, install and configure the operating system on the internal device(s) and then use the AIX alt_disk_install tool to clone the operating system image onto the storage system. If this is not feasible, follow this process to install a fresh copy of AIX directly onto a storage system device and use PowerPath to manage multiple paths to the root volume group:

1. Start with only a single connection to the storage subsystem. If you are using a switch, only one logical path should be configured.

2. Install AIX on a storage system device accessed by a SCSI adapter.

3. Install the current storage system drivers.

4. Reboot the host.

5. Use rmdev -d to delete any hdisks that are in the Defined state.

6. Install PowerPath (refer to Chapter 1, Installing PowerPath on an AIX Host).

7. Connect the remaining physical connections between the host and the storage system. If you are using a switch, update the zone definitions to the desired configuration.

8. Make sure an hdisk is configured for each path (refer to Before You Install on page 1-2).

9. Run powermt config.

10. Use the pprootdev command to set up multipathing to the root device. Enter:
pprootdev on
11. Use the bootlist command to add all alternate path hdisk devices to the boot list. 12. Reboot the host.
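Steps 9 through 12 of the procedure above, sketched with illustrative hdisk names (AIX-only; shutdown -Fr is one way to perform the reboot):

```shell
powermt config                    # step 9: configure PowerPath devices
pprootdev on                      # step 10: multipath the root device
bootlist -m normal hdisk2 hdisk5  # step 11: all alternate-path hdisks
shutdown -Fr                      # step 12: reboot the host
```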
Configuring an Existing PowerPath Installation

This section describes the process for converting a system with AIX installed on internal disks to boot from storage system logical devices. The process first transfers a copy of the complete operating system from an internal disk to logical devices on a storage system. It then configures PowerPath so the root volume group takes advantage of multipathing and failover capabilities. This is the recommended process, as it allows you to revert to the internal disks in the event of a problem.

Before you start, ensure that the AIX alt_disk_install LPP is installed on the system. The LPP is on the AIX installation CD. Apply the rte and boot_images filesets.

Then follow these steps:

1. Ensure that all device connections to the storage system are established.

2. Ensure that all hdisks are configured properly. (Refer to Before You Install on page 1-2.)

3. Run powermt config.

4. Use the rmdev command with the -d option to remove all PowerPath devices, including the powerpath0 device. PowerPath should remain installed, but all PowerPath devices must be deleted.

5. Run lsdev -Ct power. No devices should be listed in the output.

6. Determine which hdisks on the storage system will receive the copy of the operating system.

7. Run alt_disk_install -C hdisk_list to create the copy on the storage system hdisk(s).

8. Reboot the system. The system should boot using the hdisks specified in the previous step.

9. Run powermt config.

10. Run bootlist -m normal -o to determine which hdisk is in the boot list.

11. Use powermt to determine which hdiskpower device contains the hdisk in the boot list.
12. Use the bootlist command to include all the path hdisks for the hdiskpower found in the previous step.
13. Run pprootdev on.
14. Reboot the system. When the system comes up, rootvg should be using hdiskpower devices.
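The lookup in steps 10 and 11, finding which hdiskpower contains the boot hdisk, can be sketched as a small filter over powermt display dev=all-style output. The sample text below is a simplified, hypothetical rendering of that output; real output varies by PowerPath version.

```shell
# Given "powermt display dev=all"-style text on stdin, print the pseudo
# device (hdiskpower) whose path list contains the named boot hdisk.
find_hdiskpower_for() {
    boot="$1"
    awk -v boot="$boot" '
        /^Pseudo name=/ { sub(/^Pseudo name=/, ""); pseudo = $0 }
        $0 ~ ("[ \t]" boot "([ \t]|$)") { print pseudo; exit }
    '
}

# Hypothetical sample output:
sample='Pseudo name=hdiskpower0
   0 scsi0 hdisk2  active alive
Pseudo name=hdiskpower1
   0 scsi0 hdisk3  active alive
   1 scsi1 hdisk7  active alive'

echo "$sample" | find_hdiskpower_for hdisk3   # → hdiskpower1
```

You would then pass hdisk3 and hdisk7 (the paths of hdiskpower1) to the bootlist command in step 12.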
The AIX lsvg command, when used with the -p flag, displays devices in use by the specified volume group. However, this command is not designed to deal with PowerPath or with storage system logical devices that are addressable as different hdisk devices. In general, the output of the command lsvg -p vgname shows correct information, but several administrative tasks change the ODM and could cause lsvg to show misleading information. These tasks include:
- Use of the pprootdev tool. This tool changes the ODM with the expectation that the system will be rebooted soon after. lsvg shows misleading device information when run after pprootdev. This is not an indication that something is wrong. A reboot corrects the situation but is not required.
- Use of cfgmgr to create new hdisk devices after PowerPath is already configured. Always run powermt config after adding new devices, to include them in PowerPath's configuration.
After a system boots from a PowerPath device, the bosboot tool cannot function correctly. This happens because of the state of the configuration after booting from a PowerPath device and the fact that bosboot expects the boot device to be an hdisk, not an hdiskpower device. Several system administrative tasks require the system boot image to be rebuilt, and these fail if bosboot cannot run successfully. These tasks include applying certain software patches and using the mksysb utility to create a system backup. To address this limitation, the pprootdev tool provides a fix option that corrects the configuration so bosboot can work. Run pprootdev fix before undertaking any administrative task that runs bosboot. This corrects the configuration for bosboot but does not change the PowerPath boot switch; the next system boot still uses PowerPath. You need run pprootdev fix only once after a system has booted using PowerPath; after that, bosboot should function correctly until the system is booted again.
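The "run pprootdev fix first" discipline can be sketched as a small wrapper. This is only an illustrative sketch, assuming the pprootdev command behaves as documented in this chapter; the wrapper name is hypothetical.

```shell
# Minimal sketch: run "pprootdev fix" once, then the bosboot-driven task.
# Usage (hypothetical): run_after_pprootdev_fix mksysb /dev/rmt0
run_after_pprootdev_fix() {
    pprootdev fix || return 1   # make the configuration acceptable to bosboot
    "$@"                        # then run the administrative task
}
```

Because pprootdev fix need only run once per boot, a wrapper like this is convenient for scripted maintenance windows.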
Setting Up a PowerPath Boot Device
Chapter 3
PowerPath in an HACMP Cluster
This chapter describes how to install and configure PowerPath in an HACMP cluster. For more detailed information about clustering, refer to the Symmetrix High Availability Environment Product Guide or the Installation Roadmap for FC-Series Storage Systems. This chapter covers the following topics:
- Installing PowerPath and HACMP on New Hosts
- Integrating HACMP into a PowerPath Environment
- Integrating PowerPath into an Existing HACMP Cluster
3. On each remaining host in the cluster:
   a. Install PowerPath (refer to Chapter 1, Installing PowerPath on an AIX Host). If any hdisk attached to the host does not have a PVID or has a different PVID on different hosts, run rmdev on that hdisk. Then run the /usr/lpp/Symmetrix/bin/emc_cfgmgr script, followed by powermt config, to configure the devices for the host. Do not define any volume groups. Instead, you will import the volume groups from the host on which you installed PowerPath above.
   b. Use the smit importvg command to import each volume group identified in step 2.b.
   c. Use the smit chvg command to change the auto activation status of each volume group that you imported in step 3.b from yes to no.
   d. Install HACMP, following the instructions in the relevant AIX HACMP documentation. Configure HACMP to use the volume groups imported in step 3.b.
4. On all hosts: Start cluster services, using the smit clstart command. The volume groups and the underlying PowerPath hdiskpower devices are now under the control of the HACMP software.
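Steps 3.b and 3.c use SMIT, but the equivalent command-line operations can be sketched as below. This assumes the standard AIX importvg and chvg commands; the volume group and device names are hypothetical, and the helper only prints the commands it would run.

```shell
# Sketch of steps 3.b-3.c for one shared volume group: import it on the
# passive node, then turn off automatic activation at boot (-a n).
import_shared_vg() {
    vg="$1"; disk="$2"
    echo "importvg -y $vg $disk"   # import by any member hdiskpower
    echo "chvg -a n $vg"           # HACMP, not init, should vary it on
}

import_shared_vg datavg1 hdiskpower4
# → importvg -y datavg1 hdiskpower4
# → chvg -a n datavg1
```

Repeat the pair for each volume group identified in step 2.b.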
1. On all hosts in the cluster:
   a. Make sure cluster services are running on each host in the cluster. If necessary, use the HACMP Start Cluster Services SMIT screen to start HACMP on a host.
   b. Download or copy the PowerPath convert script onto each host in the cluster. You can download the convert script from the EMC ftp site (ftp.emc.com), in the directory /pub/symm3000/aix/HACMP/ha4.3. The convert script runs the /usr/lpp/Symmetrix/bin/emc_cfgmgr script and the powermt config command.
   c. Use the HACMP Change/Show Cluster Events SMIT screen to configure the PowerPath convert script as a pre-event to the node_down_remote event.
   d. Use the HACMP Synchronize Cluster Resources option of the HACMP Cluster Resources SMIT screen to synchronize the cluster topology across all hosts in the cluster.
2. On each passive host (that is, each host that does not own cluster resources):
   a. Install PowerPath (refer to Chapter 1, Installing PowerPath on an AIX Host).
Do not run powermt config.
   b. Connect the new paths between the storage system and the host.
   c. Vary off all volume groups on the host except rootvg. If a file system or application is using these volume groups, unmount the file system or stop the application before varying off the volume groups.

When setting up new paths to existing devices, any other host that also has access to those devices cannot have SCSI reserves on them; therefore, any active volume groups on those devices must be varied off. Any applications that open the raw devices must close them. Any activity on a different host that would prevent the local host from configuring the local hdisks and reading the PVIDs from the device must be stopped before trying to configure the hdisks for PowerPath on the local host.

3. On the active host: Use the HACMP Stop Cluster Services SMIT screen to perform a shutdown of the cluster services in takeover mode, which fails over the cluster resources to a passive host. During failover, the convert script configures paths and hdiskpower devices. HACMP detects the hdiskpower devices and varies on the volume groups.
4. On each passive host that takes over the cluster resources, repeat step 3, so that all passive nodes are configured.
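The vary-off in step c can be sketched as a filter over lsvg -o-style output (one active volume group per line). This is a dry-run sketch that prints the varyoffvg commands rather than running them; remember to unmount file systems first, as the step requires.

```shell
# Sketch of step c: vary off every active volume group except rootvg.
varyoff_all_but_rootvg() {
    while read -r vg; do
        [ "$vg" = "rootvg" ] && continue   # never vary off rootvg
        echo "varyoffvg $vg"
    done
}

printf 'rootvg\ndatavg\nappvg\n' | varyoff_all_but_rootvg
# → varyoffvg datavg
# → varyoffvg appvg
```

On a real host you would feed it `lsvg -o` and run the printed commands after stopping the applications that use those groups.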
5. On the original active host:
   a. Install PowerPath (refer to Chapter 1, Installing PowerPath on an AIX Host).
Do not run powermt config.
   b. Connect the new paths between the storage system and the host.
   c. Vary off all volume groups on the host except rootvg. If a file system or application is using these volume groups, unmount the file system or stop the application before varying off the volume groups.
   d. Use the HACMP Start Cluster Services SMIT screen to fail over cluster resources to this host. During failover, the convert script configures paths and hdiskpower devices. HACMP detects the hdiskpower devices and varies on the volume groups.
6. On all hosts: Make sure cluster services are running on each host in the cluster. If necessary, use the HACMP Start Cluster Services SMIT screen to start HACMP on a host.
7. On the original active host:
   a. Use the HACMP Change/Show Cluster Events SMIT screen to remove the PowerPath convert script as a pre-event to the node_down_remote event.
   b. Use the HACMP Synchronize Cluster Resources option of the HACMP Cluster Resources SMIT screen to synchronize the cluster topology across all hosts in the cluster.
Chapter 4
Removing PowerPath from an AIX Host
This chapter describes how to remove PowerPath from an AIX host. The chapter covers the following topics:
- Before Removing PowerPath
- Removing PowerPath
- After Removing PowerPath
- When a Storage System Device Is the Boot Device
If this error occurs, close the application that is using the hdiskpower device and repeat the uninstall.

If you are removing PowerPath from the host entirely (that is, you are not reinstalling PowerPath after completing the removal procedure), disconnect all duplicate physical connections between the host and the storage system, leaving a single path. In addition, reconfigure any switches so devices appear only once.

Check the Powerlink Web site (http://powerlink.emc.com) for the most current information. We update the PowerPath Release Notes periodically and post them on the Powerlink Web site.
Removing PowerPath
To remove PowerPath, you can use either command line entries or the SMIT utility. This section describes both procedures.
To remove PowerPath using command line entries:
1. Log in as root.
2. Remove the PowerPath software. Enter:
installp -u EMCpower
PowerPath is now removed from the host. Refer to After Removing PowerPath on page 4-5.
Using SMIT
The SMIT procedure described in this section assumes you are running the X Window System version of SMIT. You can use the tty version of SMIT if you substitute the appropriate tty SMIT commands. The SMIT commands shown in this procedure correspond to AIX 4.3 and AIX 5.
3. Select Software Installation and Maintenance, then select Software Maintenance and Utilities.
4. Select Remove Installed Software. The Remove Installed Software dialog box opens.
5. Press F4 to open the Multi-select list of installed software.
6. Use F7 to choose entries starting with EMCpower, and then press ENTER.
7. When prompted, confirm that you want to remove the software. The screen displays information about the removal, ending with:
+----------------------------------------------------------+
                        Summaries:
+----------------------------------------------------------+

Installation Summary
--------------------
Name                         Level    Part  Event      Result
--------------------------------------------------------------
EMCpower.consistency_grp     3.0.0.0  USR   DEINSTALL  SUCCESS
EMCpower.multi_path          3.0.0.0  USR   DEINSTALL  SUCCESS
EMCpower.multi_path_clariio  3.0.0.0  USR   DEINSTALL  SUCCESS
EMCpower.base                3.0.0.0  USR   DEINSTALL  SUCCESS
8. From the EXIT menu, select EXIT SMIT. PowerPath is now removed from the host. Refer to After Removing PowerPath on page 4-5.
If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center.
Chapter 5
PowerPath Administration on AIX
This chapter discusses PowerPath issues and administrative tasks specific to AIX. Throughout this chapter, many procedural steps use powermt commands. For detailed descriptions of these commands, refer to the PowerPath Product Guide. This chapter covers the following topics:
- PowerPath Feature Summary
- emc_cfgmgr Script
- PowerPath hdiskpower Devices
- Bringing hdiskpower-based BCV Symmetrix Logical Devices Online
- Importing a Volume Group from a Remote Host
- Changing the Target/LUN Address of a Storage System Logical Device
- Adding New Devices to an Existing Configuration
- Replacing an HBA that PowerPath Is Using Online
- Troubleshooting
- Reconfiguring PowerPath Devices Online
- Removing Logical Devices from PowerPath Configuration
- SMIT Screens
- Error Messages
PowerPath Feature Summary on AIX

Feature                         Supported on AIX?
-------------------------------------------------
I/O load balancing
I/O failover
IOCTL load balancing
IOCTL failover
Install without reboot
Upgrade without reboot
Upgrade without uninstall
Deinstall without reboot
Boot from PowerPath device
Cluster support
Fibre Channel support
powermt utility
powercf utility
Enterprise consistency groups
Add PowerPath devices online    SCSI only
emc_cfgmgr Script
PowerPath requires that an hdisk be configured for each logical path it will use to access a storage system logical device. Under certain circumstances, however, AIX does not configure an hdisk for each logical path to a storage system logical device. Suppose, for example, you attach four new SCSI cables to an AIX host. Each cable addresses the same four storage system logical devices, and each of those devices at one time was part of a volume group and is configured with a PVID (which is written on the disk). You then reboot the host. When AIX boots, it performs device discovery on those new SCSI buses in one step. When it sees two or more devices with the same PVID, AIX creates only one hdisk. As a result, there is not an hdisk for each logical path; there will be only four new hdisks, even though there are 16 new logical paths. To ensure that hdisks are configured correctly for PowerPath, PowerPath for AIX provides the script /usr/lpp/Symmetrix/bin/emc_cfgmgr. The emc_cfgmgr script invokes the AIX cfgmgr tool to probe each HBA separately, so the configuration program restarts before it gets to a situation where it might be confused by disks that appear to be the same. After emc_cfgmgr executes, a storage system hdisk is configured for each device on each path.
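The PVID collision described above is easy to spot in lspv output, where a multipathed device shows the same PVID on several hdisks. The filter below is a sketch; the sample lines imitate lspv's hdisk/PVID/volume-group columns with hypothetical values.

```shell
# Sketch: list PVIDs that appear on more than one hdisk in "lspv"-style
# input (column 2 is the PVID; "none" means the disk has no PVID).
duplicate_pvids() {
    awk '$2 != "none" { n[$2]++; d[$2] = d[$2] " " $1 }
         END { for (p in n) if (n[p] > 1) print p ":" d[p] }'
}

printf 'hdisk2 0004f29ab1 rootvg\nhdisk6 0004f29ab1 None\nhdisk3 none None\n' \
    | duplicate_pvids
# → 0004f29ab1: hdisk2 hdisk6
```

Seeing a PVID repeated across hdisks is expected with multiple paths; it is the case emc_cfgmgr exists to handle at configuration time.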
PowerPath devices are named hdiskpowerx, where x is the disk number. During installation, PowerPath creates an hdiskpower device for every logical device configured for the AIX host. After PowerPath is installed, both hdisk and hdiskpower devices exist on the host. The hdiskpower devices reside on top of the hdisk devices. You can run powermt display dev=all to determine the correspondence between PowerPath hdiskpower devices and AIX hdisk devices. For details, refer to the PowerPath Product Guide.

Once PowerPath is installed, applications should direct I/O to hdiskpower devices. Using hdiskpower devices provides the PowerPath load-balancing and path failover functionality. PowerPath then selects the best path (hdisk) to handle the I/O. During installation, PowerPath migrates existing volume groups that use storage system hdisks to PowerPath hdiskpower devices. You need not reconfigure existing volume groups after installing PowerPath. If you have an application that accesses AIX hdisks directly rather than through a volume group (a DBMS, for example), you must reconfigure that application to use PowerPath hdiskpower devices if you want PowerPath load-balancing and path failover functionality. If an application does not access an hdisk directly, you need not reconfigure the application for PowerPath. When defining new volume groups, use PowerPath hdiskpower devices, and not AIX hdisk devices. If you add an application to your system that would typically access hdisks directly, configure the application to use hdiskpower devices instead. Although the underlying hdisk devices remain after PowerPath is installed, EMC recommends that you not use them for normal I/O,
because they might interfere with one another. It might not be possible to open hdisk devices if the parent hdiskpower device is open. Device reservations on the hdisk can interfere with device reservations on hdiskpower devices. Applications that use the SYMAPI cannot use both hdisk and hdiskpower devices.
AIX can boot from a PowerPath hdiskpower device. Using a PowerPath hdiskpower device as the boot device provides load balancing and path failover for the boot device.

A PVID is a unique number written on the first block of the device. The Logical Volume Manager (LVM) uses this number to identify specific disks. When a volume group is created, the member devices of the group are simply a list of PVIDs. The LVM does not read each device when searching for member devices of a volume group; instead, it expects the PVIDs to be saved in the ODM, and it uses the ODM attribute when determining which device to open. The PVID for each device is stored in the ODM when the device is configured. When a device is made Available (both at device creation and when the device moves from the Defined state), the configuration program tries to read the first block of the device. If it succeeds and the first block contains a valid PVID, the PVID value is saved as an attribute in the ODM for that device. Once the PVID is set in the ODM, it can be seen in the output of the lspv command. In a configuration with multiple paths to the same logical devices, multiple hdisks show the same PVID in the output of lspv. When the LVM needs to open a device, it picks the first hdisk in the list with the matching PVID.
PVIDs
The PVID for an hdiskpower device is set essentially the same way as an hdisk, but with an extra step or two. When an hdiskpower device is made Available, the configuration program tries to open the device and read the first block. Several conditions can prevent this read from succeeding, including:
- There is a SCSI reservation on the device. This is usually caused by an active volume group using one of the hdisk paths on the local machine or varied on from a remote host.
- hdisk paths to the hdiskpower are marked dead because of a deleted hdisk device. This can prevent the configuration program from opening the device and reading the first block.
These failure conditions happen primarily when PowerPath is being configured long after system boot and other programs are using hdisk devices on the local machine. If the hdiskpower configuration program cannot read the first block on the device, it cannot determine the PVID and will not be able to store it in the ODM for the hdiskpower device. When the configuration program for the hdiskpower device reads and stores the PVID, it also removes the PVID from the ODM for the corresponding hdisk devices. This is done so the LVM will use the hdiskpower devices instead of the hdisks and take advantage of PowerPath's functionality. When configuring PowerPath devices, keep in mind that:
- Deleting all hdiskpower devices does not erase PowerPath's knowledge of which hdisks correspond to paths to logical devices. To cause PowerPath to completely rebuild its configuration, you must unconfigure the powerpath0 device.
- hdisks need not be deleted to make them redo their PVID processing. They can be unconfigured by running rmdev -l hdisk# and reconfigured by rerunning cfgmgr on the bus or running mkdev -l hdisk#.
- To have PVIDs on hdiskpower devices, you need only put the hdisks into the Available state. You need not delete them, and you need not first get the PVID to appear in lspv output. You do, however, need to ensure that the associated path hdisks are not in use and the device is not reserved.
To bring hdiskpower-based BCV Symmetrix logical devices online:
1. Use the EMC management tool of your choice to split the BCV (and make it ready).
2. Use mkbcv to bring the BCV hdisks to the Available state.
3. Run powermt config.
4. Run powermt restore. If errors are reported, PowerPath's configuration has been changed. Verify that all paths are functioning, and then run powermt check to remove all dead hdisks. Then rerun powermt config. You should now be able to run powermt restore without errors.
5. Verify that the expected PVIDs are assigned to hdiskpower devices in lspv output. If they are not, ensure that the corresponding hdisks are not in use or reserved (locally or remotely). Then unconfigure the corresponding hdiskpower devices (rmdev -l hdiskpower#) and reconfigure them (mkdev -l hdiskpower#). If the expected PVID is still not set, the device could not be accessed, due to either path failures or a conflict on the device.
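The rmdev/mkdev recovery in step 5 is a recurring pattern in this chapter: unconfigure the hdiskpower device, then reconfigure it so its PVID processing runs again. A dry-run sketch, with hypothetical device numbers:

```shell
# Sketch of step 5's recovery: print the unconfigure/reconfigure pair for
# each hdiskpower number given, instead of running the AIX commands.
recycle_hdiskpower() {
    for n in "$@"; do
        echo "rmdev -l hdiskpower$n"   # unconfigure (keeps the definition)
        echo "mkdev -l hdiskpower$n"   # reconfigure; redoes PVID processing
    done
}

recycle_hdiskpower 4 7
# → rmdev -l hdiskpower4
# → mkdev -l hdiskpower4
# → rmdev -l hdiskpower7
# → mkdev -l hdiskpower7
```

Run the printed commands only after confirming the underlying hdisks are not in use or reserved.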
Troubleshooting
This section describes some problems you might encounter and suggests how to resolve them.

Problem: You see the following error message:
A device is already configured at this location
Cause: You cannot configure a defined hdisk if it has the same connection string (in lsdev output) as the corresponding hdiskpower and the hdiskpower device is in the Available state.

Solution: Run mkdev -l hdiskpower# for the corresponding hdiskpower device. This either changes the connection string for the hdiskpower device or unconfigures the hdiskpower device to allow the hdisk to
be configured. If the condition exists for multiple hdisks, you can run powermt config instead.

Problem: hdisk paths are marked as failed.

Cause: If you delete an hdisk (using rmdev -dl hdisk#) before removing it from PowerPath's configuration, PowerPath marks the hdisk paths as failed because it can no longer access the hdisk it expects to find. In some cases, an hdisk is present, but it points to the wrong storage system logical device.

Solution: To correct this situation:
1. Run powermt restore to test, and mark dead, all paths that are missing or point to the wrong logical device.
2. Run powermt check. When prompted to remove a dead path, respond with a to remove all dead paths.
3. Run powermt config to configure all hdisks that might be pointing to storage system logical devices different from the devices PowerPath is aware of.

Problem:
powermt display dev=all shows all paths as dead or unknown.
Cause: Deleting and remaking hdisk devices while the powerpath0 device is in the Available state can put PowerPath in a state where it has incorrect path information for hdiskpower devices. The powermt restore command cannot restore these paths, because they no longer refer to the correct storage system logical device.

Solution: To correct this situation:
1. Run powermt restore.
2. Run powermt check. When prompted to remove a dead path, respond with a to remove all dead paths.
3. Run powermt config.
4. Verify that an hdisk is configured for each connection and device. Refer to Before You Install on page 1-2. If an hdisk is not configured, complete the procedure to correct the hdisk configuration and then run powermt config again.
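A quick alive/dead tally before and after the restore/check/config sequence makes it obvious whether the recovery worked. The filter below is a sketch over simplified powermt display-style path lines; the sample format is an assumption, not the exact output.

```shell
# Sketch: count alive vs. dead path lines in "powermt display dev=all"-style
# input (one path per line, state word at the end).
path_states() {
    awk '/ alive/ { a++ } / dead/ { d++ }
         END { printf "alive=%d dead=%d\n", a, d }'
}

printf '  0 scsi0 hdisk2 active alive\n  1 scsi1 hdisk6 active dead\n' | path_states
# → alive=1 dead=1
```

After step 3, rerunning the tally should report dead=0; any remaining dead paths point back to step 4's hdisk configuration check.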
- Adding or removing HBAs
- Adding, removing, or changing storage system logical devices
- Changing the cabling routes between HBAs and storage system ports
- Adding or removing storage system interfaces
To reconfigure PowerPath devices:
1. Make sure all physical device connections are in place.
2. Run the /usr/lpp/Symmetrix/bin/emc_cfgmgr script to ensure that hdisks are configured for each path. This script invokes the AIX cfgmgr tool to probe each adapter bus separately. After it has run, there should be a storage system hdisk configured for each device on each path.
3. Test all configured paths. Enter:
powermt restore
5. Configure new devices and/or paths that were added to the system configuration. Enter:
powermt config
To remove logical devices from the PowerPath configuration:
1. Run powermt display dev=all to confirm the configuration of the logical device(s) from which paths will be removed. Check the number of existing paths. The path state should be alive for known good paths and dead for known bad paths. If there is a problem, correct it before proceeding.
2. Identify the physical paths to be removed or zoned out, and confirm that there are other paths to the affected logical devices. Otherwise, applications using those logical devices could experience I/O errors when you proceed.
3. Run powermt display dev=all to identify the PowerPath adapter number associated with the paths to be removed. In complex topologies, there can be multiple paths on an HBA.
4. Run powermt remove, specifying on the command line:
   - The HBA, to remove the entire HBA.
   - The device, to remove all paths to the specified logical device.
   - Both HBA and device, to remove a single path to the specified logical device.
5. Run rmdev to remove HBA devices or specific hdisks as needed. Refer to the AIX administration documentation for specific procedures and limitations.
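Step 4's three invocation forms can be sketched as below. The helper only assembles and prints the powermt remove command line; the HBA number and device name are hypothetical.

```shell
# Sketch of step 4: build the powermt remove command from an optional HBA
# number and/or device name (dry run; prints instead of executing).
powermt_remove_cmd() {
    hba="$1"; dev="$2"
    cmd="powermt remove"
    [ -n "$hba" ] && cmd="$cmd hba=$hba"
    [ -n "$dev" ] && cmd="$cmd dev=$dev"
    echo "$cmd"
}

powermt_remove_cmd 3 ""            # → powermt remove hba=3
powermt_remove_cmd "" hdiskpower2  # → powermt remove dev=hdiskpower2
powermt_remove_cmd 3 hdiskpower2   # → powermt remove hba=3 dev=hdiskpower2
```

The first form removes every path on the HBA, the second every path to the device, and the third the single path at their intersection, matching the three bullets above.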
6. Inspect the new PowerPath configuration:
   - Run powermt display. The output should show fewer total paths than before, and all paths should have a state of optimal.
   - Run powermt display dev=all. All remaining paths associated with the affected logical devices should be displayed with a state of alive.
   Correct any issues detected above before saving the PowerPath configuration or using the new logical devices.
7. Run powermt save to save the new configuration.
SMIT Screens
PowerPath for AIX provides a set of System Management Interface Tool (SMIT) screens that implement powermt functionality. Using a SMIT screen means you do not need to know the powermt command syntax. To access the PowerPath for AIX SMIT screens, enter smit, press ENTER, and select Devices, then PowerPath Disk. The PowerPath Disk SMIT screen opens, from which you can select the desired option.
Error Messages
In AIX, PowerPath provides error notification through the AIX errlog/errpt facility. The powermt utility reports errors to standard error (stderr). Refer to the PowerPath Product Guide for a complete list of PowerPath error messages. Refer to AIX documentation for information on the AIX errlog/errpt facility.
PART 2
PowerPath on Tru64 UNIX
This section discusses PowerPath on a Tru64 UNIX host.

Chapter 6, Installing PowerPath on a Tru64 UNIX Host
This chapter describes how to install and upgrade PowerPath.

Chapter 7, Configuring a PowerPath Boot Device on Tru64 UNIX
This chapter discusses configuring a PowerPath boot device.

Chapter 8, PowerPath in a Tru64 UNIX Cluster
This chapter describes how to plan for and install PowerPath and TruClusters.

Chapter 9, Uninstalling PowerPath on a Tru64 UNIX Host
This chapter describes how to uninstall PowerPath in simple and clustered configurations.

Chapter 10, PowerPath Administration on Tru64 UNIX
This chapter discusses PowerPath for Tru64 UNIX issues and administrative tasks.
Chapter 6
Installing PowerPath on a Tru64 UNIX Host
This chapter describes how to install and upgrade PowerPath on a Tru64 UNIX host. It also discusses upgrading the operating system. The chapter covers the following topics:
- Before You Install
- Installation Procedure
- After You Install
- Reinstalling PowerPath
- Upgrading PowerPath
- Installing a Patch
- Upgrading Tru64 UNIX
- File Changes Caused by PowerPath Installation
Before you install PowerPath 3.0:

- Locate the PowerPath installation CD and, if you are installing PowerPath for the first time, your 24-digit registration number. The registration number is on the License Key Card delivered with the PowerPath media kit. (If you are upgrading from an earlier version of PowerPath, PowerPath will use your old key.)
- Verify that your environment meets the requirements in:
  - Chapter 3, PowerPath Configuration Requirements, in the PowerPath Product Guide. That chapter describes the host-storage system interconnection topologies that PowerPath supports.
  - The Environment and System Requirements section of the EMC PowerPath for UNIX Version 3.0 Release Notes. That section describes minimum hardware and software requirements for the host and supported storage systems. We update the release notes periodically and post them on http://powerlink.emc.com. Before you install PowerPath, check the Powerlink Web site, or contact your EMC customer support representative, for the most current information.
- Save your kernel and back up your system.
- Configure the Tru64 UNIX host so the storage system disk devices are incorporated into Tru64 UNIX. The storage system devices must be addressed on the Fibre Channel or SCSI bus using Fibre Channel or SCSI target IDs and logical unit numbers (LUNs). Make sure the ports on the storage system are online. Refer to the Symmetrix Open Systems Environment Product Guide, Volume I.
Installation Procedure
You can install PowerPath directly from the CD-ROM. The following sections describe how to:
- Mount the CD-ROM
- Install the software

If you are upgrading from an earlier release of PowerPath, see Upgrading PowerPath on page 6-7 before you begin the installation.
To mount the CD-ROM:
1. Log in as root.
2. Insert the PowerPath installation CD into the CD-ROM drive.
3. Create the directory /cdrom to be the mount point for the CD-ROM. Enter:
mkdir /cdrom
For example: On V4.x, if the CD-ROM drive is on the c partition of rz5, enter:
mount /dev/rz5c /cdrom
Installing the Software
To install PowerPath:
1. Mount the CD-ROM in the CD-ROM drive (refer to Mounting the CD-ROM on page 6-3).
2. Enter:
setld -l ./EMC_PowerPath_30/kit
You may choose one of the following options:

1) ALL of the above
2) CANCEL selections and redisplay menus
3) EXIT without installing any subsets

Estimated free diskspace(MB) in root:510.2 usr:1904.7
Enter your choices or press RETURN to redisplay menus.
Choices (for example, 1 2 4-6):
You are installing the following optional subsets:

Estimated free diskspace(MB) in root:510.2 usr:1904.7
Is this correct? (y/n):
4. Enter y. The screen displays the loading and configuration status of each subset sequentially, and then asks if you have a new registration key to enter:
=========== EMC PowerPath Registration =========== Do you have a new registration key or keys to enter?[n]
5. Are you upgrading from an earlier release of PowerPath, or installing PowerPath for the first time?
   - If you are upgrading, enter n and continue with After You Install on page 6-6. PowerPath will convert your existing 12-character license key to a 24-character key.
   - If you are installing PowerPath for the first time, enter y and continue with step 6. The screen displays the following output:
Enter the registration key(s) for your product(s), one per line, pressing Enter after each key. After typing all keys, press Enter again. Registration key:
6. Enter the 24-character alphanumeric sequence found on the License Key Card delivered with the PowerPath media kit and press ENTER. If you enter a valid registration key, the screen displays the following output:
Key successfully installed
Registration key:

If you enter an invalid registration key, the screen displays an error message and prompts you to enter a valid registration key. See the PowerPath Product Guide for a list of possible error messages returned by the emcpreg license registration utility.
7. Press ENTER. The screen displays the following output:
1 key(s) successfully registered.
The installation process then rebuilds the kernel. When installation is complete, the screen displays output like the following:
To activate your installed driver you must reboot node_name Installation of the subset is complete.
PowerPath is now installed on the host. You must reboot the host and perform other administrative tasks before PowerPath can run on the host. Refer to After You Install on page 6-6 for postinstallation information and instructions.
4. Verify that the driver can see the attached devices. Enter:
powermt display
5. If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center. 6. Check the EMC support Web site or the PowerPath anonymous FTP site for any patches to PowerPath 3.0, and install any required patches. Refer to Appendix A, PowerPath Patches, for information on identifying, obtaining, and installing patches.
Reinstalling PowerPath
If files are accidentally destroyed or changed, you may need to reinstall PowerPath. To reinstall, follow the procedure described in Installing the Software on page 6-4, with one exception: in step 2, enter the following command:
setld -l ./EMC_PowerPath_30/kit POWERCGL300 POWERCLI300 POWERDOC300 POWERDRV300
This form of the setld command installs all four of PowerPath's subpackages, even if they are already installed. Alternatively, you can uninstall PowerPath (refer to Chapter 9, Uninstalling PowerPath on a Tru64 UNIX Host) and then reinstall.
Upgrading PowerPath
We update the PowerPath Release Notes periodically and post them on http://powerlink.emc.com. Before you upgrade PowerPath, check the Powerlink Web site for the most current information.
You can upgrade PowerPath for Tru64 UNIX without uninstalling a previous version. Upgrading to PowerPath 3.0 from 2.x converts the existing 12-character license key in the /etc/powerpath_registration file to a 24-character key in the /etc/emcp_registration file. You need not reenter license information.

To upgrade to PowerPath 3.0:
1. Follow the procedures in Installation Procedure on page 6-3.
You need not reenter your registration key.
2. Reboot the host.
3. Run powermt save to ensure the configuration file is consistent with the PowerPath 3.0 format.
If you upgrade with some paths dead and subsequently make those paths alive, first run powermt config to configure the paths and then run powermt save to update the configuration information.
Installing a Patch
Every patch release is accompanied by a Readme file that describes how to install the patch. Appendix A, PowerPath Patches, describes how to obtain patches and their accompanying ReadMe files.
./usr/opt/POW300/driver/sysconfigtab
./usr/opt/POW300/man
./usr/opt/POW300/man/man1
./usr/opt/POW300/man/man1/emcpreg.1
./usr/opt/POW300/man/man1/powermt.1
./usr/opt/POW300/man/man7
./usr/opt/POW300/sbin
./usr/opt/POW300/sbin/cgmt_V40
./usr/opt/POW300/sbin/cgmt_V50
./usr/opt/POW300/sbin/cgmt_V51
./usr/opt/POW300/sbin/emcpreg_V40
./usr/opt/POW300/sbin/emcpreg_V50
./usr/opt/POW300/sbin/emcpreg_V51
./usr/opt/POW300/sbin/powermt_V40
./usr/opt/POW300/sbin/powermt_V50
./usr/opt/POW300/sbin/powermt_V51
./usr/opt/POW300/shlib
./usr/opt/POW300/shlib/libcg.so_V40
./usr/opt/POW300/shlib/libcg.so_V50
./usr/opt/POW300/shlib/libcg.so_V51
./usr/opt/POW300/shlib/libmp.so_V40
./usr/opt/POW300/shlib/libmp.so_V50
./usr/opt/POW300/shlib/libmp.so_V51
./usr/opt/POW300/shlib/libpn.so_V40
./usr/opt/POW300/shlib/libpn.so_V50
./usr/opt/POW300/shlib/libpn.so_V51
./usr/sbin/emcpreg
7
Configuring a PowerPath Boot Device on Tru64 UNIX
This chapter discusses configuring a PowerPath device as the boot device for a Tru64 UNIX host. The chapter covers the following topic:
To set up a storage system logical device as a boot/root logical device, you must first install the operating system. The system prompts you to identify the locations of the new root file system and the /usr tree. Specify a storage system logical device as the root device, using the normal disk device name (rz## or dskN). Tru64 UNIX requires that the root file system be located on partition a of a logical device; you may need to adjust the partition table of that logical device to ensure sufficient space. An entire logical device can be used for the root file system by enlarging the a partition to its maximum limits. The /usr file system location can be specified independently. It can be on another partition of the same logical device as the root file system (space permitting), a different storage system logical device, or a non-storage-system logical device.
After configuring the operating system, install PowerPath (refer to Chapter 6, Installing PowerPath on a Tru64 UNIX Host). During normal system startup, PowerPath becomes operational before the remount of the root file system and before the mount of local file systems. Before PowerPath becomes operational, path failover and load balancing are not available. Therefore, the path represented by the boot device name and the root file system name in /etc/fstab must function normally. You can use powermt to manage the paths to the root and /usr logical devices, keeping in mind that disabling or degrading such paths could cause the system to deadlock and hang.
8
PowerPath in a Tru64 UNIX Cluster
This chapter describes how to install and configure PowerPath in a TruCluster. The chapter covers the following topics:
Planning Considerations
In TruCluster 4.0.x, PowerPath is supported in clustered environments using:
x Direct SCSI connections to the storage system
x Fibre Channel switches

When adding PowerPath to an existing cluster installation or designing a new cluster installation including PowerPath, configure two or more SCSI bus adapters on each host with cabling to the storage system. This provides PowerPath with multiple hardware paths to manage. Refer to the Symmetrix Open System Environment Product Guide and the Symmetrix High Availability Product Guide for Symmetrix cabling instructions.

In an ASE cluster (Availability cluster) or a Production cluster, Tru64 UNIX requires that the SCSI adapters that connect to external (shared) devices have the same logical number on all hosts. This results in the names of shared devices being the same on all hosts. For example, a storage system logical device accessed as /dev/rz73 on one cluster host would be accessed by the same name on all other cluster hosts. To meet this requirement, you may have to reassign the SCSI logical bus numbers. The cluster installation procedure provides a way to do this.
The preceding example uses Tru64 UNIX V4.0x device names. See Device Naming on page 10-3 for differences between naming in Tru64 UNIX V4.0x and Tru64 UNIX V5.x.
Review Tru64 cluster documentation carefully with regard to bus configuration and renumbering. Ideally, you will establish the final bus configuration before installing PowerPath. PowerPath then creates its initial power device configuration based on this bus configuration. If PowerPath is already installed and has been running when the bus numbering is changed, PowerPath sees that some devices have disappeared while new devices have
appeared and reports each such instance. This is not a problem, since PowerPath automatically adjusts its database to match the new configuration. Still, you should be aware that log and console messages will occur.
See Tru64 cluster documentation for specific requirements regarding cluster cabling configurations. TruCluster 4.x clusters use shared multi-initiator SCSI buses, with Y-cables or SCSI hubs; TruCluster 5.x clusters can use direct-connect buses. To summarize, each node in a TruCluster configuration running PowerPath will have two or more SCSI adapters for connection to the storage system. During cluster installation, the logical number of each bus will be made the same on all hosts.
With PowerPath installed and the additional ports connected, there are two or more names for each logical device; for example, /dev/rz41 and /dev/rz49. PowerPath accepts I/O to both equivalently, but the Distributed Lock Manager may not realize the two names are equivalent. Select one name and use it consistently for all applications.
The preceding example uses Tru64 UNIX V4.x device names. See Device Naming on page 10-3 for differences between naming in Tru64 UNIX V4.x and Tru64 UNIX V5.x.
With PowerPath installed, failure of one path to a device is transparent to the Availability Manager and applications running on the node, and it does not cause an application failover to another node. PowerPath transparently routes I/O via the remaining valid paths for as long as necessary. If you need to shut down a host for hardware repair, use asemgr to relocate available services to other nodes.
To install PowerPath in a Tru64 UNIX V4.0x TruCluster:
1. On the first cluster node:
a. Configure the node as specified by the Tru64 UNIX vendor documentation.
b. Use asemgr to relocate any active available services from this node to another node that will remain in normal operation. (It is not necessary to remove this node from cluster membership.)
c. Make any necessary hardware configuration changes; for example, installing additional SCSI adapters on the host and cabling them to appropriately configured storage system ports. Use the console command show devices to verify that expected devices are being seen. Refer to the Symmetrix Open System Environment Product Guide and the Symmetrix High Availability Product Guide for cabling instructions.
2. Configure the remaining cluster members as specified by the Tru64 UNIX vendor documentation.
3. On every cluster node, one node at a time:
a. Install PowerPath. Reboot the system when so instructed.
b. Use powermt display dev=all to confirm that the same set of power devices is available on this node as on nodes which already have PowerPath installed.
c. Use asemgr to relocate services back to this node as desired.
The default cluster barrier mechanism for Tru64 UNIX V5.x TruCluster uses SCSI-3 Persistent Reservations for device locks. Persistent Reservation support requires:
x Symmetrix Enginuity microcode 5567.33.xx (or higher), with support enabled for each cluster-visible device
x TruCluster V5.1 (or later) with Patch Kit-0003 (BL17) (or later)
If you can meet both requirements for Persistent Reservations, follow the instructions in Using the Default Cluster Barrier on page 8-5. If you cannot meet one or both requirements, follow the instructions in Using an Alternate Cluster Barrier on page 8-7.

Using the Default Cluster Barrier

To integrate PowerPath into a TruCluster on Tru64 UNIX V5.x using the default cluster barrier, follow these steps:
1. Upgrade to Symmetrix microcode 5567.33.xx or higher.
2. Using the IMPL Configuration Edit Volume screen, set the PER (SCSI Persistent) flag for each Symmetrix logical device in the TruCluster V5.x configuration. Load the IMPL configuration change to enable Persistent Reservation support.
3. Upgrade the TruCluster V5.1 hosts with Patch Kit-0003 (BL17) or later.
4. Configure the first cluster node and additional cluster members as specified by Tru64 UNIX vendor documentation.
5. Add the following EMC Symmetrix device entries (SCSI and Fibre Channel) to the /etc/ddr.dbase file of every cluster member, with ubyte[0] = 8:
SCSIDEVICE
#
# Entry for Symmetrix SCSI devices
#
Type = disk
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
# ubyte[0] = 8  Disable AWRE/ARRE only, PR enabled
# ubyte[0] = 25 Disable PR & AWRE/ARRE, Enable I/O Barrier Patch resets
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 8
SCSIDEVICE
#
# Entry for Symmetrix Fibre Channel devices
#
Type = disk
Stype = 2
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 8
6. Run the ddr_config -c command on every cluster member to recompile the database.
7. Rebuild the kernel, and copy it to the boot disk. Enter:
doconfig -c node_name
cp /sys/node_name/vmunix /vmunix
8. Shut down all cluster members using the shutdown -c now command.
9. Reboot each cluster member.
10. After the cluster is running, install PowerPath using a command shell on any member. The installation needs to be done only once and applies to all nodes in the cluster.
11. Reboot the nodes one at a time, to make PowerPath operational on the full cluster.
To integrate PowerPath into a TruCluster on Tru64 UNIX V5.x using an alternate cluster barrier, follow these steps:
1. If you are running TruCluster V5.0A with Patch Kit 2 or earlier, download the file 82108_v5_0a.tar from the Compaq FTP site (http://ftp1.support.compaq.com/public/unix/). Follow the instructions in the accompanying README file. Copy the patch's .mod files to the appropriate directories and rebuild the cluster node kernels to complete the patch installation.
If you are running TruCluster V5.1 (or V5.0A with Patch Kit-003/BL17 or later), the I/O Barrier Patch functionality is already in the build. A separate patch installation is not necessary.
2. Configure the first cluster node and additional cluster members as specified by Tru64 UNIX vendor documentation.
3. Add the following EMC Symmetrix device entries (SCSI and Fibre Channel) to the /etc/ddr.dbase file of every cluster member, with ubyte[0] = 25:
SCSIDEVICE
#
# Entry for Symmetrix SCSI devices
#
Type = disk
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
# ubyte[0] = 8  Disable AWRE/ARRE only, PR enabled
# ubyte[0] = 25 Disable PR & AWRE/ARRE, Enable I/O Barrier Patch resets
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 25

SCSIDEVICE
#
# Entry for Symmetrix Fibre Channel devices
#
Type = disk
Stype = 2
Name = "EMC" "SYMMETRIX"
PARAMETERS:
TypeSubClass = hard_disk, raid
BlockSize = 512
BadBlockRecovery = disabled
DynamicGeometry = true
LongTimeoutRetry = enabled
DisperseQueue = false
TagQueueDepth = 20
ReadyTimeSeconds = 45
InquiryLength = 160
RequestSenseLength = 160
PwrMgmt_Capable = false
ATTRIBUTE:
AttributeName = "DSBLflags"
Length = 4
ubyte[0] = 25
4. Run the ddr_config -c command on every cluster member to recompile the database.
5. Rebuild the kernel, and copy it to the boot disk. Enter:
doconfig -c node_name
cp /sys/node_name/vmunix /vmunix
6. Shut down all cluster members using the shutdown -c now command.
7. Reboot each cluster member.
8. After the cluster is running, install PowerPath using a command shell on any member. The installation needs to be done only once and applies to all nodes in the cluster.
9. Reboot the nodes one at a time, to make PowerPath operational on the full cluster.
9
Uninstalling PowerPath on a Tru64 UNIX Host
This chapter describes how to remove PowerPath from a Tru64 host. It also discusses removing PowerPath in a clustered configuration. The chapter covers the following topics:
x Before Removing PowerPath ...........................................................9-2
x Removing PowerPath........................................................................9-2
x After Removing PowerPath .............................................................9-3
x Removing PowerPath from Clustered Environments ..................9-3
Removing PowerPath
To remove PowerPath from a Tru64 UNIX system: 1. Log in as root. 2. Check for installed versions. Enter:
setld -i | grep POWER
PowerPath displays the installed components. The output is similar to the following:
POWERCGL300   installed   Symmetrix PowerPath Consistency Group Library
POWERCLI300   installed   Symmetrix PowerPath Command Line Interface
POWERDOC300   installed   Symmetrix PowerPath Documentation
POWERDRV300   installed   Symmetrix PowerPath Disk Driver
Even if you already removed PowerPath, setld -i | grep POWER displays the components. However, the status column does not display the word installed.
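Because the component names are listed either way, the status column is what matters. As a quick illustration of reading it, the sketch below pulls out only the subsets marked installed from sample text in the output format shown above; on a real host you would feed it live `setld -i | grep POWER` output instead of the here-string.

```shell
# Print only subset names whose status column reads "installed".
# The sample text below mimics the format shown in this chapter;
# it is fabricated, not captured from a real host.
setld_output='POWERCGL300 installed Symmetrix PowerPath Consistency Group Library
POWERCLI300 installed Symmetrix PowerPath Command Line Interface
POWERDOC300  Symmetrix PowerPath Documentation'
installed=$(printf '%s\n' "$setld_output" | awk '$2 == "installed" { print $1 }')
echo "$installed"
```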
3. Remove the subsets. Enter setld -d followed by the names POWERDRVxxx, POWERCLIxxx, POWERDOCxxx, and POWERCGLxxx, where xxx is the three-digit version number of the installed PowerPath software; for example:
setld -d POWERDRV300 POWERCLI300 POWERDOC300 POWERCGL300
The screen displays status as the driver is deleted from the kernel and the kernel is rebuilt.
After removing PowerPath you must reboot the host and, possibly, perform other administrative tasks. Refer to After Removing PowerPath on page 9-3.
10
PowerPath Administration on Tru64 UNIX
This chapter discusses PowerPath issues and administrative tasks specific to Tru64 UNIX. Throughout this chapter, many procedural steps use powermt commands. For detailed descriptions of these commands, refer to the PowerPath Product Guide. This chapter covers the following topics:
x PowerPath Feature Summary ........................................................10-2
x Alternative Pathing in Tru64 UNIX V5.x......................................10-2
x Boot Device Support........................................................................10-3
x Device Naming.................................................................................10-3
x Reconfiguring PowerPath Devices Online...................................10-4
x Load Balancing and Failover on NUMA Configurations ..........10-6
x Error Messages .................................................................................10-8
PowerPath Feature Summary on Tru64 UNIX

[Table: indicates, for each PowerPath feature, whether it is supported on Tru64 UNIX. Features listed: I/O load balancing, I/O failover, IOCTL load balancing, IOCTL failover, install without reboot, upgrade without reboot, upgrade without uninstall, deinstall without reboot, boot from PowerPath device, cluster support, Fibre Channel support, powermt utility, powercf utility, enterprise consistency groups, and add PowerPath devices online. Qualifiers in the original table include "Tru64 UNIX V5.x (SCSI and Fibre Channel)", "Tru64 UNIX V4.0.x (SCSI only)", and "Tru64 UNIX V5.x only"; the per-feature support values are not recoverable from this copy.]
Device Naming
PowerPath for Tru64 UNIX supports native devices only; it does not support PowerPath pseudo devices.
In Tru64 UNIX V5.x, a native device takes the form dsk#. This is a native device in that it is provided by the operating system for use with applications. It is also similar to a PowerPath pseudo device in that the name does not represent a single path but rather a path set for a specific storage system logical device. The key point to remember with Tru64 UNIX V5.x is that you use the device name provided by the operating system and not a pseudo device name provided by PowerPath. In Tru64 UNIX V4, a native device describes a device special file of the following form:
rrzX#
where:
x X is null (empty) or a letter identifying the disk LUN.
x # is the rz number, calculated as (bus * 8) + target.
PowerPath does not support partition identifiers in device names. As a result, PowerPath recognizes only device special file names in the form rrzX# (for example, rrzf68).
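A worked instance of the naming formula above, with illustrative bus and target values:

```shell
# For a device on bus 8, target 4, the rz number is (bus * 8) + target.
bus=8
target=4
rz=$(( bus * 8 + target ))
echo "rz number: $rz"
```

An rz number of 68, combined with a LUN letter such as f, yields a device special file name of the form shown in the rrzf68 example above.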
x Adding or removing HBAs
x Changing the cabling routes between HBAs and storage system ports
x Adding, removing, or changing storage system logical devices
x Adding or removing storage system interfaces
Separate procedures follow for reconfiguring PowerPath devices on Tru64 UNIX V4.x and Tru64 UNIX V5.x.
To reconfigure PowerPath devices on Tru64 UNIX V5.x:
1. Make sure all physical device connections are in place.
2. Configure any new devices. Enter:
hwmgr -scan scsi
Although hwmgr -scan scsi returns immediately, the scan may continue working in the kernel; therefore, wait a few minutes before continuing. (The time needed for this command to complete is directly proportional to the number of new devices added online.)
3. List all configured devices, with the number of paths to each device. Enter:
hwmgr -show scsi
Examine the output to make sure the host sees all the devices configured on the storage system.
4. If necessary, create new device files. Enter:
dsfmgr -k
To reconfigure PowerPath devices on Tru64 UNIX V4.x:
1. Make sure all physical device connections are in place.
2. Configure any new devices. Enter:
scsimgr -scan_all
Although scsimgr -scan_all returns immediately, the scan may continue working in the kernel; therefore, wait a few minutes before continuing. (The time needed for this command to complete is directly proportional to the number of new devices added online.)
3. List all configured devices, with the number of paths to each device. Enter:
scu show edti
Examine the output to make sure the host sees all the devices configured on the storage system.
4. Remove unwanted devices. Enter:
powermt check
Autoconfiguration
Normally, PowerPath automatically configures devices. In addition, the operator can run a scan to find and configure devices, using powermt config. When the system is booted, PowerPath tries to find and configure all paths to all devices. In particular, the following steps are followed:
1. All devices and paths recorded in the /etc/powermt.custom file (if it exists) are added as specified in the file.
2. All HBAs are scanned for devices. All devices and paths found will be configured unless they conflict with information in the custom file. For example, if [bus/target/lun] 2/3/4 is recorded in
the custom file as connected to logical device 8888, but the logical device found on path 2/3/4 has an ID of 9999, then the path will be marked as dead (invalid). This is done to prevent the data corruption that could result from accessing the wrong logical device. If 9999 is the desired logical device (for example, as the result of a storage system or cable reconfiguration), then the old device must be removed manually using powermt remove. Following that, the new device will be configured in the next scan, for example, by a manual powermt config command entered by the operator. In this case, the operator should run powermt save after the new devices have been configured, to update the powermt.custom file.

During normal operation, devices which become available (for example, because a cable is plugged in) may be automatically configured if an application attempts to use them. This assumes that the device was not already configured from data in the custom file. The behavior of Tru64 UNIX V4 and V5 differs slightly in this regard:
x V4: When an application attempts to use (open) a device name which has no known device connected, PowerPath checks to see whether a device has been connected and, if so, configures it and allows the open to proceed. If other paths are already known for the device, the new path is associated with those for load balancing and failover. Other possible paths are not scanned for, however, so only the one new path is added to the known list of paths. Hence, when adding multiple paths to a running system, run powermt config to find and configure all paths to all new or existing devices.
x V5: When an application tries to use (open) a device, a check is made for any new paths that may have become available to that device. This is done regardless of whether there are existing paths. You must run powermt config, however, to configure the first or additional paths to devices that are not in active use, and to see them appear in powermt display output.
In general, there can be a delay between the physical steps taken to make paths exist (e.g., plugging in a cable) and the point at which a powermt config command will find them. This is due to HBA setup, but usually it is not more than a few seconds.
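The custom-file conflict rule described above (a path is configured only when the logical device found on it matches the one recorded in /etc/powermt.custom) can be sketched as follows. The device IDs mirror the 8888/9999 example from the text; the logic is illustrative, not PowerPath source code.

```shell
# Compare the device ID recorded for a path against the ID actually found.
recorded_dev="8888"   # logical device recorded for path 2/3/4 in powermt.custom
found_dev="9999"      # logical device actually answering on path 2/3/4
if [ "$recorded_dev" = "$found_dev" ]; then
    echo "path configured"
else
    echo "path marked dead (device ID mismatch)"
fi
```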
Error Messages
PowerPath reports errors to the /var/adm/syslog.dated/timestamp/kern.log file, where timestamp is the day when the message was written. The powermt utility reports errors to standard error (stderr). Refer to the PowerPath Product Guide for a complete list of PowerPath error messages. You can use the Tru64 UNIX syslogd command to control where the messages are reported. For details, refer to the Tru64 UNIX syslogd (1M) man page.
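As an illustration of scanning that log, the fragment below filters PowerPath-related lines out of sample kern.log text. The sample messages and the emcp match string are assumptions for demonstration only; on a real host, point grep at the current /var/adm/syslog.dated/timestamp/kern.log file instead of the here-string.

```shell
# Filter PowerPath-related lines from kern.log-style text.
# These sample lines are fabricated for illustration.
sample_log='Jan 10 12:00:01 host1 vmunix: emcp: path state change
Jan 10 12:00:02 host1 vmunix: unrelated kernel message
Jan 10 12:00:03 host1 vmunix: emcp: path restored'
printf '%s\n' "$sample_log" | grep 'emcp'
```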
PART 3
PowerPath on HP-UX
This section discusses PowerPath on an HP-UX host.

Chapter 11, Installing PowerPath on an HP-UX Host
This chapter describes how to install and upgrade PowerPath.

Chapter 12, Configuring a PowerPath Boot Device on HP-UX
This chapter describes how to configure a PowerPath device as the boot device.

Chapter 13, PowerPath in an MC/ServiceGuard Cluster
This chapter describes how to install MC/ServiceGuard on new hosts, and how to integrate PowerPath into an existing environment.

Chapter 14, Removing PowerPath from an HP-UX Host
This chapter describes how to uninstall PowerPath on HP-UX hosts.

Chapter 15, PowerPath Administration on HP-UX
This chapter discusses PowerPath for HP-UX issues and administrative tasks.
11
Installing PowerPath on an HP-UX Host
This chapter describes how to install and upgrade PowerPath on an HP-UX host. It also discusses upgrading the operating system. The chapter covers the following topics:
x Before You Install ............................................................................. 11-2
x Installation Procedure ..................................................................... 11-3
x After You Install ............................................................................... 11-5
x Upgrading PowerPath..................................................................... 11-6
x Installing a Patch .............................................................................. 11-7
x Upgrading HP-UX ........................................................................... 11-7
x File Changes Caused by PowerPath Installation ........................ 11-7
Before you install PowerPath 3.0:

x Locate the PowerPath installation CD and, if you are installing PowerPath for the first time, your 24-character registration key. The registration key is on the License Key Card delivered with the PowerPath media kit. (If you are upgrading from an earlier version of PowerPath, PowerPath will use your old key.)

x Verify that your environment meets the requirements in:
- Chapter 3, PowerPath Configuration Requirements, in the PowerPath Product Guide. That chapter describes the host-storage system interconnection topologies that PowerPath supports.
- The Environment and System Requirements section of the EMC PowerPath for UNIX Version 3.0 Release Notes. That section describes minimum hardware and software requirements for the host and supported storage systems. We update the release notes periodically and post them on http://powerlink.emc.com. Before you install PowerPath, check the Powerlink Web site, or contact your EMC customer support representative, for the most current information.

x If you run HP-UX 11.0, install the following HP-UX patches and patch bundles:
- HP-UX general release patches (March 2001 or later)
- HP-UX Hardware Enablement and critical patches (March 2001 or later)
- Quality Pack for HP-UX (March 2001 or later)

x If you run HP-UX 11i, install the following HP-UX patches and patch bundles:
- PHCO_24630
- GOLDBASE11i (June 2001 or later)
- GOLDAPPS11i (June 2001 or later)
x Ensure that the c bit is set on the Symmetrix interface. Failure to set the c bit can cause data corruption.
x Configure the HP-UX host so storage system disk devices are incorporated into HP-UX. The storage system devices must be addressed on the SCSI bus or Fibre Channel using SCSI target IDs and SCSI LUNs.
x Make sure the ports on the storage system are online.

Refer to the Symmetrix Open Systems Environment Product Guide, Volume I.
Installation Procedure
You can install PowerPath directly from the CD-ROM, or you can install from the compressed tar file distributed on the CD-ROM. The following sections describe how to:
x Mount the CD-ROM
x Install PowerPath directly from the CD-ROM
x Install PowerPath from the compressed tar file
If you are upgrading from an earlier release of PowerPath, see Upgrading PowerPath on page 11-6 before you begin the installation.
To mount the PowerPath installation CD-ROM:
1. Log in as root.
2. Insert the CD-ROM into the CD-ROM drive.
3. Mount the CD on your file system. For example, to mount the CD on /mnt, enter:
mount -F cdfs -r /dev/dsk/c#t#d# /mnt
You can use the EMC inq facility to determine the c#t#d# device for the CD-ROM drive.
Installing from the CD-ROM
To install PowerPath directly from the installation CD-ROM:
1. Mount the CD-ROM in the CD-ROM drive (refer to Mounting the CD-ROM on page 11-3).
2. Install the software. For example, if /mnt is the mount point, enter:
swinstall -x autoreboot=true -x mount_all_filesystems=false \ -s /mnt/HP/EMCpower.HP.3.0.0 EMCpower
After swinstall completes, the host is rebooted automatically. Then you must register PowerPath on the host and perform other administrative tasks. Refer to After You Install on page 11-5 for postinstallation information and instructions.
To install PowerPath from the compressed tar file:
1. Mount the CD-ROM in the CD-ROM drive (refer to Mounting the CD-ROM on page 11-3).
2. Copy the compressed tar file to the /tmp directory. For example, if /mnt is the mount point, enter:
cp /mnt/HP/EMCpower.HP.3.0.0/EMCPower.HP.3.0.0.bXXX.tar.Z /tmp
After swinstall completes, the host is rebooted automatically. Then you must register PowerPath on the host and perform other administrative tasks. Refer to After You Install on page 11-5 for postinstallation information and instructions.
a. Enter:
emcpreg -install
c. Enter the 24-character alphanumeric sequence found on the License Key Card delivered with the PowerPath media kit and press ENTER. If you enter a valid registration key, you see the following output:
Key successfully installed.
Registration key:

If you enter an invalid registration key, the screen displays an error message and prompts you to enter a valid registration key. See the PowerPath Product Guide for a list of possible error messages returned by the emcpreg utility.
3. Configure PowerPath devices. Enter:
powermt config
4. If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center.
5. Check the EMC support Web site or the PowerPath anonymous FTP site for any patches to PowerPath 3.0, and install any required patches. Refer to Appendix A, PowerPath Patches, for information on identifying, obtaining, and installing patches.
Upgrading PowerPath
We update the PowerPath Release Notes periodically and post them on http://powerlink.emc.com. Before you upgrade PowerPath, check the Powerlink Web site for the most current information.
You can upgrade PowerPath for HP-UX without uninstalling a previous version. Upgrading to PowerPath 3.0 from 2.x converts the existing 12-character license key in the /etc/powerpath_registration file to a 24-character key in the /etc/emcp_registration file. You need not reenter license information. To upgrade to PowerPath 3.0, follow the instructions and procedures in:
x Installation Procedure on page 11-3
x After You Install on page 11-5 (note that you need not reenter your registration key)
Installing a Patch
Every patch release is accompanied by a Readme file that describes how to install the patch. Appendix A, PowerPath Patches, describes how to obtain patches and their accompanying ReadMe files.
Upgrading HP-UX
You must uninstall PowerPath before you upgrade HP-UX. After you complete the operating system upgrade, reinstall PowerPath.
/usr/sbin/emcpreg
/sbin/powermt
/sbin/emcpstartup
/sbin/init.d/emcp
/sbin/init.d/emccg
12
Configuring a PowerPath Boot Device on HP-UX
This chapter describes how to configure a PowerPath device as the boot device for an HP-UX host. The chapter covers the following topic:
1. Log in as root.
2. Install PowerPath (refer to Chapter 11, Installing PowerPath on an HP-UX Host).
3. Boot the host in single-user mode.
4. Mount the /usr and /var file systems.
5. Create a new volume group containing the target PowerPath device. Before proceeding, make sure the device you are selecting is not used by any file system or volume group. (In the examples below, c2t0d0 identifies the PowerPath device.)
a. Initialize the boot device and create the LIF utilities and LIF AUTO file on the boot device. Enter:
pvcreate -B -f /dev/rdsk/c2t0d0
mkboot /dev/rdsk/c2t0d0
mkboot -a "hpux" /dev/rdsk/c2t0d0
c. Determine the next available minor number for the group device, and create a character special file for the group device. Enter:
ls -l /dev/*/group
mknod /dev/vgboot/group c 64 0x020000
where 0x020000 is the next available minor number.
d. Create the volume group. Enter:
vgcreate /dev/vgboot /dev/dsk/c2t0d0
If more space is necessary, extend the volume group using vgextend to include additional PowerPath devices.
6. In the new volume group, create the logical devices (root, stand, dump, swap, and usr) required to boot the host. Their sizes should be equal to or larger than the sizes of the corresponding logical devices in the current boot disk. Enter:
lvcreate -L 200 -N altstand -r n -s y -C y /dev/vgboot
lvcreate -L 300 -N altroot -r n -s y -C y /dev/vgboot
lvcreate -L 200 -N altswap -r n -s y -C y /dev/vgboot
lvcreate -L 400 -N altusr /dev/vgboot
7. Create file systems, making sure the file system types (HFS or VXFS) match their respective types in the current boot disk. Enter:
newfs -F vxfs /dev/vgboot/raltroot
newfs -F hfs /dev/vgboot/raltstand
newfs -F vxfs /dev/vgboot/raltusr
8. Create mount points, mount the filesystems, and check that their sizes match the sizes in the current boot disk. Enter:
mkdir /altroot
mkdir /altusr
mount /dev/vgboot/altroot /altroot
mount /dev/vgboot/altusr /altusr
9. Copy the root, stand, and usr file systems to their alternate locations mounted on altroot, altstand, and altusr, respectively. Enter:
cd /
find . -xdev | cpio -pdmux /altroot
cd /stand
find . -xdev | cpio -pdmux /altstand
cd /usr
find . -xdev | cpio -pdmux /altusr
10. Update the BDRA for the new volume group, to ensure that the host can boot from the alternative disk. Enter:
lvlnboot -b /dev/vgboot/altstand /dev/vgboot
lvlnboot -r /dev/vgboot/altroot /dev/vgboot
lvlnboot -s /dev/vgboot/altswap /dev/vgboot
lvlnboot -d /dev/vgboot/altswap /dev/vgboot
12. Update the /altroot/etc/fstab file to reflect the root, stand, swap, usr, and any other file systems. Also, in fstab, change the volume group name and logical device names to reflect their correct assignments:
# System /etc/fstab file. Static information about filesystems
# See fstab(4) and sam(1M) for further details on configuring devices
/dev/vgboot/altroot  /       vxfs  delaylog  0 1
/dev/vgboot/altstand /stand  hfs   defaults  0 1
/dev/vg00/lvol4      /tmp    vxfs  delaylog  0 2
/dev/vg00/lvol6      /opt    vxfs  delaylog  0 2
/dev/vgboot/altusr   /usr    vxfs  delaylog  0 2
/dev/vg00/lvol8      /var    vxfs  delaylog  0 2
/dev/dsk/c3t2d0      /cdrom  cdfs  ro,suid   0 0
15. To create redundancy in the event that the primary boot path fails, configure an alternate path using the LVM pvlink feature. For example, enter:
vgextend /dev/vgboot /dev/dsk/c3t0d0
where c3t0d0 identifies the boot power device from a different hardware path. 16. Change the alternate boot path to the new hardware path, by entering the following at the Main Menu: Enter command or menu> prompt while the host boots:
path alternate hardware_path
This example assumes the var, tmp, and opt file systems are still located on their original logical devices on the non-power device boot disk. If disk space permits, you can migrate these file systems to logical devices created on the new boot volume group.
Chapter 13: PowerPath in an MC/ServiceGuard Cluster
This chapter describes how to install and configure PowerPath in an MC/ServiceGuard cluster. For more detailed information about clustering, refer to the Symmetrix High Availability Environment Product Guide. This chapter covers the following topics:
Installing PowerPath and MC/ServiceGuard on Fresh Hosts ..13-2
Integrating PowerPath in an MC/ServiceGuard Cluster ..........13-4
Integrating MC/ServiceGuard in a PowerPath Environment ..13-5
c. Run vgchange -a n volume_group to deactivate the shared volume group.
d. Install MC/ServiceGuard following the instructions in the relevant HP documentation. Configure MC/ServiceGuard to use the shared volume group identified in step 2.b.
4. On all hosts in the cluster:
a. To prevent package volume groups from being activated at system boot time, set the AUTO_VG_ACTIVATE flag to 0 in the /etc/lvmrc file. Then include all the volume groups that are not cluster bound in the custom_vg_activation function. Volume groups that will be used by packages should not be included anywhere in the file, since they will be activated and deactivated by package control scripts. The root volume group does not need to be included in custom_vg_activation, since it is activated automatically before /etc/lvmrc is used at boot time.
b. Start cluster services using the cmrunnode command. Alternatively, enable automatic cluster startup by setting the AUTOSTART_CMCLD flag to 1 in the /etc/rc.config.d/cmcluster file. With automatic cluster startup, the host joins the cluster at boot time.
Automatic cluster startup is the preferred way to start a cluster.
Once cluster services are started, the shared volume group and its underlying PowerPath devices are under the control of MC/ServiceGuard for failure monitoring and detection and for automated shutdown and failover of critical data services.
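The /etc/lvmrc settings described in step 4.a can be sketched as follows. This is an illustrative fragment, not the complete file, and vg01 is a hypothetical volume group that is not used by any cluster package:

```shell
# Illustrative /etc/lvmrc fragment (not the complete file).
# AUTO_VG_ACTIVATE=0 disables automatic activation of all volume groups
# at boot; custom_vg_activation then activates only non-cluster groups.
AUTO_VG_ACTIVATE=0

custom_vg_activation()
{
    # vg01 is a hypothetical volume group NOT used by any cluster
    # package; package volume groups are omitted here, since package
    # control scripts activate and deactivate them.
    /sbin/vgchange -a y /dev/vg01
    return 0
}
```

Package volume groups and the root volume group are deliberately absent from the function, for the reasons given in step 4.a.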
5. On all hosts in the cluster:
a. To prevent package volume groups from being activated at system boot time, set the AUTO_VG_ACTIVATE flag to 0 in the /etc/lvmrc file. Then include all volume groups that are not cluster bound in the custom_vg_activation function. Volume groups that will be used by packages should not be included anywhere in the file, since they will be activated and deactivated by package control scripts. The root volume group does not need to be included in custom_vg_activation, since it is activated automatically before /etc/lvmrc is used at boot time.
b. Start cluster services using the cmrunnode command. Alternatively, enable automatic cluster startup by setting the AUTOSTART_CMCLD flag to 1 in the /etc/rc.config.d/cmcluster file. With automatic cluster startup, the host joins the cluster at boot time.
Automatic cluster startup is the preferred way to start a cluster.
Once cluster services are started, the shared volume group and its underlying PowerPath devices are under the control of MC/ServiceGuard for failure monitoring and detection and for automated shutdown and failover of critical data services.
Chapter 14: Removing PowerPath from an HP-UX Host
This chapter describes how to remove PowerPath from an HP-UX host. The chapter covers the following topics:
Before Removing PowerPath .........................................................14-2
Removing PowerPath......................................................................14-2
After Removing PowerPath ...........................................................14-4
Removing PowerPath
The following sections describe how to remove PowerPath with:
3. In the SD Remove Software Selection window:
a. Select EMCpower.
b. Choose Mark for Remove from the Actions menu. The word Yes appears in front of EMCpower.
c. Select EMCpower.
d. Choose Remove (Analysis) from the Actions menu.
4. In the Remove Analysis window, click OK after analysis has completed and the Status field reads Ready.
5. In the Confirmation window, click Yes and then click Yes again.
6. In the Remove window, click Done after removal has completed and the Status field reads Completed.
7. In the Note window, click OK to reboot.
The screen displays an explanation of the menus and navigational tools.
3. Select RETURN and press ENTER.
4. At the SD Remove Software Selection screen:
a. Use the down/up arrow keys and the space bar to select EMCpower.
b. Use the Tab key to select File.
c. Use the right arrow key to select Actions, and press ENTER.
d. Use the down arrow key to select Mark for Remove, and press ENTER. The word Yes appears in front of EMCpower.
e. Use the space bar to select EMCpower.
f. Use the Tab key to select File.
g. Use the right arrow key to select Actions, and press ENTER.
h. Use the down arrow key to highlight Remove (analysis), and press ENTER.
5. At the Remove Analysis screen, when analysis has completed and the status is Ready, use the Tab key to select OK, and press ENTER.
6. At the Confirmation screen, select Yes and then select Yes again.
7. At the Remove window, select Done after removal has completed and the status is Ready.
8. At the Note window, select OK to reboot.
Chapter 15: PowerPath Administration on HP-UX
This chapter discusses PowerPath issues and administrative tasks specific to HP-UX. Throughout this chapter, many procedural steps use powermt commands. For detailed descriptions of these commands, refer to the PowerPath Product Guide. This chapter covers the following topics:
PowerPath Feature Summary ........................................................15-2
Boot Device Support........................................................................15-3
Device Naming.................................................................................15-3
Reconfiguring PowerPath Devices Online...................................15-4
LVM Alternate Links .......................................................................15-5
Error Messages .................................................................................15-5
[Table 15-1: PowerPath Feature Summary on HP-UX. Columns: Feature; Supported on HP-UX? Features listed: I/O load balancing, I/O failover, IOCTL load balancing, IOCTL failover, install without reboot, upgrade without reboot, upgrade without uninstall, deinstall without reboot, boot from PowerPath device, cluster support, Fibre Channel support, powermt utility, powercf utility, enterprise consistency groups, and add PowerPath devices online. One recovered value: "Yes on SCSI and FC_AL; not supported on a fabric topology."]
Device Naming
PowerPath for HP-UX supports only native devices. A native device describes a device special file of one of the following forms:
- /dev/dsk/c#t#d# (the block device)
- /dev/rdsk/c#t#d# (the character, or raw, device)

where:
- c# is the instance number of the interface card.
- t# is the target address of the storage system logical device on the bus.
- d# is the storage system logical device at the target.
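As an illustration of this naming scheme, the following sketch splits a hypothetical native device name into its three components; c2t0d0 is an example name, not a device on any particular host:

```shell
# c2t0d0 is a hypothetical native device name: interface card instance 2,
# target address 0, logical device 0 at that target.
dev=c2t0d0
echo "$dev" | sed 's/^c\([0-9]*\)t\([0-9]*\)d\([0-9]*\)$/card=\1 target=\2 device=\3/'
# prints: card=2 target=0 device=0
```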
- Adding or removing HBAs
- Adding, removing, or changing storage system logical devices
- Changing the cabling routes between HBAs and storage system ports
- Adding or removing storage system interfaces
To reconfigure PowerPath devices:
1. Verify that the HP-UX host recognizes the connected devices. Enter:
ioscan -f
3. Remove any device files for which the underlying devices no longer exist. Enter:
rmsf -a
Error Messages
PowerPath reports errors to the /var/adm/syslog/syslog.log file. The powermt utility reports errors to standard error (stderr). Refer to the PowerPath Product Guide for a complete list of PowerPath error messages. You can use the HP-UX syslogd command to control where the messages are reported. Refer to the HP-UX syslogd (1M) manual page for more information.
PART 4
PowerPath on Solaris
This section discusses PowerPath on a Solaris host.
Chapter 16, Installing PowerPath on a Solaris Host, describes how to install and upgrade PowerPath on a Solaris host.
Chapter 17, Configuring a PowerPath Boot Device on Solaris, describes how to configure a PowerPath device as the boot device.
Chapter 18, PowerPath in a Solaris Cluster, describes how to work with Sun Cluster and VERITAS Cluster Server.
Chapter 19, Removing PowerPath from a Solaris Host, describes how to remove PowerPath from a Solaris host.
Chapter 20, EMCPOWER Devices and Solaris Applications, describes how to install and configure emcpower devices with various Solaris system applications.
Chapter 21, PowerPath Administration on Solaris, discusses PowerPath for Solaris issues and administrative tasks.
Chapter 16: Installing PowerPath on a Solaris Host
This chapter describes how to install and upgrade PowerPath on a Solaris host. It also discusses upgrading the operating system. The chapter covers the following topics:
Before You Install .............................................................................16-2
Installation Procedure .....................................................................16-4
After You Install ...............................................................................16-7
Configuration Guidelines ...............................................................16-9
Upgrading PowerPath...................................................................16-10
Installing a Patch ............................................................................16-12
Upgrading Solaris ..........................................................................16-12
File Changes Caused by PowerPath Installation/Upgrade ....16-12
Before you install PowerPath 3.0:
- Note that installing or upgrading PowerPath requires you to reboot the host. Plan to install or upgrade PowerPath when a reboot will cause minimal disruption to your site.
- Locate the PowerPath installation CD and, if you are installing PowerPath for the first time, your 24-digit registration number. The registration number is on the License Key Card delivered with the PowerPath media kit. (If you are upgrading from an earlier version of PowerPath, PowerPath will use your old key.)
- Verify that your environment meets the requirements in:
  - Chapter 3, PowerPath Configuration Requirements, in the PowerPath Product Guide. That chapter describes the host-storage system interconnection topologies that PowerPath supports.
  - The Environment and System Requirements section of the EMC PowerPath for UNIX Version 3.0 Release Notes. That section describes minimum hardware and software requirements for the host and supported storage systems. We update the release notes periodically and post them on http://powerlink.emc.com. Before you install PowerPath, check the Powerlink Web site, or contact your EMC customer support representative, for the most current information.
- Ensure that the storage system logical devices are configured for PowerPath support. Refer to the Symmetrix Open Systems Environment Product Guide, Volume I or the Installation Roadmap for FC-Series Storage Systems.
- Ensure that the CLARiiON SnapView utility admsnap and other CLARiiON host- and storage-system-based software is up to date.
- Uninstall any earlier version of PowerPath that is installed on the host. Refer to Upgrading PowerPath on page 16-10.
- Uninstall any version of Navisphere Application Transparent Failover (ATF) that is installed on the host. Reconfigure applications and system services that use ATF pseudo device names to use standard Solaris native named devices (cXtXdXsX) instead.
- Set up the SCSI target and LUN or Fibre Channel port and LUN addresses. Refer to the Symmetrix Open Systems Environment Product Guide, Volume I or Installation Roadmap for FC-Series Storage Systems.
- In the file /etc/system, make sure the timeout value is set to 60 seconds (which minimizes path failover time without compromising online storage system microcode upgrades). The entry must be a hexadecimal number: set sd:sd_io_time = 0x3C
- Format, partition, and label the storage system devices (refer to the Symmetrix Open Systems Environment Product Guide, Volume I or Installation Roadmap for FC-Series Storage Systems). Do not, however, use or mount these devices before you install PowerPath.
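The /etc/system timeout check above can be sketched as follows; system.sample stands in for /etc/system here, so nothing on a live host is modified:

```shell
# Append the sd timeout entry if it is not already present, then show it.
# system.sample is a stand-in copy; on a real host the file is /etc/system.
CONF=system.sample
touch "$CONF"
grep -q '^set sd:sd_io_time' "$CONF" || echo 'set sd:sd_io_time = 0x3C' >> "$CONF"
grep 'sd_io_time' "$CONF"
# prints: set sd:sd_io_time = 0x3C
```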
Installation Procedure
You can install PowerPath directly from the PowerPath installation CD-ROM using pkgadd. The following sections describe how to:
- Mount the PowerPath installation CD-ROM
- Install the software
To mount the PowerPath installation CD-ROM:
1. Log in as root.
2. Insert the CD-ROM into the CD-ROM drive. If the CD mounts automatically, continue with Installing the Software on page 16-5. If the CD does not mount automatically, you must mount it manually. Continue with step 3.
3. Mount the CD on your file system. For example, to mount the CD on /cdrom/cdrom0, enter:
mount -F hsfs -r /dev/dsk/cxtydzs0 /cdrom/cdrom0
where x, y, and z are values specific to the host's CD-ROM drive. For example:
mount -F hsfs -r /dev/dsk/c0t2d0s0 /cdrom/cdrom0
To install PowerPath:
1. If you do not have a graphics terminal, run script filename to record pkgadd output in the specified file. After pkgadd completes, use CTRL-D to stop recording the output.
2. Change to the SOLARIS directory. For example, enter:
cd /cdrom/cdrom0/SOLARIS
(Note the required space and period after the -d parameter.) You see the following prompt:
The following packages are available:
1 EMCpower EMC PowerPath (all) 3.0.0_b080

Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]:
You see the following prompt:
Do you want to continue with the installation of <EMCpower> [y,n,?]
6. Enter y and press ENTER. The screen displays information about the installation, ending with the following prompt:
Installation of <EMCpower> was successful.

The following packages are available:
1 EMCpower EMC PowerPath (all) 3.0.0_b080

Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]:
PowerPath is now installed on the host. You must now register PowerPath, reboot the host, and perform other administrative tasks. Refer to After You Install on page 16-7 for postinstallation instructions.
a. Enter:
emcpreg -install
c. Enter the 24-character alphanumeric sequence found on the License Key Card delivered with the PowerPath media kit and press ENTER. If you enter a valid registration key, you see the following output:
Key successfully installed.
Registration key:

If you enter an invalid registration key, the screen displays an error message and prompts you to enter a valid registration key. See the PowerPath Product Guide for a list of possible error messages returned by the emcpreg utility.
2. Remove the CD-ROM: a. If a volume manager is not running, unmount the CD-ROM. For example, enter:
eject
b. Eject the CD-ROM and remove it from the CD-ROM drive. 3. Reboot the host. Enter:
reboot -- -rv
5. If the output of powermt display dev=all indicates that some storage system logical devices are not configured as PowerPath devices: a. Configure any missing logical devices. Enter:
powercf -i
powermt config
b. Rerun powermt display dev=all to confirm that these logical devices are configured as emcpower devices.
If you plan to enable R1/R2 boot disk failover, see R1/R2 Boot Failover Support on page 21-3.
6. If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center.
7. Check the EMC support Web site or the PowerPath anonymous FTP site for any patches to PowerPath 3.0, and install any required patches. Refer to Appendix A, PowerPath Patches, for information on identifying, obtaining, and installing patches.
8. Reconfigure applications to use emcpower devices. If you plan to use emcpower devices with a volume manager, file system application, or database manager, you must reconfigure the application to use emcpower devices. Refer to Chapter 20, EMCPOWER Devices and Solaris Applications.
Configuration Guidelines
If your PowerPath configuration includes a CLARiiON storage system, use the appropriate management commands to enable PowerPath failover on the storage system. PowerPath cannot see any devices, or paths to devices, on a storage system that is not properly configured for PowerPath.
Upgrading PowerPath
We update the PowerPath Release Notes periodically and post them on http://powerlink.emc.com. Before you upgrade PowerPath, check the Powerlink Web site for the most current information.
Upgrading to PowerPath 3.0 from 2.x converts the existing 12-character license key in the /etc/powerpath_registration file to a 24-character key in the /etc/emcp_registration file. You do not need to reenter license information.
To upgrade to PowerPath 3.0 on a Solaris host:
1. Uninstall the earlier version of PowerPath:
a. Close I/O from any application, such as a database, that directly opened a PowerPath device.
b. Unmount all file systems based on PowerPath devices.
c. Deactivate all volume groups based on PowerPath devices.
d. If you have any custom configuration settings, save them with powermt save.
e. Remove the earlier version of PowerPath, following the procedure in the PowerPath Product Guide for that version.
2. Install PowerPath 3.0. Refer to Installation Procedure on page 16-4.
You need not reenter your registration key.
3. If you saved custom configuration settings (see step 1.d.): a. Load the saved settings. Enter:
powermt load file=/etc/powermt.custom.pre-pp3.0.0.
b. Inspect your configuration to verify all custom settings are correct. Enter:
powermt display dev=all
If the display reveals unexpected adapter entries, you can remove them using powermt remove. If you are unsure whether to remove them, contact EMC Customer Support.
4. Resume I/O to PowerPath devices:
a. Start all volume groups based on PowerPath devices.
b. Mount all file systems based on PowerPath devices.
c. Bring back online any applications you shut down or suspended (see step 1.a.).
2. If the volboot file contains no emcpower devices, and if the VxVM rootdg contains no c#t#d# devices, add one emcpower device to volboot. Enter:
vxdctl add disk emcpowerNc
where emcpowerNc is a PowerPath device configured in rootdg. 3. Remove the earlier version of PowerPath. 4. Install PowerPath 3.0. (You need not reenter the license information.)
If the VxVM rootdg contains only PowerPath devices, VxVM will not start properly until the PowerPath installation is complete.
5. Configure the VERITAS Volume Manager disk groups for PowerPath. Enter:
/etc/powervxvm setup
/etc/powervxvm online
PowerPath supports Solaris native names and emcpower names; it does not support safe names. If you use emcpower names, PowerPath preserves these names when you upgrade to version 3.0. If you use safe names, you must reconfigure your applications to use native names or emcpower names after upgrading.
Installing a Patch
Every patch release is accompanied by a Readme file that describes how to install the patch. Appendix A, PowerPath Patches, describes how to obtain patches and their accompanying ReadMe files.
Upgrading Solaris
You must uninstall PowerPath before you upgrade Solaris. After you complete the operating system upgrade, reinstall PowerPath.
- /etc/emcpreg, /etc/powermt, /etc/powercf, /etc/powerprotect, /etc/cgmt, /etc/emcpup, and /etc/cvtconf: PowerPath's management utilities.
- /kernel/drv/emcp: the PowerPath driver module. The 64-bit version is created with the same name in the /kernel/drv/sparcv9 subdirectory.
- /kernel/drv/emcp.conf: the emcp driver's configuration file. It is created or edited by the /etc/powercf utility during install or when run from the command line. The file is renamed /kernel/drv/emcp.conf.saved and left behind by pkgrm. This saved file is moved back to /kernel/drv/emcp.conf by a subsequent execution of pkgadd.
- PowerPath miscellaneous kernel modules. The 64-bit versions are created with the same names in the /kernel/misc/sparcv9 subdirectory.
- /etc/powermt.custom: PowerPath's current settings. It is created during install or upgrade, or when a user runs powermt save. The file is renamed /etc/powermt.custom.saved and left behind by pkgrm. This saved file is moved back to /etc/powermt.custom by a subsequent execution of pkgadd.
- /dev/emcp: a symbolic link to /devices/pseudo/emcp@0:0, the 0th instance of the emcp driver, which serves as the driver's administration device. This link must exist, and the linked node must be functional, for the powermt utility to work.
- /etc/emcpower_mode-dir: a text file containing the last booted kernel architecture (32- or 64-bit) and the location of the PowerPath package repository, so that PowerPath boot scripts can find the correct versions of some of the utilities when the kernel architecture changes between boots. It is created at install and overwritten at each boot.
- /etc/emcp_registration: the registration key file, created by the user before PowerPath installation. It is renamed /etc/emcp_registration.saved and left behind by pkgrm. This saved file is moved back to /etc/emcp_registration by a subsequent execution of pkgadd.
- /usr/lib/libemcpcg.so, /usr/lib/libemcpmp.so, and /usr/lib/libemcppn.so: dynamic libraries used by various PowerPath utilities. The 64-bit versions are created with the same names in the /usr/lib/sparcv9 subdirectory. A symbolic link named /usr/lib/libcg.so, pointing to /usr/lib/libemcpcg.so or /usr/lib/sparcv9/libemcpcg.so, is created for backward compatibility with the symmapi library.
- /etc/powermt.custom.datestamp.load_failed: a file that may be created at install time if a saved powermt.custom file (that is, /etc/powermt.custom.saved restored by pkgadd) fails to load for any reason.
- /etc/rcS.d/S24powerstartup and /etc/rcS.d/S63powershift: these files run at boot time to complete the boot-time configuration.
Solaris File Modified by Installation
Installing PowerPath on Solaris modifies the /etc/system configuration file as follows:
- Adds forceload statements for the driver and miscellaneous kernel modules.
- Adds set statements for kernel stksize variables to increase default kernel stack sizes and avoid stack-overflow panics.
Installation also modifies /etc/profile and /etc/.login. These root shell profiles are modified to include the path install_directory/EMCpower/bin in the root user's PATH environment variable.
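The profile change can be pictured with this fragment; /opt/EMCpower/bin is an assumed install location used only for illustration, since the actual path depends on install_directory:

```shell
# Hypothetical lines of the kind added to /etc/profile
# (/etc/.login is csh syntax; this is the sh form).
# /opt/EMCpower/bin is an assumed install_directory, for illustration only.
PATH=$PATH:/opt/EMCpower/bin
export PATH
```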
Chapter 17: Configuring a PowerPath Boot Device on Solaris
This chapter describes how to configure a PowerPath device as the boot device for a Solaris host. The chapter covers the following topic:
Refer to the Solaris OpenBoot documentation for complete instructions on modifying and managing the boot process on a specific Solaris host.
2. Enter:
eeprom | grep nvramrc > /etc/nvramrc.orig
3. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host). 4. Locate the native device from which you are booting and correlate this device to an emcpower device. Enter:
/etc/powermt display dev=all
Scroll through the output until you locate the native device used as the boot device; for example, c1t6d0s0. In this example, this native device corresponds to emcpower6a. 5. Identify the device node that corresponds to the emcpower device. Enter:
ls -l /dev/dsk/emcpower6a
Looking at the output, you can see that /pseudo/emcp@6:a,blk corresponds to emcpower6a. You will use this value in the next step. 6. Using a text editor such as vi, make the following changes to the /etc/system file: Add this line above the forceload: drv/emcp statements:
forceload: drv/sd
7. Using a text editor such as vi, edit the /etc/vfstab file, replacing each native partition name (c#t#d#s#) for the boot device with an emcpower partition name. In this example, you would replace c1t6d0s0 with emcpower6a. You must change both the /dev/dsk and /dev/rdsk entries.
8. Restart the host. Enter:
reboot -- -r
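The vfstab substitution in step 7 can be sketched with sed on a copy of the file. The device names (c1t6d0s0, emcpower6a) follow this chapter's example, and vfstab.sample stands in for /etc/vfstab:

```shell
# Build a one-line sample vfstab, then rewrite both the /dev/dsk and
# /dev/rdsk entries from the native name to the emcpower name.
printf '/dev/dsk/c1t6d0s0 /dev/rdsk/c1t6d0s0 / ufs 1 no -\n' > vfstab.sample
sed -e 's|/dev/dsk/c1t6d0s0|/dev/dsk/emcpower6a|' \
    -e 's|/dev/rdsk/c1t6d0s0|/dev/rdsk/emcpower6a|' \
    vfstab.sample > vfstab.new
cat vfstab.new
# prints: /dev/dsk/emcpower6a /dev/rdsk/emcpower6a / ufs 1 no -
```

On a real host, review the rewritten file carefully before copying it over /etc/vfstab; a mistake here can make the host unbootable (see Recovery Procedure later in this chapter).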
Non-Storage-System Devices
This section describes how to move a boot device from a non-storage-system disk to a PowerPath device.
You may also need to configure Solaris Open Boot according to the documentation for your HBA.
To move a boot device to a PowerPath device:
1. If PowerPath is not already installed, install it (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. On the target emcpower device, create /, /usr, /export/home, /var, /opt, swap, and other required partitions, with sizes that match the sizes of the current boot disk's partitions.
3. Mount each emcpower device partition. In the following example, emcpower0 identifies the emcpower device:
mount /dev/dsk/emcpower0a /mnt
mkdir /mnt/usr
mount /dev/dsk/emcpower0g /mnt/usr
4. For each emcpower device partition, copy the contents of the current boot disk's corresponding partition. The following example copies the / and /usr partitions:
cd /
find . -xdev | cpio -pudm /mnt
cd /usr
find . -xdev | cpio -pudm /mnt/usr
5. Add back any mount points and special directories, such as /proc, that are excluded by the -xdev argument of find. Enter:
mkdir /mnt/proc
6. Enter:
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/emcpower0a
7. Edit /mnt/etc/vfstab to use /dev/dsk/emcpower0a as the / partition, /dev/dsk/emcpower0g as the /usr partition, /dev/dsk/emcpower0b as the swap partition, and so on. For file system partitions, be sure to change the /dev/dsk/c#t#d#s# and /dev/rdsk/c#t#d#s# device names to /dev/dsk/emcpower#p and /dev/rdsk/emcpower#p, respectively (where # is the device number and p is the partition letter).
9. For the emcpower device that will be the new boot device, record the output of the following command:
/etc/powermt display dev=0
10. Find the boot device information needed to reconfigure the boot disk eeprom setting. Enter:
ls -l /dev/dsk/c2t0d0s2
11. Halt the host. 12. At the ok prompt, use a path line from the powermt display output in step 9, to set up an alias for an sd device that corresponds to emcpower0. For example, enter:
nvalias emcpower0 /iommu/sbus/QLGC,isp@0,10000/sd@0,0
13. While still at the ok prompt, change the boot device to the new alias. Enter:
setenv boot-device emcpower0
15. At the ok prompt, enter:
printenv auto-boot?
This displays the value of the auto-boot? variable. 16. If the auto-boot? variable is not set to true, boot the host.
Recovery Procedure
If, after configuring the PowerPath device as the boot device, you cannot boot the host, follow the procedure in this section to correct the problem. You may have made a typing error when editing the /etc/system and /etc/vfstab files.
To boot the host:
1. Insert the Solaris Operating System CD-ROM into the host's CD-ROM drive.
2. At the ok prompt, enter:
boot cdrom -s
3. Mount the storage system boot device that is experiencing the problem. For example, enter:
mount /dev/dsk/c1t6d0s0 /a
4. Enter:
TERM=sun-cmd
export TERM
5. Check the /etc/system and /etc/vfstab files against the changes you made to these files when you set up multipathing to the storage system boot device (refer to Configuring a PowerPath Device as the Boot Device on page 17-2). Use a text editor such as vi to correct any problems you find.
6. Shut down the host. Enter:
shutdown -y -g5 -i0
8. Remove the Solaris Operating System CD-ROM from the host's CD-ROM drive.
9. Reboot the host. Enter:
boot -r
Chapter 18: PowerPath in a Solaris Cluster
This chapter describes how to install and configure PowerPath in Solaris cluster environments. For more general information on clustering, refer to the Symmetrix High Availability Environment Product Guide or the Installation Roadmap for FC-Series Storage Systems. The chapter covers the following topics:
PowerPath in a Sun Cluster 2.2......................................................18-2
PowerPath in a Sun Cluster 3.0......................................................18-4
PowerPath in a VERITAS Cluster Server Cluster........................18-6
- Installing PowerPath and Sun Cluster on fresh systems; that is, where neither PowerPath nor the cluster software is installed on any of the hosts to be included in the cluster. Refer to Installing PowerPath and Sun Cluster 2.2 on Fresh Systems on page 18-2.
- Integrating PowerPath into an existing Sun Cluster. Refer to Integrating/Upgrading PowerPath into an Existing Sun Cluster 2.2 on page 18-3.
6. Initialize the root disk group on all nodes.
7. Initialize the PowerPath devices on all nodes.
8. Start cluster services on the master node.
9. Designate/create shared disk groups on the master node.
10. Create logical volumes from the designated shared disks.
11. Create a logical host.
12. Start cluster services on the other (non-master) nodes.
- Installing PowerPath and Sun Cluster on fresh systems; that is, where neither PowerPath nor the cluster software is installed on any of the hosts to be included in the cluster. Refer to Installing PowerPath and Sun Cluster 3.0 on Fresh Systems on page 18-4.
- Integrating PowerPath into an existing Sun Cluster. Refer to Integrating/Upgrading PowerPath in an Existing Sun Cluster 3.0 on page 18-5.
5. Initialize the root disk group on all nodes.
6. Initialize the PowerPath devices on all nodes.
7. Start cluster services on the master node.
8. Designate/create shared disk groups on the master node.
9. Create logical volumes from the designated shared disks.
10. Register the disk group.
11. Start cluster services on the other (non-master) nodes.
2. Install or upgrade PowerPath on the node (refer to Chapter 16, Installing PowerPath on a Solaris Host).
3. Start cluster services on the node. Enter:
reboot
Wait for the node to be fully reintegrated into the cluster before proceeding to the next node.
- Installing PowerPath and VCS on new hosts; that is, where neither PowerPath nor VCS is installed on any of the hosts that will be in the cluster. Refer to Installing PowerPath and VCS on New Hosts on page 18-6.
- Integrating PowerPath into an existing VCS cluster. Refer to Integrating/Upgrading PowerPath in an Existing VCS Cluster on page 18-7.
7. Configure the service group by adding the resources that you defined in step 5 to the file /etc/VRTSvcs/conf/config/main.cf. The disk or logical device resources should use PowerPath c#t#d# devices. In addition, if you are using service group heartbeat disks, they should use PowerPath c#t#d# devices.
8. Restart all VCS cluster hosts to read the new configuration. The first host that you restart reads the new configuration file. The remaining hosts rebuild their local configuration file from the host that was started first.
9. Verify that the service group is up and running, and use either the VCS GUI or the hagrp command to verify that it can successfully fail over to all hosts in the cluster.
10. Add other service groups as needed.
19
Removing PowerPath from a Solaris Host
This chapter describes how to remove PowerPath from a Solaris host. The chapter covers the following topics:
- Before Removing PowerPath (page 19-2)
- Removing PowerPath (page 19-2)
- Removing PowerPath from a Storage System Boot Device (page 19-7)
Removing PowerPath
If you edited the file /etc/vfstab to mount emcpower devices, you must complete the steps in Rebuilding the /etc/vfstab.no_EMCpower File on page 19-5 before removing PowerPath.
To remove PowerPath from a Solaris host:
1. If you have a boot, dump, or file system device, discontinue use of PowerPath devices system-wide as follows:
a. Edit the /etc/system file to remove all references to PowerPath devices. Remove the following lines:
forceload: drv/emcp
forceload: misc/emcpcg
forceload: misc/emcpmp
forceload: misc/emcppn
forceload: misc/emcpioc
rootdev:/pseudo/emcpower@x:a,blk

The rootdev: line appears only if you booted off an emcpower device.
and continue with step 2 below.
e. If the files differ, follow the steps in Rebuilding the /etc/vfstab.no_EMCpower File on page 19-5 to restore the /etc/vfstab.no_EMCpower file before continuing.
2. If you have a database partition, discontinue use of PowerPath devices by following these steps:
a. Stop the database manager.
b. Unmount the PowerPath devices.
c. Edit the appropriate database configuration files so they no longer refer to emcpower devices. The copies of configuration files that you saved during installation may be useful.
3. If you are removing PowerPath from the host entirely (that is, you are not reinstalling PowerPath after completing this procedure), disconnect all duplicate physical connections between the host and the storage system except for one cable, leaving a single path.
4. Remove PowerPath. Enter:
/usr/sbin/pkgrm EMCpower
6. Enter y. The screen displays information about the removal process, ending with:
Removal of <EMCpower> was successful.
7. The removal process saves the following files with the .saved extension:
/kernel/drv/emcp.conf
/etc/powermt.custom
/etc/emcp_registration
If you are removing PowerPath from the host entirely, use the rm command to remove them.
8. Determine whether the PowerPath driver was unloaded from the host. Enter:
modinfo | egrep "emcp"
9. If modinfo returns no output, the driver is not loaded on the host, and you do not need to reboot the host. Restart any database managers that you stopped in step 2. If modinfo generates output, reboot the host to unload the driver. Enter:
reboot -- -rv
10. If EMC Control Center is installed, run the command that refreshes the EMC Control Center database of device information. Refer to the documentation for your version of EMC Control Center.
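The decision in step 9 can be sketched as a small shell check. The sample text below is a made-up stand-in for modinfo output, not real output; on the host you would pipe the real modinfo output instead.

```shell
# Sketch of the step 9 decision: reboot only if the PowerPath
# driver (emcp*) still appears in modinfo output.
needs_reboot() {
    echo "$1" | egrep "emcp" > /dev/null
}

# Hypothetical modinfo line, for illustration only:
sample="230 7a600000 45120 268 1 emcp (PowerPath Base Driver)"
if needs_reboot "$sample"; then
    echo "driver still loaded: run 'reboot -- -rv'"
else
    echo "driver unloaded: no reboot needed"
fi
```

On the host itself you would test the output of `modinfo | egrep "emcp"` directly, as the procedure describes.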
This command returns the disk component of the boot-device environment variable; for example:
boot-device=power8
b. Enter:
eeprom nvramrc
c. Scan the output for the c#t#d#s# path corresponding to the disk component of the boot-device environment variable you identified in step 1.a. In this example, power8 is the boot-device alias and /sbus@3,0/QLGC,isp@0,10000/sd@0,0 is the boot device.
Perform steps 2 through 5 in a separate window.
2. When you have found a c#t#d#s# device for each emcpower device in /etc/vfstab, copy /etc/vfstab to /etc/vfstab.no_EMCpower. Enter:
cp /etc/vfstab /etc/vfstab.no_EMCpower
4. Use the editor to substitute the c#t#d#s# device for the emcpower device. Be sure to change both the block and raw device entries and the swap entries if necessary.
5. Save the file and exit the editor.
6. Save the vfstab file and install the rebuilt vfstab file. Enter:
mv /etc/vfstab /etc/vfstab.EMCpower mv /etc/vfstab.no_EMCpower /etc/vfstab
7. Reboot the host.
8. Continue with the deinstallation process described in Removing PowerPath on page 19-2.
b. Determine the boot path associated with this native device. For example, enter:
ls -al /dev/dsk/c1t6d0s0
c. Compare the boot path from step 1b with the original boot path. Enter:
cat /etc/nvramrc.orig
2. If the boot path listed in step 1b differs from that in the nvramrc.orig file, update the boot path to reflect the value listed in step 1b. Enter:
eeprom "nvramrc=devalias symmboot /pci@1f,4000/scsi@4/disk@6,0"
3. Verify that the boot path was changed to the new value. Enter:
eeprom
4. Restore the versions of /etc/system and /etc/vfstab that do not contain references to PowerPath. Enter:
cp /etc/system.no_EMCpower /etc/system cp /etc/vfstab.no_EMCpower /etc/vfstab
5. Remove PowerPath (refer to Removing PowerPath on page 19-2). 6. If you did not reboot the host after removing PowerPath, do so now. Enter:
reboot -- -r
If the host fails to boot, refer to Recovery Procedure on page 17-6 for suggested actions.
20
emcpower Devices and Solaris Applications
This chapter describes how to install and configure PowerPath emcpower devices with various Solaris system applications.
The procedures in this chapter apply only to applications that use emcpower devices. If applications use native devices, these procedures are unnecessary; simply configure these applications as you would for any Solaris disk device. If the application does not directly access Solaris sd devices, you need not configure it for PowerPath. For example, if the database accesses devices that are managed by a logical volume manager, the logical volume manager is configured for PowerPath, not the database.
- When to Use emcpower Devices (page 20-2)
- Preparation (page 20-2)
- Solstice DiskSuite 4.2 (page 20-8)
- VERITAS Volume Manager (VxVM) (page 20-13)
- UNIX File System (UFS) (page 20-14)
- UFS with Solstice DiskSuite's UFS Logging (page 20-16)
- VERITAS File System (VxFS) (page 20-20)
- Network File System (NFS) (page 20-21)
- Oracle (page 20-23)
- Sybase (page 20-30)
- Systems that use PowerPath devices as root, swap, or user devices. Chapter 17, Configuring a PowerPath Boot Device on Solaris, describes how to install and configure PowerPath on such systems.
- Systems that use dynamic reconfiguration (DR) without alternative pathing support from either Sun Alternative Pathing (AP) or VERITAS Dynamic Multipathing (DMP). This chapter describes how to install and configure applications on such systems.
You can manage a CLARiiON storage system with either PowerPath or DMP, but not with both.
Preparation
Collect System Information
To collect system information:
1. Make a list of all storage system devices used by your applications. You can use the worksheet in Table 20-1 on page 20-5 and Table 20-2 on page 20-6. If your applications include a database package, you may need help from your database administrator in compiling this list. Look at the configuration files for your database system and any other applications, the /etc/vfstab file, all devices under VERITAS volume control, and the output of the Solstice DiskSuite metastat command.
2. Record the device nodes; for example, /dev/dsk/c1t0d0s3. Most applications access device nodes via links found in
/dev/[r]dsk. In /dev/[r]dsk, use ls -l to list the device nodes.
3. Record the file attributes (owner, group, permissions) of all device nodes found in step 2. Do not list the attributes of the links. Change to the actual device node directory; for example, /devices/iommu@f,e0000000/... List and record the attributes of the actual device nodes in your worksheet.
4. Save a copy of every configuration or setup file that refers to disk devices. These files will help you restore your configuration if you encounter problems.
Refer to the appropriate sections for instructions on installing and configuring PowerPath and specified applications:
For this application                               Refer to
Solstice DiskSuite Version 4.2                     Page 20-8
VERITAS Volume Manager (VxVM)                      EMC PowerPath for UNIX Version 3.0 Release Notes
UNIX File System (UFS)                             Page 20-14
UFS with Solstice DiskSuite's UFS Logging          Page 20-16
VERITAS File System (VxFS)                         Page 20-20
Network File System (NFS)                          Page 20-21
Oracle                                             Page 20-23
Sybase                                             Page 20-30
Find Corresponding Devices
Under Solaris, certain SysV identifiers (c#t#d#s#) correspond to certain sd devices. When you collected system information, you determined the SysV identifiers used by your applications. After installing PowerPath, the SysV identifiers are associated with certain emcpower devices; therefore, you can determine which emcpower device corresponds to which c#t#d# device and set the owner, group, and permissions for the emcpower device to match the sd device. To determine which SysV identifiers match which emcpower devices, follow these steps: 1. Enter:
powermt display dev=0a
2. Examine the I/O Path column for the relevant SysV identifiers.
3. When you find an identifier for emcpower0a, record it on the worksheet on the appropriate line.
Slice information is not displayed. Slices in SysV identifiers are designated s0, s1, s2, and so on. They correspond exactly to emcpower slices designated a, b, c, and so on. Thus, if device c0t0d0 corresponds to device emcpower0, slice c0t0d0s2 corresponds to slice emcpower0c.
4. Continue displaying each emcpower device in turn (dev=1a, dev=2a, and so on), until you have identified an emcpower device for each SysV identifier.
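The slice-to-partition correspondence used in these steps (slices s0-s7 map to partition letters a-h) can be expressed as a small helper. The function name is ours, not part of PowerPath:

```shell
# slice_to_letter: map a SysV slice suffix (s0..s7) to the matching
# emcpower partition letter (a..h), per the correspondence above.
slice_to_letter() {
    case "$1" in
        *s0) echo a ;;  *s1) echo b ;;  *s2) echo c ;;  *s3) echo d ;;
        *s4) echo e ;;  *s5) echo f ;;  *s6) echo g ;;  *s7) echo h ;;
        *)   echo "unrecognized slice: $1" >&2; return 1 ;;
    esac
}

slice_to_letter c0t0d0s2   # prints: c
```

So if device c0t0d0 corresponds to emcpower0, the helper confirms that slice c0t0d0s2 corresponds to emcpower0c.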
To update the emcpower attributes:
1. Modify the owners, groups, and permissions of the emcpower devices so they match the owners, groups, and permissions of the original sd devices. Use the information from your worksheet and the standard UNIX commands chown, chgrp, and chmod. Make sure you modify the actual device nodes, not the link nodes.
2. Test your changes by using your applications. If you have problems during testing, check your worksheet information and the changes you made to the emcpower node attributes.
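Step 1 can be sketched as a small function, assuming a host with a GNU-style stat command (older Solaris releases may lack stat, in which case you would take the values from your worksheet instead). The paths shown are placeholders, not real device nodes:

```shell
# copy_attrs: copy owner, group, and permissions from an original
# sd device node ($1) to the matching emcpower node ($2).
# Both arguments are placeholder paths; substitute the actual
# /devices entries recorded in your worksheet.
copy_attrs() {
    chown "$(stat -c %u:%g "$1")" "$2"
    chmod "$(stat -c %a "$1")" "$2"
}

# Example invocation with hypothetical device nodes:
# copy_attrs /devices/iommu@f,e0000000/sd@0,0:a /devices/pseudo/emcp@0:a,blk
```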
A sample worksheet (in two parts) is shown in Table 20-1 and Table 20-2.
Table 20-1  PowerPath Installation Worksheet (Solaris Host) (part 1 of 2)
[Worksheet columns: Solaris link c#t#d#s# | which points to actual device node | which is configured in | and now refers to]

Table 20-2  PowerPath Installation Worksheet (Solaris Host) (part 2 of 2)
[Worksheet columns: Solaris link | corresponds to PowerPath emcpowern link | which points at this device file that will inherit the attributes | which is owned by | with group | and has permissions]
- New Installation of Solstice DiskSuite and PowerPath on page 20-8 describes how to install and configure PowerPath and Solstice DiskSuite when neither package is installed on the host.
- Reconfiguring Solstice DiskSuite for PowerPath on page 20-10 describes how to reconfigure Solstice DiskSuite for PowerPath.
- Using Metadevices with UFS on page 20-13 describes how to use metadevices with UFS. You must first configure Solstice DiskSuite for PowerPath.
This section assumes you have superuser privileges and can access the Solaris AnswerBook online documentation.
CAUTION
After installing PowerPath and linking emcpower device partitions to emcpower device slices, you cannot use the metatool GUI for DiskSuite.
To install and configure PowerPath and Solstice DiskSuite when neither package is installed on the host:
1. Install the Solstice DiskSuite package. Install the AnswerBook package but not the DiskSuite Tool. Do not reboot yet.
2. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
3. Determine which PowerPath emcpower devices you will use to create DiskSuite metadevices. Refer to the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpower#p device links.
4. Run format to partition the PowerPath emcpower devices. The emcpower devices appear as emcpower#, where # is the PowerPath device instance number. Use the Partition submenu of the format utility, just as you would for an sd device link of the form c#t#d#. To find out which c#t#d# links correspond to a particular PowerPath device, run /etc/powermt display dev=emcpower#, and read the Path Link column. Reserve at least three partitions of three cylinders each for use as DiskSuite replica database locations. You do not need to partition any sd (c#t#d#) devices.
5. Link emcpower device partitions a-h to emcpower device slices s0-s7 using the following script:
#!/bin/sh
cd /dev/dsk
for device in `ls emcpower*a | /usr/bin/sed -e 's/a//'`
do
        echo "making links for /dev/dsk/${device}"
        ln ${device}a ${device}s0
        ln ${device}b ${device}s1
        ln ${device}c ${device}s2
        ln ${device}d ${device}s3
        ln ${device}e ${device}s4
        ln ${device}f ${device}s5
        ln ${device}g ${device}s6
        ln ${device}h ${device}s7
done
cd /dev/rdsk
for device in `ls emcpower*a | /usr/bin/sed -e 's/a//'`
do
        echo "making links for /dev/rdsk/${device}"
        ln ${device}a ${device}s0
        ln ${device}b ${device}s1
        ln ${device}c ${device}s2
        ln ${device}d ${device}s3
        ln ${device}e ${device}s4
        ln ${device}f ${device}s5
        ln ${device}g ${device}s6
        ln ${device}h ${device}s7
done
Creating these links allows you to use the DiskSuite metaset command with emcpower devices.
6. Set up replica databases on emcpower#p partitions, where # is the PowerPath device instance number and p is a letter denoting the partition, or slice, of the device you want to use as a replica. Refer to the relevant Solstice DiskSuite documentation. Remember that PowerPath partitions a-h correspond to sd slices 0-7.
7. Follow the instructions in the relevant Solstice DiskSuite documentation to build the types of metadevices you need, but use /dev/[r]dsk/emcpower#p device link names wherever the instructions list /dev/[r]dsk/c#t#d#s# device link names.
8. Run metainit to create the metadevices defined in the /etc/opt/SUNWmd/md.tab file.
After you create metadevices on emcpower devices, you can use the metadevices as raw character devices or as block devices with the UNIX file system (UFS). Refer to Using Metadevices with UFS on page 20-13 for more information on using metadevices as block devices with UFS.
To reconfigure Solstice DiskSuite for PowerPath:
1. Back up all storage-system-resident host data.
2. Back up the current DiskSuite configuration by making a copy of /etc/opt/SUNWmd/md.tab and recording the output of the metastat and metadb -i commands.
3. Stop all applications running on DiskSuite metadevices.
4. Unmount all DiskSuite file systems.
5. Make sure all sd device links used by DiskSuite are entered in md.tab and come up properly after a reboot.
6. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
7. Link emcpower device partitions a-h to emcpower device slices s0-s7, by using the following script:
#!/bin/sh
cd /dev/dsk
for device in `ls emcpower*a | /usr/bin/sed -e 's/a//'`
do
        echo "making links for /dev/dsk/${device}"
        ln ${device}a ${device}s0
        ln ${device}b ${device}s1
        ln ${device}c ${device}s2
        ln ${device}d ${device}s3
        ln ${device}e ${device}s4
        ln ${device}f ${device}s5
        ln ${device}g ${device}s6
        ln ${device}h ${device}s7
done
cd /dev/rdsk
for device in `ls emcpower*a | /usr/bin/sed -e 's/a//'`
do
        echo "making links for /dev/rdsk/${device}"
        ln ${device}a ${device}s0
        ln ${device}b ${device}s1
        ln ${device}c ${device}s2
        ln ${device}d ${device}s3
        ln ${device}e ${device}s4
        ln ${device}f ${device}s5
        ln ${device}g ${device}s6
        ln ${device}h ${device}s7
done
Creating these links allows you to use the DiskSuite metaset command with emcpower devices.
8. Check the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpower#p device links. Run /etc/powermt display dev=all to display the emcpower#p device links.
9. Delete each replica database configured with a /dev/[r]dsk/c#t#d#s# device. For example, enter:
metadb -d -f device
10. Replace each /dev/[r]dsk/c#t#d#s# device with the corresponding /dev/[r]dsk/emcpower#p device. For example, enter:
metadb -a -f -c # emcpowerdevice
where # is the number of metadatabase replicas. Refer to the relevant Solstice DiskSuite documentation for information on metadatabase replicas. You can run metadb -i to list the c#t#d# devices and find the number of metadatabase replicas needed for each device.
11. Edit the /etc/opt/SUNWmd/md.tab file by replacing each c#t#d#s# device link name, except internal boot devices, with the corresponding emcpower#p device link name.
CAUTION
Do not replace any c#t#d# devices in the internal boot device partitions. If you do, the host will not boot. If you have a problem in this step, reverse the action in step 9, reinstall the original md.tab into /etc/opt/SUNWmd, uninstall PowerPath, and reboot.
12. Run metaclear to delete all logical devices defined in md.tab except the internal boot device logical devices.
CAUTION
To avoid a system crash, do not delete the c#t#d# devices in the internal boot device partitions.
13. Run metainit to recreate all DiskSuite metadevices replaced by emcpower devices in md.tab, except internal boot device logical devices.
14. Restart applications. If you have any problems with DiskSuite metadevices after this step, restore the original md.tab file, reverse the action in step 9, and repeat steps 12-13. Mount any file systems that were unmounted in step 4, and restart applications.
After you reconfigure Solstice DiskSuite for PowerPath, you can use metadevices as raw character devices or as block devices with the UNIX file system (UFS). Refer to Using Metadevices with UFS on page 20-13 for more information on using metadevices as block devices with UFS.
To use a metadevice as a block device with UFS: 1. Create a UFS on the metadevice (for example, d2). Enter:
newfs /dev/md/rdsk/d2
3. Edit the /etc/vfstab file to include an entry for the new file system of the type ufs at the mount point you created. For example, enter:
/dev/md/dsk/d2 /dev/md/rdsk/d2 /d2 ufs 1 yes -
where:
/d2 is the mount point.
1 is the fsck pass number.
yes indicates mount at boot.
- (which specifies none) is the mount option.
Refer to the vfstab man pages for more information.
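After editing /etc/vfstab, a quick field-count check can catch malformed entries before the next boot. This check is our addition, not part of the original procedure; each non-comment vfstab line should have exactly seven fields:

```shell
# check_vfstab: report any non-comment, non-blank line in a
# vfstab-style file that does not have the expected seven fields.
check_vfstab() {
    awk '!/^#/ && NF > 0 && NF != 7 {
        printf "line %d has %d fields (expected 7)\n", NR, NF
    }' "$1"
}

# Typical use on the host:
# check_vfstab /etc/vfstab
```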
If you have not yet created UFSs, refer to New Installation of PowerPath and UFS on page 20-14. If you have already created UFSs, refer to UFS Reconfiguration on page 20-15.
This section assumes you have superuser privileges and can access the Solaris AnswerBook online documentation.
To install PowerPath and create UFSs:
1. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. Determine which PowerPath emcpower devices you will use as file system devices. Refer to the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpower#p device links. Run /etc/powermt display dev=all to display the emcpower#p device links.
3. Partition the selected volumes using the Solaris format utility.
4. Create file systems on the selected PowerPath emcpower devices, using the appropriate utilities for the type of file system you will use. For the standard Solaris UNIX file system, enter:
newfs /dev/rdsk/emcpowerNp
where N is the emcpower device instance of the selected volume, and p is the partition identifier in the range a-h. PowerPath partitions a-h correspond to sd slices 0-7.
5. Create mount points for the new file systems.
6. Install the file systems into /etc/vfstab. Make sure you set the mount at boot field to yes.
7. Mount all volumes defined in /etc/vfstab. Enter:
mountall
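Putting steps 4 through 6 together, the resulting /etc/vfstab entry might look like the following. The device name emcpower3g and mount point /fs/data are assumptions for illustration, not values from the original procedure:

```
/dev/dsk/emcpower3g  /dev/rdsk/emcpower3g  /fs/data  ufs  1  yes  -
```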
UFS Reconfiguration
To reconfigure existing UFSs for PowerPath:
1. Find all UFS device link names (files named /dev/[r]dsk/c#t#d#s#) by looking in /etc/vfstab.
2. Stop all applications that are using the c#t#d#s# devices to be replaced by PowerPath emcpower devices.
3. Unmount all file systems on the c#t#d#s# devices.
4. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
5. Match the sd device link names found in step 1 with PowerPath device link names (files named /dev/[r]dsk/emcpowerNp) by running /etc/powermt display dev=all. Refer to the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpower#p device links.
6. Make a backup copy of the /etc/vfstab file.
7. Edit /etc/vfstab, replacing each instance of an sd device link named /dev/[r]dsk/c#t#d#s#, except the internal boot device, with the corresponding emcpower#p device link name.
CAUTION
Do not modify the internal boot disk device entry. If you do, the host will not boot.
PowerPath partitions a-h correspond to sd slices 0-7.
8. Mount all volumes defined in /etc/vfstab. Enter:
mountall
Verify that all file systems defined in /etc/vfstab are mounted correctly. If you have a problem with any file system after this step, restore the original /etc/vfstab file and reboot to restore UFS service. Review your steps and try again.
If you have not yet set up Solstice DiskSuite's UFS logging facility, refer to New Installation of PowerPath and DiskSuite UFS Logging on page 20-16. If you have already set up the Solstice DiskSuite logging facility, refer to DiskSuite UFS Logging Reconfiguration on page 20-18.
This section assumes you have superuser privileges and can access the Solaris AnswerBook online documentation. It also assumes you are familiar with the format(1M), newfs(1M), and vfstab(4) manual pages.
To install PowerPath and set up Solstice DiskSuite's UFS logging facility:
1. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. Determine which PowerPath emcpower devices you will use as Solstice DiskSuite metadevices with UFS logging. Refer to the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpowerNp device links. Run /etc/powermt display dev=all to display the emcpowerNp device links.
3. Partition the PowerPath emcpower devices by selecting them in the Solaris format utility. Be sure to create partitions for UFS logging devices as well as for UFS master devices.
4. Install Solstice DiskSuite if it has not already been installed, and initialize the metadatabase replicas. Refer to the relevant Solstice DiskSuite documentation.
5. Edit the /etc/opt/SUNWmd/md.tab file, adding the Solstice DiskSuite metatrans devices and corresponding emcpower devices. For example, add the following line to create a metatrans device with UFS logging:
d64 -t emcpower0g emcpower1b
where:
emcpower0g is the master device component of the metatrans device.
emcpower1b is the UFS logging component of the metatrans device.
6. Create the metatrans devices defined in /etc/opt/SUNWmd/md.tab. For example, enter:
metainit d64
7. Create mount points for each metatrans device you created in step 6.
8. Create file systems on the metatrans devices, using newfs. For example, enter:
newfs /dev/md/rdsk/d64
9. Install the file systems in /etc/vfstab, specifying /dev/md/[r]dsk/d# (where # is the metadevice number) for the raw and block devices. Make sure you set the mount at boot field to yes.
CAUTION
If /etc/vfstab is edited incorrectly, the host may not boot. Refer to the vfstab man pages for more information.
10. Mount all volumes defined in /etc/vfstab. Enter:
mountall
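For the d64 metatrans example in step 5, the step 9 vfstab entry might look like the following (the mount point /fs/log is an assumption for illustration):

```
/dev/md/dsk/d64  /dev/md/rdsk/d64  /fs/log  ufs  1  yes  -
```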
DiskSuite UFS Logging Reconfiguration
To reconfigure an existing Solstice DiskSuite UFS logging facility for PowerPath: 1. Make a list of the DiskSuite metatrans devices for all existing UFS logging file systems, by looking in the /etc/vfstab file. You also can run the following command to display a list of metatrans devices configured in the system:
metastat | grep Trans
Make sure all configured metatrans devices are correctly set up in /etc/opt/SUNWmd/md.tab. If they are not set up, set them up before continuing. Save a copy of md.tab.
2. Stop all applications running on all metatrans devices (determined in step 1 from the metastat command).
3. Unmount all file systems on the metatrans devices (determined in step 1 from the /etc/vfstab file).
4. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
5. Match the device names found in step 1 with sd device link names (files named /dev/[r]dsk/c#t#d#s#), with the metastat command.
6. Match the sd device link names found in step 5 with PowerPath device link names (files named /dev/[r]dsk/emcpowerNp), by running /etc/powermt display dev=all. Refer to the worksheet (Table 20-1 on page 20-5 and Table 20-2 on page 20-6) on which you collected system information. It should list the /dev/[r]dsk/c#t#d#s# device links with the corresponding /dev/[r]dsk/emcpower#p device links.
7. Edit /etc/opt/SUNWmd/md.tab, replacing each c#t#d#s# device link name, except internal boot devices, with the corresponding emcpower#p device link name.
CAUTION Do not replace any c#t#d# devices in the internal boot device partitions. If you do, the host will not boot.
8. Run metaclear to delete all volumes defined in md.tab, except the internal boot device volumes.
CAUTION
To avoid a system crash, do not delete the c#t#d# devices in the internal boot device partitions.
9. Run metainit to recreate all DiskSuite metadevices replaced by emcpower devices in the md.tab file, except the internal boot device volumes.
10. Mount all volumes defined in /etc/vfstab. Enter:
mountall
Before making these changes on a Solaris 8 host, you must install the 108901-03 patch, which is available on the Sun Microsystems Web site (http://www.sun.com).
New kernel stack size parameters do not take effect until the next boot. To avoid a stack overflow panic, reboot as soon as possible.
- Install and configure PowerPath with the UFS. Refer to UNIX File System (UFS) on page 20-14.
- Install PowerPath. Install Solstice DiskSuite, if not already installed. Create or reconfigure Solstice DiskSuite metadevices on emcpower devices. Create and mount file systems on the Solstice DiskSuite metadevices. Refer to Solstice DiskSuite 4.2 on page 20-8.
- Install PowerPath. Install VERITAS Volume Manager, if not already installed. Create or reconfigure the VERITAS volumes on emcpower devices. Create and mount file systems on the VERITAS volumes. Refer to the EMC PowerPath for UNIX Version 3.0 Release Notes.
Then you can export the file systems as shared Network File Systems (NFS), as described in this section. You should have superuser privileges and access to the Solaris AnswerBook online documentation. You should also be familiar with the format(1M), newfs(1M), vfstab(4), dfstab(4), and share(1M) commands. Refer to the relevant Solaris system administration documentation.
To export file systems as shared Network File Systems:
1. Determine the file system you want to export as a shared network file system.
2. In the /etc/dfs/dfstab file, define the file system to be exported. For example, enter:
share -F nfs -o rw /fs/customer
where /fs/customer is a mounted UFS that was created on a PowerPath emcpower device, a Solstice DiskSuite metadevice with an emcpower device, or a VERITAS volume with an emcpower device. 3. Repeat step 2 for each file system you want to export.
4. Stop the NFS server. Enter:
/etc/init.d/nfs.server stop
6. Run share to verify that the file systems were exported over NFS.
Oracle
This section describes how to install and set up Oracle Version 8 for use with emcpower devices. The section consists of two parts:
- New Installation of PowerPath and Oracle on page 20-23 describes what to do if Oracle is not yet installed.
- Oracle Reconfiguration on page 20-26 describes what to do if Oracle is already installed.
This section assumes you have superuser privileges and can access the Solaris AnswerBook online documentation. You should have the relevant Oracle documentation available for reference.
Follow one of the procedures in this section if you are installing Oracle and PowerPath for the first time:
- Refer to If Using a File System below if you plan to use Oracle with a file system.
- Refer to If Using Raw Partitions on page 20-24 if you plan to use Oracle with raw partitions.
If using a file system:
1. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. Create and mount Oracle file systems on one or more PowerPath partitions. Oracle8 requires four mount points for an OFA-compliant installation, one for software and three for data devices.
3. Install Oracle to a file system, following the instructions in the relevant Oracle8 documentation. During the installation, you are asked to name mount points. Supply the mount points of the file systems you created on the PowerPath partitions.
If Using Raw Partitions
If using raw partitions:
1. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. Use Sun's admintool to add the oracle user and dba group, or create the Oracle Software Owner user in the server's local /etc/passwd file.
3. Complete the Oracle pre-installation tasks described in the relevant Oracle8 documentation:
a. Install Oracle on a file system residing on a PowerPath partition.
b. Set up the Oracle user's ORACLE_BASE and ORACLE_HOME environment variables to be directories of this file system.
c. Create two more PowerPath-resident file systems on two other PowerPath volumes.
d. Make sure each of the three device mount points has a subdirectory named oradata. This subdirectory is used as a control file and redo log location for the Installer's Default Database. Oracle recommends using raw partitions for redo logs.
e. To use PowerPath raw partitions as redo logs, create symbolic links from the three redo log locations to PowerPath raw device links (files named /dev/rdsk/emcpowerNp, where N is the PowerPath instance number and p is the partition ID), which point to partitions of the appropriate size.
4. Determine which PowerPath emcpower devices you will use as Oracle database devices.
5. Partition the selected devices, using the Solaris format utility. If PowerPath raw partitions are to be used by Oracle as database devices, be sure to leave disk cylinder 0 of the associated volume unused. This protects UNIX disk labels from corruption by Oracle.
6. Make sure the Oracle Software Owner has read and write privileges to the selected PowerPath raw partition device files under the /devices directory. Ensure that the group ownership for these devices is dba.
7. Set up symbolic links in the oradata directory, under the first of the three mount points created in step 2. Link the database files control01[1-3].ctl, redosid0[1-3].log, system01.dbf, rbs01.dbf, temp01.dbf, users01.dbf, and tools01.dbf to PowerPath raw device links (files named /dev/rdsk/emcpowerNp) pointing to partitions of the appropriate size. sid is the name of the database you are creating; the default is test.
8. Install the Oracle8 Server, following the instructions in the Oracle8 installation documentation:
   a. When you run orainst /m, make sure you are logged in as the Oracle Software Owner.
   b. Select Install New Product - Create Database Objects.
   c. Select Raw Devices as the storage type.
   d. Specify the raw device links set up in steps 2-5 for the redo logs and database files of the default database.
9. To set up other Oracle8 databases, set up the parameter file, control files, redo logs, and database files. Follow the guidelines in the relevant Oracle8 for UNIX administration documentation. Make sure any raw devices and file systems you set up reside on PowerPath volumes.
10. Launch svrmgrl. Log in using the user account and password designated during the install.
11. If desired, use the create tablespace SQL command to create a new tablespace.
Oracle Reconfiguration
Follow one of the procedures in this section to reconfigure an existing Oracle installation for PowerPath:
- Refer to If Using a File System below if you plan to use Oracle with a file system.
- Refer to If Using Raw Partitions on page 20-27 if you plan to use Oracle with raw partitions.
If Using a File System

If using a file system:

1. Record the SysV partition identifier(s) for the partitions where the Oracle file system(s) reside. The SysV partition identifiers have the form c#t#d#s#. You can get this information from /etc/vfstab once you know the location of the Oracle files. Your database administrator can give you the location or, if you use OFA methodologies, you can check for directories with the name oradata.
2. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
3. As superuser, change to the directory where you installed the PowerPath utilities.
4. Find the PowerPath identifier(s) that match the SysV partition identifier(s):
   a. Enter:
powermt display dev=0
   b. Check the I/O Path field in the device display for a string in the form c#t#d# that matches the SysV identifier where the Oracle files are located. (Refer to the PowerPath Product Guide for information on the fields in powermt output.) For example, if the Oracle files are on c1t3d2s2, look for c1t3d2. If you find it, you know /dev/dsk/emcpower0c is the same as /dev/dsk/c1t3d2s2. emcpower partition identifiers end in a through h rather than s0, s1, s2, and so on.
   c. Write this matching identifier down. If you do not see it, enter:
powermt display dev=1
   d. Check the display.
   e. Continue, if necessary, with dev=2, dev=3, and so on until you find all the partition matchups.
5. Mount the file systems, following the instructions in the Oracle8 installation documentation. Use PowerPath partition identifiers rather than the original Solaris identifiers. For example, if you found emcpower2c to be the PowerPath identifier for c1t3d2s2, enter:
mount /dev/dsk/emcpower2c /oracle/mp1
rather than:
mount /dev/dsk/c1t3d2s2 /oracle/mp1
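The substitution can be sketched as a shell helper applied to a copy of /etc/vfstab rather than the live file; the identifiers in the usage comment are the hypothetical ones from the example above:

```shell
#!/bin/sh
# Replace a SysV partition identifier with its emcpower equivalent in a
# vfstab-format file, keeping a backup of the original.
swap_vfstab_device() {
    vfstab=$1
    old=$2      # e.g. c1t3d2s2
    new=$3      # e.g. emcpower2c
    cp "$vfstab" "$vfstab.orig"
    sed -e "s|/dev/dsk/$old|/dev/dsk/$new|g" \
        -e "s|/dev/rdsk/$old|/dev/rdsk/$new|g" \
        "$vfstab.orig" > "$vfstab"
}

# Hypothetical usage: swap_vfstab_device /tmp/vfstab.copy c1t3d2s2 emcpower2c
```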
If Using Raw Partitions

This section describes how to reconfigure Oracle for PowerPath. If your Oracle8 installation accesses Solstice DiskSuite metadevices and not sd devices, refer instead to Solstice DiskSuite 4.2 on page 20-8. If your application accesses VERITAS logical volumes, refer instead to the EMC PowerPath for UNIX Version 3.0 Release Notes.
All Oracle8 control, log, and data files are accessed either directly from mounted file systems or via links from the oradata subdirectory of each Oracle mount point set up on the server. As a result, converting an Oracle installation from sd devices to PowerPath is a two-part process:
- Changing the Oracle mount points' physical devices in /etc/vfstab from sd device partition links to the PowerPath device partition links that access the same physical partitions
- Recreating any links to raw sd device links to point to raw PowerPath device links that access the same physical partitions
To reconfigure Oracle for PowerPath:

1. Back up your Oracle databases, including all database files, control files, and redo logs.
2. Map sd device files used by Oracle to PowerPath device files. You can use the worksheet in Table 20-1 on page 20-5 and Table 20-2 on page 20-6 or create your own worksheet with the following columns:
   - Oracle Device Link
   - Actual Device Node
   - File Attributes (Owner, Group, Permissions)
   - PowerPath Device Node
   - PowerPath Device Link
3. Obtain the sd device names for the Oracle mounted file systems:
   a. Look up the Oracle mount points in /etc/vfstab.
   b. Extract the corresponding sd device link name, for example, /dev/dsk/c1t1d0s5.
4. Write the sd device names in the Oracle Device Link column.
5. Launch sqlplus with user name system and password manager.
6. List the locations of all data files used by Oracle. Enter:
select tablespace_name,file_name from sys.dba_data_files;
7. Determine the underlying device on which each data file resides. You can obtain this information by looking up mounted file systems in /etc/vfstab (as in step 3) or extracting raw device link names directly from the output of the select command.
8. Write the underlying device names in the Oracle Device Link column.
9. Fill in the Actual Device Node column by running ls -l on each device link listed in the Oracle Device Link column and extracting the link source device filename.
10. Fill in the File Attributes columns by running ls -l on each node in the Actual Device Node column.
11. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
12. Fill in the PowerPath Device Links column by matching each c#t#d#s# device link listed in the Oracle Device Link column with its associated emcpowerN device link name, by running /etc/powermt display dev=all. emcpowerNp partition names use the letters a-h in the p position to indicate slices 0-7 in the corresponding c#t#d#s# slice names.
13. Fill in the PowerPath Device Nodes column by running ls -l on each PowerPath device link and tracing back to the link source file.
14. Check your completed worksheet for accuracy. It should look similar to the following sample:

   Oracle Device Link   Actual Device Node   Owner    Group   Permissions   PowerPath Device Node   PowerPath Device Link
   ...                  ...                  oracle   dba     644           ...                     ...
15. Change the attributes of each node in the PowerPath Device Node column to match the attributes in the File Attributes columns. Use the UNIX chown, chgrp, and chmod commands.
16. Make a copy of the /etc/vfstab file.
17. Edit /etc/vfstab, changing each Oracle device link to its corresponding PowerPath device link.
18. For each link found in an oradata directory, recreate the link using the appropriate PowerPath device link as the source file instead of the associated sd device link listed in the Oracle Device Link column. As you perform this step, generate a reversing shell script that can restore all the original links in case of error.
19. Reboot the system.
20. Verify that all file system and database consistency checks pass.
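Step 18 can be sketched as follows; the link and device names in the usage comment are hypothetical, and the undo file collects one restore command per relinked file:

```shell
#!/bin/sh
# Repoint an oradata symbolic link at a PowerPath device link, appending a
# command that restores the original target to a reversing script.
relink() {
    link=$1
    new_target=$2
    undo=$3
    old_target=$(readlink "$link")
    printf 'ln -sf %s %s\n' "$old_target" "$link" >> "$undo"
    ln -sf "$new_target" "$link"    # -f replaces the existing link
}

# Hypothetical usage:
# relink /oracle/mp1/oradata/system01.dbf /dev/rdsk/emcpower5a /tmp/undo_links.sh
```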
Sybase
This section describes how to install and set up Sybase Version 11.0.2 for use with emcpower devices. The section consists of two parts:
- New Installation of PowerPath and Sybase describes what to do if neither Sybase nor PowerPath is installed.
- Sybase Reconfiguration on page 20-31 describes how to reconfigure Sybase for PowerPath.
This section assumes you have superuser privileges and can access the Solaris AnswerBook online documentation. Have the relevant Sybase documentation available for reference.
New Installation of PowerPath and Sybase

If neither Sybase nor PowerPath is installed:

1. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
2. Create the Sybase System Administrator user.
3. Determine which PowerPath emcpower devices you will use as Sybase database devices.
4. Partition the selected volumes, using the Solaris format utility. If PowerPath raw partitions are to be used by Sybase as database devices, leave disk cylinder 0 of the associated volume unused. This protects UNIX disk labels from corruption by Sybase. Refer to the relevant Sybase documentation. Make sure the Sybase System Administrator has read and write privileges to the selected PowerPath raw partitions, or change the ownership of the raw partitions to sybase.
5. Make sure you are logged in as the Sybase System Administrator. Then install and configure Sybase. Refer to the relevant Sybase documentation:
   a. Specify an appropriate PowerPath raw partition as the Sybase master device.
   b. Specify appropriate PowerPath raw partitions as the Sybase sybsystemprocs and sybsecurity database devices.
6. Initialize PowerPath devices for use as Sybase database devices using the disk init command, as described in the relevant Sybase administration documentation.
7. Specify the PowerPath device link name in the physname argument.

These PowerPath-based database devices can now be set up for use as any type of Sybase system or user database.
Sybase Reconfiguration
Follow the instructions in this section if you already installed Sybase and need to reconfigure it for PowerPath. If your Sybase installation accesses Solstice DiskSuite metadevices and not sd devices, refer instead to Solstice DiskSuite 4.2 on page 20-8. If your installation accesses VERITAS logical volumes, refer instead to the EMC PowerPath for UNIX Version 3.0 Release Notes.
The process described in this section can take 30 minutes to 3 hours to complete, depending on the number of database devices used by Sybase and the time required for Sybase to initialize itself. PowerPath does not support tape dump devices or conversion of an existing master database or sybsecurity device from an sd device to a PowerPath device.

To reconfigure Sybase for PowerPath:

1. Dump your Sybase databases. Include system tables residing on the master, sybsystemprocs, and sybsecurity devices.
2. Create a worksheet with the columns and headings shown below.
   - Sybase Device Name
   - Sybase Physical Name
   - Actual Device Node
   - File Attributes (Owner, Group, Permissions)
   - PowerPath Device Node
   - PowerPath Device Link
3. Log in as the Sybase System Administrator.
4. Launch isql.
5. List the Sybase device names of all Sybase database devices in column 1 of the worksheet, and list the Sybase physical name in column 2 of the worksheet. Use the sp_helpdevice system procedure. Each Sybase physical name is a symbolic link in the form /dev/[r]dsk/c#t#d#s#.
6. Fill in the Actual Device Node column by running ls -l on each link and obtaining the name of its source file.
7. Fill in the File Attributes columns by running ls -l on each actual device node. You need the owner, group, and permissions fields.
8. Install PowerPath (refer to Chapter 16, Installing PowerPath on a Solaris Host).
9. Fill in the PowerPath Device Link column by matching each c#t#d#s# device link listed in the Sybase Physical Name column with its associated emcpower#p device link name, by running /etc/powermt display dev=all.
emcpower#p partition names use the letters a-h in the p position to indicate slices 0-7 in the corresponding /dev/[r]dsk/c#t#d#s# slice names.
10. Fill in the PowerPath Device Nodes column by running ls -l on each PowerPath device link and tracing back to the link source file.
11. Check your completed worksheet for accuracy. Its entries should be similar to the following:
   Sybase Device Name:     sysprocsdev
   Sybase Physical Name:   /dev/rdsk/c1t3d1s1
   Actual Device Node:     /devices/sbus@1f,0,1000/sd@3,1:b,raw
   Owner:                  sybase
   Group:                  sybase
   Permissions:            crw-r-----
   PowerPath Device Node:  /devices/pseudo/emcpower@4:b,raw
   PowerPath Device Link:  /dev/rdsk/emcpower4b
12. Change the file attributes of each node in the PowerPath Device Node column to match the attributes in the File Attributes columns, using the UNIX chown, chgrp, and chmod commands.
13. Change the master device in the $SYBASE/install/RUN_servername file to the associated emcpowerNp device link name.

For the remaining steps in this process, enlist the help of your Sybase Security Officer. Record the isql update procedures in these steps as a script file. Construct a reverse script file to restore the original sd physical names in case of error.
15. Stop all Sybase clients and transactions.
16. Shut down the database server, and restart it in single user mode.
17. For each row of your worksheet, enter:
update sysdevices set phyname = PowerPath_device_link_column_entry
go
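The per-row updates of step 17 can be generated from a worksheet file. The two-column file format (Sybase device name, PowerPath device link) and the where clause on the device name are assumptions for illustration; check the generated batch before feeding it to isql:

```shell
#!/bin/sh
# Emit one update/go batch per worksheet row. Input: whitespace-separated
# lines of "sybase_device_name powerpath_device_link".
gen_updates() {
    awk '{
        printf "update sysdevices set phyname = \"%s\" where name = \"%s\"\ngo\n", $2, $1
    }' "$1"
}

# Hypothetical usage: gen_updates /tmp/worksheet.txt > /tmp/update_phynames.sql
```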
18. Enter:
commit
sp_configure allow updates, 0
go
shutdown
go
19. Restart Sybase.
20. Verify that all database consistency checks pass.
21
PowerPath Administration on Solaris
This chapter discusses PowerPath issues and administrative tasks specific to Solaris. Throughout this chapter, many procedural steps use powermt commands. For detailed descriptions of these commands, refer to the PowerPath Product Guide. This chapter covers the following topics:
- PowerPath Feature Summary ........................................................21-2
- Booting a Solaris Host .....................................................................21-3
- Boot Device Support........................................................................21-3
- R1/R2 Boot Failover Support.........................................................21-3
- Device Naming.................................................................................21-5
- Reconfiguring PowerPath Devices Online...................................21-8
- Dynamic Reconfiguration.............................................................21-10
- powercf Configuration Utility......................................................21-13
- Error Messages ...............................................................................21-16
PowerPath Feature Summary on Solaris

Feature (Supported on Solaris?):
- I/O load balancing
- I/O failover
- IOCTL load balancing
- IOCTL failover
- Install without reboot
- Upgrade without reboot
- Upgrade without uninstall
- Deinstall without reboot
- Boot from PowerPath device
- Cluster support
- Fibre Channel support
- powermt utility
- powercf utility
- Enterprise consistency groups
- Add PowerPath devices online
Solaris 2.6
R1/R2 Supported Configurations
EMC supports the following specific R1/R2 configurations:
- Each boot host is connected to only one Symmetrix.
- All R1 devices reside on one Symmetrix, Symmetrix A, and are visible only to a single host, Host A. All R2 devices reside on a separate Symmetrix, Symmetrix B, and are visible only to the identical host in reserve, Host B.
- Each R1 device has only one mirror. (Concurrent SRDF is not supported.)
- SymCLI 4.3 is installed as the default version of SymCLI on the boot disk.
- SRDF is managed from either of the following two facilities:
  - EMC Control Center Management Server
  - Symmetrix Service Processor
After installing PowerPath, if you plan to enable R1/R2 boot disk failover, run the powercf -Z command while booted on the R1 copy of the boot disk.
Device Naming
PowerPath for Solaris presents PowerPath-enabled storage system logical devices to the operating system by all their native devices plus a single PowerPath-specific pseudo device. Applications and operating system services can use any of these devices, native or pseudo, to access a PowerPath-enabled storage system logical device.
Native Device
A native device describes a device special file of one of the following forms:
- /dev/dsk/c#t#d#s#
- /dev/rdsk/c#t#d#s#

where:

- c# is the instance number for the interface card.
- t# is the target address of the storage system logical device on the bus.
- d# is the storage system logical device at the target.
- s# is the slice, ranging from 0 to 7.
Pseudo Device
A pseudo device describes a device special file of one of the following forms:
- /dev/dsk/emcpowerNp
- /dev/rdsk/emcpowerNp

where:

- N is the PowerPath pseudo device instance number.
- p is the partition (slice) letter, ranging from a to h.
Slices in Sys V identifiers are designated s0, s1, s2, and so on. They correspond exactly to emcpower slices designated a, b, c, and so on. Therefore, if device c0t0d0 corresponds to device emcpower0, slice c0t0d0s2 corresponds to slice emcpower0c.
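The correspondence can be captured in a small helper; a sketch:

```shell
#!/bin/sh
# Map a SysV slice suffix (s0-s7) to the matching emcpower slice letter
# (a-h), per the correspondence described above.
slice_letter() {
    case $1 in
        s0) echo a ;;  s1) echo b ;;  s2) echo c ;;  s3) echo d ;;
        s4) echo e ;;  s5) echo f ;;  s6) echo g ;;  s7) echo h ;;
        *)  return 1 ;;
    esac
}

# Example: c0t0d0s2 corresponds to emcpower0c
# slice_letter s2   # prints: c
```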
Selecting a Device Naming Convention
After PowerPath is installed, a host has both native devices and emcpower devices enabled and available for use. Both native devices and emcpower devices can be active simultaneously on a host. Native devices are preferable for most installations. Native devices:

- Provide full support for VERITAS Volume Manager, including both sliced and simple disks.
- Eliminate the need to modify applications to provide PowerPath multipathing and path failover functionality.
- Allow applications such as a volume manager or database to directly access PowerPath logical devices through native Solaris disk devices.

emcpower devices offer the following advantages:

- Multipathed storage system logical devices appear only once within the pseudo device name space. Pseudo device names follow storage system logical device serial numbers. (Native device names are based on HBA, target, and logical device.)
- Implementing the boot-time boot path failover feature of PowerPath requires emcpower devices. The mount -o remount of the root device for read/write during single-user startup fails if the unaliased value of the boot-device eeprom variable does not match the mount device specified for / in /etc/vfstab. Specifying a rootdev entry with an emcpower device in /etc/system overcomes the remount problem.
- Sun's Dynamic Reconfiguration (DR) feature works transparently with emcpower devices. Refer to Dynamic Reconfiguration on page 21-10 for information on using DR to add and remove HBAs in a PowerPath environment.
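The rootdev entry mentioned above is a line in /etc/system; the device path below is a hypothetical example (the actual path comes from the emcpower device node under /devices/pseudo):

```
* /etc/system entry (hypothetical device path) directing the root
* device to an emcpower pseudo device
rootdev:/pseudo/emcpower@0:a
```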
Table 21-2 summarizes the functional differences between native devices and emcpower devices in the Solaris environment.
Table 21-2
Native Devices versus emcpower Devices

Function: Booting
- Native device: Does not support backup boot path.
- emcpower device: Supports backup boot path.

Function: Reboot (no reconfiguration)
- Native device: No impact.
- emcpower device: No impact.

Function: Reboot (reconfiguration)
- Native device: If a path is missing, PowerPath does not create a replacement c#t#d# device; the missing c#t#d#s# paths are removed.

Function: IOCTL deterministic path selection
- Native device: PowerPath selects the specific path. When the No Redirect load-balancing policy is set, native devices deliver I/O to the device to which it would go if PowerPath were not installed. If the path has failed, the I/Os will fail.
- emcpower device: No. PowerPath selects an arbitrary path. When the No Redirect load-balancing policy is set, pseudo devices select a configured path to deliver I/O. All subsequent I/O is delivered to that path. If the selected path has failed, the I/O to the emcpower device also will fail.

The table also compares I/O failover, I/O load balancing, support for VxVM sliced disks, support for VxVM simple logical devices, support for DMP interaction (a), and DR transparency, noting partial support on native devices and full support on emcpower devices where applicable.
a. You can manage a CLARiiON storage system with either PowerPath or DMP, but not with both.
Reconfiguring PowerPath Devices Online

Reconfigure PowerPath devices after any of the following changes:

- Adding or removing HBAs
- Adding, removing, or changing storage system logical devices
- Changing the cabling routes between HBAs and storage system ports
- Adding or removing storage system interfaces
CAUTION: If you are trying to recover from a SCSI bus ID conflict, you must reboot the host before completing the following procedure. The host reboot assures the integrity of the underlying SCSI driver layers so that PowerPath can find all devices on the bus.

To reconfigure PowerPath devices:

1. Update the /kernel/drv/sd.conf file to include target/logical device entries for all multipath storage system logical devices.
2. Create the device nodes. Enter the appropriate command:
   On hosts running Solaris 7 and 8, enter:

   devfsadm -c

   On hosts running Solaris 2.6, enter:

   drvconfig;disks;devlinks
3. Use the format command to verify that all devices were created. 4. Verify that the emcpower devices are accessible. For example, enter:
format /dev/rdsk/emcpower1a
5. At the format prompt, enter inquiry. The screen displays the emcpower device's inquiry data; for example:
Vendor:   EMC
Product:  SYMMETRIX
Revision: 5x6x
format>
6. Enter quit to end the format process.
7. Create the new device nodes. Enter:
powercf -q
Dynamic Reconfiguration
The Solaris Dynamic Reconfiguration (DR) feature allows you to add or remove an HBA from a Solaris system while it continues to run. You can logically attach and detach system boards from the operating system without halting and rebooting it. For example, with DR you can detach a board from the operating system, physically remove and service the board, and then re-insert the board and re-attach it to the operating system. You can do all of this without halting the operating system or terminating any user application.

PowerPath supports DR. The following procedures describe how to use DR to add and remove HBAs in a PowerPath environment. As you perform these procedures, have available the Sun Dynamic Reconfiguration documentation for your platform.
If you have a custom PowerPath configuration that you have not yet saved, run powermt save before completing the procedures in this section, to save your configuration changes. Run powermt load after completing these procedures to restore your configuration.
To use DR to add an HBA to a Solaris system in a PowerPath configuration:

1. Add the new HBA to the system, following the instructions in the Sun Dynamic Reconfiguration documentation.
2. Configure the new HBA. Enter:
powermt config
To use DR to remove an HBA from a Solaris host in a PowerPath configuration:

1. Correlate the c#t#d#s# device special file of the HBA being removed with the PowerPath adapter number for that HBA. The PowerPath adapter number is used in the powermt remove adapter command later in this process.

   On 10000 class systems, start the dr shell. Enter:
dr
The prompt changes to dr>. From within the dr shell, list the devices and corresponding c#t#d#s# device special files on the I/O board being removed. For example, for I/O board 1, enter:
drshow 1
drshow displays all device special files that point to HBAs on the I/O board. In the example above, board 1 Slot 0 has a single device (HBA) attached named c2t0d1s2.
Associate the device special file (identified above with drshow) with a PowerPath adapter number. Enter:
powermt display dev=all
In the output, locate device c2t0d1. Notice that the adapter number for c2t0d1 is 0. Therefore, 0 is the adapter number that you would use as an argument to powermt remove adapter in the next step.

2. Use powermt remove adapter to remove the HBA from the PowerPath configuration. Enter:
powermt remove hba=#
where # corresponds to the PowerPath adapter number identified in step 1. 3. Disconnect the HBA following the instructions in the Sun Dynamic Reconfiguration documentation.
powercf Configuration Utility

Run powercf after any of the following changes:

- Adding or removing HBAs
- Adding, removing, or changing storage system logical devices
- Changing the cabling routes between HBAs and storage system ports
- Adding or removing storage system interfaces
Refer to Reconfiguring PowerPath Devices Online on page 21-8 for instructions on reconfiguring PowerPath devices on Solaris.
The powercf utility resides in the /etc directory. You must have superuser privileges to use powercf. To run powercf on a Solaris host, type the command, plus any options, at the shell prompt.
emcp.conf File
The /kernel/drv/emcp.conf file lists the primary and alternate path to each storage system logical device and the storage system device serial number for that logical device. The powercf -i and powercf -q commands update the existing emcp.conf file or create a new one if one does not already exist.
Syntax

powercf -i|p|q

All versions of powercf scan HBAs for single-ported and multiported storage system logical devices and compare those logical devices with PowerPath device entries in emcp.conf.

Arguments
-i
Runs powercf in interactive mode, prompting you to accept or reject any addition or deletion of storage system devices in emcp.conf. If powercf -i discovers any PowerPath device listed in emcp.conf but not in the storage system, it displays the entry and asks whether you want to delete it from emcp.conf. For example:
Could not validate the entry:
---------------------------------------
emcpower40: volume ID = 05501057
---------------------------------------
Would you like to remove this entry from /kernel/drv/emcp.conf? [y/n]: y
removing emcpower40
If powercf -i discovers any logical devices (volumes) in the storage system but not in emcp.conf, it asks whether you want to add the entry to the file. For example:
Could not find config file entry for:
---------------------------------------
volume ID = 0550100C
---------------------------------------
Would you like to add this volume to /kernel/drv/emcp.conf? [y/n]: y
adding emcpower2
If you add a PowerPath device, powercf saves a primary path to the device. After you review all proposed emcp.conf changes and approve any of them, powercf asks whether to write the new file. If you answer n (no), powercf exits. If you answer y (yes), powercf rewrites emcp.conf, saves a backup copy of any existing emcp.conf file as emcp.conf.bak, and asks whether you want to display emcp.conf. If you answer y (yes), it displays the file with the more command.
-p
Prints information on any inconsistencies between the storage system logical devices found in the HBA scan and the emcpower device entries in emcp.conf.
powercf -p does not create a new emcp.conf file or change an existing one. powercf -p also does not create a backup copy of the existing emcp.conf file; therefore, keep a backup copy of emcp.conf on diskette or tape, for use in case of system disk failure.
-q

Runs powercf in quiet mode. Updates the emcp.conf file by removing PowerPath devices not found in the HBA scan and adding new PowerPath devices that were found. Saves a primary and an alternate path to each PowerPath device.
powercf -q runs automatically during PowerPath installation.
CAUTION: Running powercf -q on a host with failed adapter paths or devices may result in PowerPath devices being removed from emcp.conf. Do not run powercf -q if you are unsure of the state of your system. Instead, use powercf -i to run interactively.
CAUTION If powercf -q changes the pseudo device configuration, there may be applications that need to update configurations accordingly. For example, applications dependent on the SymAPI database may require running symcfg discover after powercf -q. It may be necessary to reboot prior to running symcfg discover in order to activate pseudo device mapping changes in the kernel.
Error Messages
PowerPath reports any errors, diagnostic messages, and failover recovery messages to the system console and to the file /var/adm/messages. Refer to the PowerPath Product Guide for a complete list of PowerPath error messages.
PART 5
Appendixes
This section contains supplementary information about PowerPath.

Appendix A, PowerPath Patches
This appendix describes how to identify and obtain patch releases of PowerPath.

Appendix B, Customer Support
This appendix describes how to obtain help for resolving software problems.
A
PowerPath Patches
This appendix describes how to identify and obtain PowerPath patch releases. The appendix covers the following topic:
- PowerPath Patches
PowerPath Patches
PowerPath patches are released as required and are available from the PowerPath anonymous ftp site. Typically, each platform has different patch levels (including none). Identifying Patches, which follows, describes the numbering scheme that identifies patch levels.
Identifying Patches
PowerPath releases are identified by a three-digit version number:

Major.Minor.Patch

In a full product release, Patch is zero (0). For example: 3.0.0

Patch file names are formatted as follows:
EMCPower.Platform.Major.Minor.Patch[.Build].tar.Z
where:

- Platform is one of the following: AIX, HP, DIGITAL (for Tru64 UNIX), SOLARIS, LINUX, NT4, W2000, or NETWARE.
- Major is a one-digit major version number.
- Minor is a one-digit minor version number.
- Patch is a one-digit patch number.
- Build is an optional build number.

For example:
EMCPower.SOLARIS.3.0.1.b005.tar.Z
Installing Patches
A PowerPath patch can be applied only to a full release or an earlier (lower numbered) patch release with the same Major and Minor number. The patch's Readme file describes the contents of the patch and provides installation instructions.
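The applicability rule can be sketched as a shell check; the version strings follow the three-digit Major.Minor.Patch form described above (optional build suffixes are not handled):

```shell
#!/bin/sh
# Succeed (exit 0) if a patch release applies to the installed release:
# same Major.Minor, strictly higher Patch number.
patch_applies() {
    installed=$1
    patch=$2
    inst_mm=${installed%.*}     # Major.Minor of the installed release
    inst_p=${installed##*.}     # Patch of the installed release
    pat_mm=${patch%.*}
    pat_p=${patch##*.}
    [ "$inst_mm" = "$pat_mm" ] && [ "$pat_p" -gt "$inst_p" ]
}

# Example: patch 3.0.1 applies to full release 3.0.0 but not to 3.1.0
```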
B
Customer Support
This appendix reviews the EMC process for detecting and resolving software problems, and provides essential questions that you should answer before contacting the EMC Customer Support Center. This appendix covers the following topics:
- Overview of Detecting and Resolving Problems ......................... B-2
- Troubleshooting the Problem .......................................................... B-3
- Before Calling the Customer Support Center ............................... B-4
- Documenting the Problem............................................................... B-5
- Reporting a New Problem ............................................................... B-6
- Sending Problem Documentation................................................... B-7
Problem Detection
Contact the EMC Customer Support Center:

U.S.:       (800) SVC-4EMC
Canada:     (800) 543-4SVC
Worldwide:  (508) 497-7901
Figure B-1
Please do not request a specific support representative unless one has already been assigned to your particular system problem.
Send problem documentation by e-mail, by FTP, or by U.S. mail to the following address:

EMC Customer Support Center
45 South Street
Hopkinton, MA 01748-9103

If the problem was assigned a number or a specific support representative, please include that information in the address as well.
Index
A
AIX adding devices online 5-12 configuration requirements 1-2 device naming conventions 5-4 diagela, disabling 1-3, 1-4 emc_cfgmgr script 5-3 files changed by PowerPath 1-13 hdisk devices, configuring 1-3 hdiskpower devices, initializing 1-10 installing PowerPath 1-2 installing PowerPath from SMIT AIX 4.3 1-6 AIX 5 1-8 installing PowerPath from the command line 1-6 LPPs required 1-3 multipathing, disabling 1-12 PowerPath boot device, configuring 2-2 PowerPath feature summary 5-2 pprootdev command 1-12 registering PowerPath 1-9 registration key 1-9 removing PowerPath 4-2 removing PowerPath boot device 4-5 SMIT screens 5-14 upgrading 1-13 upgrading PowerPath 1-12 AIX Automatic Error Log Analysis, disabling 1-4 AIX clusters. See HACMP cluster AIX hdiskpower devices 5-4 AIX hdisks, configuring for PowerPath 5-3 AIX PowerPath error messages 5-15
AIX PowerPath SMIT screen 5-14 AIX volume groups importing 5-8 varying on 1-11 AIX, adding devices 5-10 AIX, troubleshooting 5-10 Alternative pathing and PowerPath, Tru64 UNIX 10-2
B
BCV devices, hdiskpower-based 5-7 Boot device. See PowerPath boot device bosboot tool 2-5
C
Cluster. See HACMP cluster, MC/ServiceGuard cluster, Sun Cluster 2.2, Sun Cluster 3.0, TruCluster, VCS Configuration requirements AIX 1-2 HP-UX 11-2 Solaris 16-2 Tru64 UNIX 6-2 Configuring PowerPath devices using powercf 21-13 Customer support B-3 Customer Support Center B-7
D
Device mapping, changing on AIX 5-9
Device naming
    AIX 5-4
    HP-UX 15-3
    Solaris 21-5
    Tru64 UNIX
        V4.x 10-3
        V5.x 10-3
Devices, adding online
    AIX 5-12
    HP-UX 15-4
    Solaris 21-8
    Tru64 UNIX 10-4
diagela, disabling 1-4
DR 21-10
Dynamic Reconfiguration 21-10

E
ECC. See EMC Control Center
EMC Control Center 1-11, 4-5, 6-6, 9-3, 11-6, 14-4, 16-8, 19-4
EMC Powerlink Web site 1-xvi
emc_cfgmgr script 1-3, 5-3
emcp.conf file 21-13
emcpower devices 21-5
    and Network File System (NFS) 20-21
    and Oracle 20-23
    and Solstice Disksuite 20-8
    and Sybase 20-30
    and UFS logging 20-16
    and UNIX File System (UFS) 20-14
    and VERITAS File System (VxFS) 20-20
    and VERITAS Volume Manager 20-13
    attributes, updating 20-5
    finding corresponding SysV identifiers 20-4
emcpower devices versus native devices 20-2
Error messages
    AIX 5-15
    HP-UX 15-5
    Solaris 21-16
    Tru64 UNIX 10-8
/etc/vfstab (Solaris), rebuilding 19-5

H
HACMP cluster
    installing with PowerPath 3-2
    integrating PowerPath into 3-5
    integrating with PowerPath 3-4
HBA, replacing, AIX 5-10
hdisk devices, configuring for PowerPath 1-3, 5-3
hdisk, configuring, troubleshooting 5-10
hdisk, deleting, troubleshooting 5-11
hdiskpower devices 5-4
Host bus adapter
    adding (Solaris) 21-10
    removing (Solaris) 21-11
HP-UX
    adding devices online 15-4
    clusters. See MC/ServiceGuard cluster
    configuration requirements 11-2
    device naming conventions 15-3
    installing PowerPath 11-2
    LVM alternate links 15-5
    patches required
        HP-UX 11.0 11-2
        HP-UX 11i 11-2
    PowerPath boot device
        configuring 12-2
        LVM alternate links 15-5
    PowerPath feature summary 15-2
    registering PowerPath 11-5
    removing PowerPath 14-2
    upgrading 11-7
    upgrading PowerPath 11-6
HP-UX LVM alternate links 15-5
HP-UX PowerPath error messages 15-5

I
Installing PowerPath
    AIX 1-2
    AIX 4.3 1-6
    AIX 5 1-8
    HP-UX 11-2
    Solaris 16-2
    Tru64 UNIX 6-2
L
Load balancing on NUMA (Tru64 UNIX) 10-6
Logical devices, removing from PowerPath configuration, AIX 5-13
LPPs 1-3
lsvg command 2-5
M
MC/ServiceGuard cluster
    installing with PowerPath 13-2
    integrating PowerPath into 13-4
    integrating with PowerPath 13-5
N
Naming conventions
    AIX 5-4
    HP-UX 15-3
    Solaris 21-5
    Tru64 UNIX 10-3
Native devices
    HP-UX 15-3
    Solaris 21-5
    Tru64 UNIX 10-3
Network File System (NFS) and emcpower devices 20-21
NUMA (Tru64 UNIX) 10-6
O
Oracle and emcpower devices 20-23
P
powercf configuration utility (Solaris) 21-13
Powerlink Web site 1-xvi
PowerPath boot device, AIX
    configuring 2-2
    disabling PowerPath 2-6
    lsvg command 2-5
    pprootdev command 2-3
PowerPath boot device, HP-UX 12-2
PowerPath boot device, Solaris
    configuring 17-2
    troubleshooting 17-6
PowerPath boot device, Tru64 UNIX 7-2
PowerPath device
    emcpower 21-5
    hdiskpower 5-4
    native
        HP-UX 15-3
        Solaris 21-5
        Tru64 UNIX 10-3
PowerPath devices, installing, troubleshooting 1-11, 4-5, 6-6, 9-3, 11-6, 14-4, 16-8, 19-4
PowerPath devices, reconfiguring online
    AIX 5-12
    HP-UX 15-4
    Solaris 21-8
    Tru64 UNIX 10-4
PowerPath error messages
    AIX 5-15
    HP-UX 15-5
    Solaris 21-16
    Tru64 UNIX 10-8
PowerPath feature summary
    AIX 5-2
    HP-UX 15-2
    Solaris 21-2
    Tru64 UNIX 10-2
pprootdev command 2-3, 2-5, 2-6
Pseudo devices
    AIX 5-4
    Solaris 21-5
PVID 5-5

R
R1/R2 boot failover support, Solaris 21-3
Reconfiguring PowerPath devices online
    AIX 5-12
    HP-UX 15-4
    Solaris 21-8
    Tru64 UNIX 10-4
Registering PowerPath
    AIX 1-9
    HP-UX 11-5
    Solaris 16-7
    Tru64 UNIX 6-5
Registration key
    AIX 1-9
    HP-UX 11-5
    Solaris 16-7
    Tru64 UNIX 6-5
Removing logical devices from PowerPath configuration 5-13
Removing PowerPath
    AIX 4-2
    HP-UX 14-2
    Solaris 19-2
    Tru64 UNIX 9-2
    TruCluster 9-3

PowerPath for UNIX Installation and Administration Guide
S
SMIT screens 5-14
Solaris
    adding devices online 21-8
    adding host bus adapter 21-10
    configuration requirements 16-2
    device naming conventions 21-5
    dynamic reconfiguration 21-10
    installing PowerPath 16-2
    PowerPath boot device
        configuring 17-2
        troubleshooting 17-6
    PowerPath feature summary 21-2
    R1/R2 boot failover support 21-3
    registering PowerPath 16-7
    removing host bus adapter 21-11
    removing PowerPath 19-2
    removing PowerPath from a boot device 19-7
    upgrading 16-12
    upgrading PowerPath 16-10
Solaris boot device, removing PowerPath 19-7
Solaris clusters. See Sun Cluster 2.2, Sun Cluster 3.0, VCS
Solaris emcpower devices. See emcpower devices
Solaris PowerPath error messages 21-16
Solstice DiskSuite and emcpower devices 20-8
Sun Cluster 2.2
    installing with PowerPath 18-2
    integrating PowerPath into 18-3
Sun Cluster 3.0
    installing with PowerPath 18-4
    integrating PowerPath into 18-5
Sybase and emcpower devices 20-30

T
Technical support B-3
Tru64 UNIX cluster. See TruCluster
Tru64 UNIX
    adding devices online 10-4
    alternative pathing and PowerPath 10-2
    configuration requirements 6-2
    device naming conventions 10-3
    installing PowerPath 6-2
    load balancing 10-6
    NUMA 10-6
    PowerPath boot device, configuring 7-2
    PowerPath feature summary 10-1, 10-2
    registering PowerPath 6-5
    reinstalling PowerPath 6-7
    removing PowerPath 9-2
    upgrading 6-8
    upgrading PowerPath 6-7
Tru64 UNIX PowerPath error messages 10-8
TruCluster
    installing PowerPath in
        planning 8-2
        V4.0x 8-4
        V5.x 8-4
    removing PowerPath 9-3
U
UFS and emcpower devices 20-14
UFS Logging and emcpower devices 20-16
UFS metadevices 20-13
Uninstalling PowerPath
    AIX 4-2
    HP-UX 14-2
    Solaris 19-2
    Tru64 UNIX 9-2
    TruCluster 9-3
UNIX File System (UFS) and emcpower devices 20-14
Upgrading AIX 1-13
Upgrading HP-UX 11-7
Upgrading PowerPath
    AIX 1-12
    HP-UX 11-6
    Solaris 16-10
    Solaris with VxVM 16-11
    Tru64 UNIX 6-7
V
VCS
    installing with PowerPath 18-6
    integrating PowerPath into 18-7
VERITAS Cluster Server. See VCS
VERITAS File System (VxFS) and emcpower devices 20-20
VERITAS Volume Manager and emcpower devices 20-13
Volume groups
    importing 5-8
    varying on 1-11