Serviceguard Version A.11.20 Release Notes
Legal Notices
Copyright 1998-2011 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group.
Contents
Publishing History.......................................................................6
1 Serviceguard Version A.11.20 Release Notes.............................................7
    Announcements........................................................................7
        Platform Dependencies............................................................7
        April 2011 Patches...............................................................7
        Serviceguard Bundled Components - New Product Structure..........................7
        Serviceguard Optional Products Not Bundled.......................................8
        Serviceguard A.11.19 Is the Required Basis for Rolling Upgrades..................8
        New Cluster Manager..............................................................8
        Quorum Server Upgrade Required if You Are Using an Alternate Address.............9
        Serviceguard Manager Available from the System Management Homepage (SMH).........9
        Support for Mixed-OS Clusters (HP-UX 11i v2 and 11i v3)..........................9
        New Support for Version 5.1 Service Pack 1 (SP1) of Veritas Products from Symantec....9
        Support for Veritas Products from Symantec.......................................9
        Version 4.1, 4.2, or 4.3 of HPVM Required........................................9
        ipnodes Entries Needed in /etc/nsswitch.conf....................................10
        Legacy Packages.................................................................10
            STORAGE_GROUP Deprecated....................................................10
        Do Not Use .rhosts..............................................................10
        cmviewconf Obsolete.............................................................10
        Serviceguard Extension for Faster Failover Obsolete.............................10
        RS232 Heartbeat Obsolete........................................................10
        Token Ring and FDDI Obsolete....................................................11
        Parallel SCSI Dual Cluster Lock Obsolete........................................11
        Parallel SCSI Not Supported for Lock LUN........................................11
        Cluster Name Restrictions.......................................................11
        Optimizing Performance when Activating LVM Volume Groups........................11
        High Availability Consulting Services...........................................12
        HP-UX 11i v3 Features Important to Serviceguard.................................12
            New Bundles.................................................................12
            Support for Veritas 5.1 SP1 from Symantec...................................12
            Native Multipathing, Veritas DMP, and Related Features in HP-UX 11i v3......12
                PV Links................................................................12
                PCI Error Recovery......................................................12
            Online Replacement of LAN Cards Requires Patch..............................13
    What's in this Release..............................................................13
        New Features for A.11.20 April 2011 Patches.....................................13
        New Features for A.11.20........................................................13
        Serviceguard on HP-UX 11i v3....................................................14
        What's Not in this Release......................................................15
        About the New Features..........................................................15
            Modular CFS Packages for Reducing Package Usage.............................15
                Advantages of Modular CFS Packages......................................16
            Improved Performance while Halting a Non-detached Multi-node Package........16
            Support for Veritas 5.1 SP1 on HP-UX 11i v3 Only............................16
            Easy Deployment.............................................................16
                Advantages of Easy Deployment...........................................17
                Limitations of Easy Deployment..........................................17
            Halting a Node or the Cluster while Keeping Packages Running (Live Application Detach)....18
                What You Can Do.........................................................18
            Cluster-wide Device Special Files (cDSFs)...................................18
                Points To Note..........................................................19
                Where cDSFs Reside......................................................19
            Checking the Cluster Configuration and Components...........................20
                Limitations.............................................................21
                Cluster Verification and ccmon..........................................22
            NFS-mounted File Systems....................................................22
            The LVM and VxVM Volume Monitor.............................................24
        Serviceguard Manager............................................................24
            DSAU Integration............................................................24
            Native Language Support.....................................................25
            What You Can Do.............................................................25
            New Features................................................................26
            Current Limitations of Serviceguard Manager.................................27
                Browser and SMH Issues..................................................27
                    Pages Launched into New Window rather than Tabs with Firefox Tabbed Browsing....27
                    Internet Explorer 7 Zooming Problem.................................28
                    HP System Management Homepage (HPSMH) Timeout Default...............28
            Help Subsystem..............................................................28
            Before Using HP Serviceguard Manager: Setting Up............................28
            Launching Serviceguard Manager..............................................29
            Patches and Fixes...........................................................29
        Features Introduced Before A.11.20..............................................29
            Features First Introduced in Serviceguard A.11.19 Patches...................29
            Features First Introduced Before Serviceguard A.11.19.......................30
                About olrad.............................................................30
                About vgchange -T.......................................................30
                About the Alternate Quorum Server Subnet................................30
                About LVM 2.x...........................................................30
                About cmappmgr..........................................................31
                    HPVM 4.1 Support for Windows 2008 Guests............................31
                About Device Special Files (DSFs).......................................32
                Support for HP Integrity Virtual Machines (HPVM)........................33
                    About HPVM and Cluster Re-formation Time............................33
            Access changes as of A.11.16................................................33
                Considerations when Upgrading Serviceguard..............................34
                Considerations when Installing Serviceguard.............................34
    Documents for This Version..........................................................34
    Further Information.................................................................35
    Compatibility Information and Installation Requirements.............................35
        Compatibility...................................................................35
            Mixed Clusters..............................................................35
                Mixed Serviceguard Versions.............................................35
                Mixed Hardware Architecture.............................................36
                Mixed HP-UX Operating-System Revisions..................................36
            Compatibility with Storage Devices..........................................39
            Bastille Compatibility......................................................39
        Before Installing Serviceguard A.11.20..........................................40
        Memory Requirements.............................................................40
        Port Requirements...............................................................40
            Ports Required by Serviceguard Manager......................................41
        System Firewalls................................................................41
    Installing Serviceguard on HP-UX....................................................41
        Dependencies....................................................................41
        Installing Serviceguard.........................................................41
        If You Need To Disable identd...................................................43
        Upgrading from an Earlier Serviceguard Release..................................43
            Upgrade Using DRD...........................................................44
                Rolling Upgrade Using DRD...............................................44
                Non-Rolling Upgrade Using DRD...........................................44
                Restrictions for DRD Upgrades...........................................45
            Veritas Storage Management Products.........................................46
            Rolling Upgrade.............................................................46
                Requirements for Rolling Upgrade to A.11.20.............................47
                Requirements for Rolling Upgrade to A.11.19.............................47
                Obtaining a Copy of Serviceguard A.11.19................................47
            Rolling Upgrade Exceptions..................................................47
                HP-UX Cold Install......................................................47
                HP Serviceguard Storage Management Suite and standalone CVM product.....47
                Migrating to Agile Addressing if Using Cluster Lock Disk................47
            Upgrading from an Earlier Release if You Are Not Using Rolling Upgrade (Non-Rolling Upgrade)....48
    Uninstalling Serviceguard...........................................................48
    Patches for this Version............................................................48
        QXCR1000575890: OLR of a LAN Card in SG cluster fails on HP-UX 11i v3...........49
    Fixed in This Version...............................................................49
        A.11.20 Defects Fixed in the April 2011 Patches.................................50
        Defects Fixed in A.11.20........................................................50
        Problems Fixed in this Version of the Serviceguard Manager Plug-in..............52
    Known Problems......................................................................53
        Known Problems for Serviceguard.................................................53
        Known Problems for Serviceguard Manager.........................................54
    About Serviceguard Releases.........................................................54
        Types of Releases and Patches...................................................54
            Platform Release............................................................54
            Feature Release.............................................................54
            Patch.......................................................................54
        Supported Releases..............................................................54
        Version Numbering...............................................................55
    Release Notes Revisions.............................................................55
    Native Languages....................................................................55
Publishing History
Table 1 Publishing History
Printing Date: April 2011
Part Number: 59001493
Edition: First Edition
This new edition provides information on the Serviceguard A.11.20 April 2011 patches that add new features to it, as well as features first added in A.11.20 and in patches to A.11.19.
Platform Dependencies
IMPORTANT: This new version of Serviceguard is supported on HP-UX 11i v3 only. See Serviceguard on HP-UX 11i v3 (page 14).
For information about patches required to support HP Storage Management Suite (SMS) bundles, see the HP Serviceguard Storage Management Suite Version A.04.00 for HP-UX 11i v3 Release Notes at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard Storage Management Suite A.04.xx (with 5.1 SP1 Veritas).
This product structure allows Serviceguard customers who subscribe to Update services to download Serviceguard from the web (along with the sub-products classed as Serviceguard components and listed above) via Software Update Manager (SUM).
Announcements
Upgrade will trigger a cluster membership transition from the old to the new cluster manager when the last node in the cluster has rolled to A.11.19. This transition can take up to one second, during which time the old cluster manager will shut down and the new cluster manager will start. CAUTION: From the time when the old cluster manager is shut down until the new cluster manager forms its first cluster, a node failure will cause the entire cluster to fail. HP strongly recommends that you use no Serviceguard commands other than cmviewcl (1m) until the new cluster manager successfully completes its first cluster re-formation. See the section Special Considerations for Upgrade to Serviceguard A.11.19 in Appendix D of the latest version of Managing Serviceguard for more information. For further caveats that apply to specific upgrade paths, make sure you read the following sections in these Release Notes: Compatibility Information and Installation Requirements (page 35) and Upgrading from an Earlier Serviceguard Release (page 43).
New Support for Version 5.1 Service Pack 1 (SP1) of Veritas Products from Symantec
With the patches listed under April 2011 Patches (page 7), Serviceguard A.11.20 on HP-UX 11i v3 supports version 5.1 SP1 of Veritas VxVM, CVM, and CFS from Symantec.
2011 patches. See Support for HP Integrity Virtual Machines (HPVM) (page 33) for more information about HPVM. See also About cmappmgr (page 31).
Legacy Packages
As of Serviceguard A.11.20, new package features are being implemented only in modular packages (those created using the method introduced in A.11.18). IMPORTANT: Support for legacy packages (created using the earlier method) will be withdrawn altogether in a future release. Use the modular method to create new packages whenever possible. See Chapter 6 of Managing Serviceguard for more information.
STORAGE_GROUP Deprecated
The STORAGE_GROUP parameter (in the package configuration file for legacy packages) is deprecated as of Serviceguard A.11.19. Support for it will be withdrawn in a future release. A package using Veritas Cluster Volume Manager from Symantec (CVM) disk groups for raw access (without Veritas Cluster File System from Symantec, or CFS) needs to declare a dependency on SG-CFS-pkg; see Creating the Storage Infrastructure with Veritas Cluster Volume Manager (CVM) in Chapter 5 of Managing Serviceguard for more information.
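As a sketch, such a dependency can be declared in a modular package configuration file along the following lines (the dependency name is illustrative; verify the parameter syntax against the package-configuration chapter of Managing Serviceguard):

```
# Hypothetical excerpt from a modular package configuration file:
# make this package depend on SG-CFS-pkg being up on the same node.
dependency_name        cfs_dep
dependency_condition   SG-CFS-pkg = up
dependency_location    same_node
```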
cmviewconf Obsolete
cmviewconf is obsolete as of Serviceguard A.11.20. Use cmviewcl (1m) to obtain information about the cluster. See Reviewing Cluster and Package Status in Chapter 7 of Managing Serviceguard for more information.
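For example, the cluster information formerly gathered with cmviewconf can be viewed as follows (a sketch; the output format depends on your cluster configuration):

```shell
cmviewcl -v    # verbose cluster, node, and package status
```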
To provide the required redundancy for both networking and mass storage connectivity on servers with fewer than three I/O slots, you may need to use multifunction I/O cards that contain both networking and mass storage ports.
New Bundles
Serviceguard A.11.20 is now available as a recommended product in the HP-UX 11i v3 HA-OE and DC-OE bundles.
IMPORTANT: If your storage devices are configured with only a single path, or you have disabled multipathing, you should disable PCI Error Recovery; otherwise Serviceguard may not detect the loss of connectivity and initiate a failover. For instructions on using the pci_eh_enable parameter to disable PCI Error Recovery, see the Tunable Kernel Parameters section of the latest edition of the PCI Error Recovery Product Note, which you can find at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.
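As a sketch, the tunable can be inspected and changed with kctune (1m); confirm the exact procedure against the PCI Error Recovery Product Note before applying it:

```shell
kctune pci_eh_enable      # display the current setting
kctune pci_eh_enable=0    # disable PCI Error Recovery
```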
A new type of device file allows you to deploy a set of persistent device special files that is consistent across the cluster, eliminating the risk of name-space collisions among the cluster nodes. These new device special files are called cluster device special files, or cDSFs. See Cluster-wide Device Special Files (cDSFs) (page 18).
New Serviceguard capabilities allow you to check the soundness of the cluster configuration, and the health of its components, more thoroughly than you could in the past, and to do so at any time, rather than only when changing the configuration of the cluster or its packages. See Checking the Cluster Configuration and Components (page 20).
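For example (a sketch; see cmcheckconf (1m) and the section referenced above for the exact invocations supported), an administrator can verify the running cluster at any time, not just at configuration time:

```shell
cmcheckconf -v    # validate the cluster configuration and report component health
```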
You can now use NFS-mounted (imported) file systems as shared storage in packages. See NFS-mounted File Systems (page 22). New capabilities in Serviceguard and HPVM allow online migration of a virtual machine. See the section Online VM Migration with Serviceguard in the white paper Designing high-availability solutions with HP Serviceguard and HP Integrity Virtual Machines, at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard > White papers.
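The NFS-mounted file-system support can be sketched in a modular package configuration as follows (server name and paths are illustrative; verify the fs_* parameter names against the NFS-mounted File Systems section and the package-configuration chapter of Managing Serviceguard):

```
# Hypothetical excerpt mounting an NFS-imported file system in a package
fs_name        nfs-server:/share/data
fs_server      nfs-server
fs_directory   /mnt/data
fs_type        "nfs"
```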
Serviceguard A.1 1.20 supports a new failover policy, site_preferred_manual, which prevents automatic failover of a package across SITEs. This capability can be used only in a site-aware disaster-tolerant cluster, which requires Metrocluster (additional HP software).
Serviceguard on HP-UX 11i v3
Serviceguard support for HP-UX 11i v3 was first introduced in Serviceguard version A.11.17.01. The following is a list of some important 11i v3 capabilities that Serviceguard supports.
- Serviceguard supports HP-UX agile addressing, sometimes also called persistent LUN binding, for device special files (DSFs). See About Device Special Files (DSFs) (page 32).
- Serviceguard supports HP-UX native multipathing and load balancing. See Native Multipathing, Veritas DMP, and Related Features in HP-UX 11i v3 (page 12).
- Serviceguard supports the following networking capabilities:
  - The HP-UX olrad -C command, which identifies network interface cards (NICs) that are part of the Serviceguard cluster configuration. You can remove a NIC from the cluster configuration, and then from the system, without bringing down the cluster. See About olrad (page 30).
  - The LAN Monitor mode of APA.
Serviceguard supports Process IDs (PIDs) of any size, up to the maximum value supported by HP-UX and the node's underlying hardware architecture. Previous versions of HP-UX imposed a limit of 30,000; this limit has been removed as of HP-UX 11i v3. For more information, see the white paper Number of Processes and Process ID Values on HP-UX at http://www.hp.com/go/hpux-core-docs > HP-UX 11i v3 > White papers.
Serviceguard supports the increased number of LVM volume groups supported as of the HP-UX 11i v3 0809 Fusion release; the maximum number of volume groups you can configure in a Serviceguard cluster is the maximum supported by HP-UX. See the HP-UX documentation for details.
Serviceguard supports LVM 2.x volume groups, both for data and the cluster lock. See About LVM 2.x (page 30). Serviceguard now supports cell OL* (online addition and deletion of cells) on HP Integrity servers that support it. For more information about using Serviceguard with partitioned systems, see the white paper HP Serviceguard Cluster Configuration for HP-UX 11i or Linux Partitioned Systems at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard.
Serviceguard supports vgchange -T, which allows multi-threaded activation of volume groups. See About vgchange -T (page 30). You can use the FSWeb utility to configure LVM volumes in a Serviceguard cluster (and SLVM volumes if the add-on product Serviceguard Extension for Real Application Cluster (SGeRAC) is installed). For more information about FSWeb, see the fsweb (1m) manpage.
Serviceguard supports the following recently added HP-UX 11i v3 capabilities: Dynamic Root Disk (DRD); see Upgrade Using DRD (page 44).
The modular style of packaging is more flexible and easier to manage than the legacy style, and is the recommended approach for creating CFS packages. NOTE: For this feature to be enabled, both patches, the Serviceguard A.11.20 patch (PHSS_41628) and the Serviceguard CFS A.11.20 patch (PHSS_41674), must be installed.
Advantages of Modular CFS Packages
- A single multi-node package can include multiple disk groups and mount points, reducing the number of packages required to build a CVM/CFS storage configuration. Disk groups and mount points can also be placed in separate multi-node packages, but dependencies between the mount points and disk groups must then be configured explicitly in the configuration files.
- Modular CFS packages can also be configured and managed by Serviceguard Manager.
- Supports parallel activation of disk groups and parallel mounting of mount points.
- The Serviceguard cluster and package information displayed by cmviewcl (1m) is more compact.
For more information on modular CFS packages, see Creating a Storage Infrastructure with Veritas Cluster File System (CFS) and Managing Disk Groups and Mount Points Using Modular Packages in Chapter 5 of the Managing Serviceguard Nineteenth Edition manual.
NOTE: For more information about SMS support for version 5.1 SP1, including information about new capabilities, and patches that are required in addition to those listed under April 2011 Patches (page 7), see the HP Serviceguard Storage Management Suite Version A.04.00 for HP-UX 11i v3 Release Notes at the address given above.
Easy Deployment
In the past you had two main choices for configuring a cluster: using Serviceguard commands, as described in detail in chapter 5 of Managing Serviceguard, or using the Serviceguard Manager
GUI (or some combination of these two methods). As of Serviceguard A.11.20, there is a third option, called Easy Deployment. The Easy Deployment tool consists of three commands: cmpreparecl (1m), cmdeploycl (1m), and cmpreparestg (1m). In addition, there is a new -N option to cmquerycl, which you can use to obtain networking information for the cluster heartbeat. These commands allow you to get a cluster up and running in the minimum amount of time. The commands:
- Configure networking and security (cmpreparecl (1m))
- Create and start the cluster with a cluster lock device (cmdeploycl (1m))
- Create and import volume groups as additional shared storage for use by cluster packages (cmpreparestg (1m))
In the April 2011 patch (PHSS_41628), the cmdeploycl (1m) and cmpreparestg (1m) commands have been enhanced as part of further development of the Easy Deployment feature. cmdeploycl (1m) is enhanced with the following options:
- A new -s [site] option that can be used to configure site-aware disaster-tolerant clusters, which require Metrocluster software to be installed. It also mandates the use of a quorum server when sites are specified. cmdeploycl (1m) can be used to configure a single-site cluster using either a quorum server or a lock LUN.
- A new -cfs option used to deploy a Serviceguard cluster with SG-CFS-pkg. This is a required option if you intend to enable CVM/CFS on a cluster by configuring SG-CFS-pkg.
cmpreparestg (1m) is enhanced to support the creation and modification of VxVM/CVM disk groups. A new -g option is included to serve this purpose. When -g is used, you can specify the disk group options using the -o dg_opts option. For more information, see the manpages of cmdeploycl (1m) and cmpreparestg (1m). See also Cluster-wide Device Special Files (cDSFs) (page 18).
Advantages of Easy Deployment
- Quick and simple way to create and start a cluster.
- Automates security and networking configuration that must always be done before you configure nodes into a cluster.
- Simplifies cluster lock configuration.
- Simplifies creation of shared storage for packages.
Limitations of Easy Deployment
- Does not install or verify Serviceguard software.
- Requires agile addressing for disks. See About Device Special Files (DSFs) (page 32).
- cmpreparestg (1m) will fail if cDSFs and persistent DSFs are mixed in a volume group. See Cluster-wide Device Special Files (cDSFs) (page 18) for more information about cDSFs.
- Does not configure access control policies.
- Does not install or configure firewall and related software.
- Does not support cross-subnet configurations.
- Does not configure packages.
- Does not discover or configure a quorum server (but can deploy one that is already configured).
- Does not support asymmetric network configurations (in which a given subnet is configured only on a subset of nodes).
- When cmdeploycl (1m) is used on a cluster where SGeRAC is installed, cmdeploycl (1m) provides a warning if the cluster being deployed does not meet SGeRAC-specific requirements.
For more information and instructions, see Using Easy Deployment in chapter 5 of Managing Serviceguard.
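As a hedged sketch of the Easy Deployment workflow (only -cfs and -g are taken from these Release Notes; the disk-group name is illustrative, and exact option syntax should be verified against the cmpreparecl (1m), cmdeploycl (1m), and cmpreparestg (1m) manpages):

```shell
# Easy Deployment sketch -- names and option details are illustrative
cmpreparecl              # configure networking and security on the nodes
cmdeploycl -cfs          # create and start the cluster, deploying SG-CFS-pkg
cmpreparestg -g datadg   # create a VxVM/CVM disk group (use -o dg_opts for options)
```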
Halting a Node or the Cluster while Keeping Packages Running (Live Application Detach)
There may be circumstances in which you want to do maintenance that involves halting a node, or the entire cluster, without halting or failing over the affected packages. Such maintenance might consist of anything short of rebooting the node or nodes, but a likely case is networking changes that will disrupt the heartbeat. New command options in Serviceguard A.11.20 (collectively known as Live Application Detach (LAD)) allow you to do this kind of maintenance while keeping the packages running. The packages are no longer monitored by Serviceguard, but the applications continue to run. Packages in this state are called detached packages. IMPORTANT: This capability applies only to modular failover packages and modular multi-node packages. For more information, see Halting a Node or the Cluster while Keeping Packages Running in chapter 7 of Managing Serviceguard. When upgrading to future releases, you will be able to use LAD in conjunction with rolling upgrade, but you cannot use it when upgrading to A.11.20, because the capability is not available until all the nodes are running A.11.20.
When you have done the necessary maintenance, you can restart the node or cluster, and normal monitoring will resume on the packages.

What You Can Do

You can do the following:
- Halt a node (cmhaltnode (1m) with the -d option) without causing its running packages to halt or fail over. Until you restart the node (cmrunnode (1m)), these packages are detached (not being monitored by Serviceguard).
- Halt the cluster (cmhaltcl (1m) with the -d option) without causing its running packages to halt. Until you restart the cluster (cmruncl (1m)), these packages are detached (not being monitored by Serviceguard).
- Halt a detached package, including instances of detached multi-node packages.
- Restart normal package monitoring by restarting the node (cmrunnode) or the cluster (cmruncl).
For more information, including important rules and restrictions, see Halting a Node or the Cluster while Keeping Packages Running in chapter 7 of Managing Serviceguard.
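A typical maintenance session using Live Application Detach might look like the following sketch. The node name is hypothetical; the -d options are the LAD options described above, and the other commands are the standard Serviceguard node commands:

```
# Halt node1 with Live Application Detach; its packages keep running, detached:
cmhaltnode -d node1

# ... perform maintenance on node1 (anything short of a reboot) ...

# Restart the node; normal monitoring of the detached packages resumes:
cmrunnode node1
```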
Cluster-wide Device Special Files (cDSFs)

Because DSF names may be duplicated between one host and another, it is possible for different storage devices to have the same name on different nodes in a cluster, and for the same piece of storage to be addressed by different names. The Serviceguard A.11.20 September 2010 patch (PHSS_41225) and later supports cluster-wide device special files (cDSFs), which ensure that each storage device used by the cluster has a unique device file name. cDSFs are available on HP-UX as of the September 2010 Fusion Release. HP recommends that you use cDSFs for the storage devices in the cluster because this makes it simpler to deploy and maintain a cluster, and removes a potential source of configuration errors. Using cDSFs with Easy Deployment (page 16) further simplifies the configuration of storage for the cluster and packages. See Creating Cluster-wide Device Special Files (cDSFs) and Using Easy Deployment in chapter 5 of Managing Serviceguard for instructions.

Points To Note
- cDSFs can be created for any group of nodes that you specify, provided that Serviceguard A.11.20 and the required patch are installed on each node. Normally, the group should comprise the entire cluster.
- cDSFs apply only to shared storage; they will not be generated for local storage, such as root, boot, and swap devices.
- Once you have created cDSFs for the cluster, HP-UX automatically creates new cDSFs when you add shared storage.
Where cDSFs Reside

cDSFs reside in two new HP-UX directories, /dev/cdisk for cluster-wide block device files and /dev/rcdisk for cluster-wide character device files. Persistent DSFs that are not cDSFs continue to reside in /dev/disk and /dev/rdisk, and legacy DSFs (DSFs using the naming convention that was standard before HP-UX 11i v3) in /dev/dsk and /dev/rdsk. It is possible that a storage device on an 11i v3 system could be addressed by DSFs of all three types, but if you are using cDSFs, you should ensure that you use them exclusively as far as possible.

NOTE: Software that assumes DSFs reside only in /dev/disk and /dev/rdisk will not find cDSFs and may not work properly as a result; as of the date of this document, this was true of the Veritas Volume Manager, VxVM.
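For a single hypothetical device (instance number 4, legacy hardware address c0t4d0; both names are illustrative), the three naming styles described above would look like this:

```
/dev/cdisk/disk4    /dev/rcdisk/disk4    cluster-wide DSFs (block / character)
/dev/disk/disk4     /dev/rdisk/disk4     persistent DSFs (block / character)
/dev/dsk/c0t4d0     /dev/rdsk/c0t4d0     legacy DSFs (block / character)
```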
Limitations of cDSFs
- cDSFs are supported only within a single cluster; you cannot define a cDSF group that crosses cluster boundaries. A node can belong to only one cDSF group.
- cDSFs are not supported by VxVM, CVM, CFS, or any other application that assumes DSFs reside only in /dev/disk and /dev/rdisk.
- Oracle ASM cannot detect cDSFs created after ASM is installed.
- cDSFs do not support disk partitions. Such partitions can be addressed by a device file using the agile addressing scheme, but not by a cDSF.
- cDSFs are not supported by Ignite-UX in a Serviceguard cluster environment; recovery of such a configuration is not supported. If you require support for recovery archives in a Serviceguard environment, do not implement Ignite-UX with cDSFs.
Some HP-UX commands have new options and behavior to support cDSFs, specifically:
- vgscan -C causes vgscan (1m) to display cDSFs.
See the manpages for more information.

The following new HP-UX commands handle cDSFs specifically:
- vgcdsf (1m) converts all persistent DSFs in a volume group to cDSFs. Legacy DSFs in the volume group will not be converted, but you can use the HP-UX vgdsf script to convert these legacy DSFs to persistent DSFs if you need to. For more information on the vgdsf script, see the white paper LVM Migration from Legacy to Agile Naming Model at http://www.hp.com/go/hpux-core-docs. For more information on vgcdsf, see the manpage.
- io_cdsf_config (1m) displays information about cDSFs. See the manpage for more information.
NOTE: All of the checks below are performed when you run cmcheckconf (1m) without any arguments (or with only -v, with or without -k or -K). cmcheckconf (1m) validates the current cluster and package configuration, including external scripts and pre-scripts for modular packages, and runs cmcompare (1m) to check file consistency across nodes. (This new version of the command also performs all of the checks that were done in previous releases.) These new checks are not done for legacy packages. For information about legacy and modular packages, see chapter 6 of Managing Serviceguard.

LVM volume groups:
- Check that each volume group contains the same physical volumes on each node.
- Check that each node has a working physical connection to the physical volumes.
- Check that volume groups used in modular packages are cluster-aware.
- Check that file systems have been built on the logical volumes identified by the fs_name parameter in the cluster's packages.

Disk groups:
- Check that each disk group contains the same number of disks on each node.
- Check that each node has a working physical connection to the disks.

File consistency:
- Check that files including the following are consistent across all nodes:
  /etc/hosts (must contain all IP addresses configured into the cluster)
  /etc/nsswitch.conf
  /etc/services
  package control scripts for legacy packages (if you specify them)
  /etc/cmcluster/cmclfiles2check
  /etc/cmcluster/cmignoretypes.conf
  /etc/cmcluster/cmknowncmds
  /etc/cmcluster/cmnotdisk.conf
  user-created files (if you specify them)
For more information, see Checking the Cluster Configuration and Components in chapter 7 of Managing Serviceguard.

Limitations

Serviceguard does not check the following conditions:
- Access Control Policies properly configured (see chapter 5 of Managing Serviceguard for information about Access Control Policies)
- File systems configured to mount automatically on boot (that is, Serviceguard does not check /etc/fstab)
- Shared volume groups configured to activate on boot
- Volume group major and minor numbers unique
- Redundant storage paths functioning properly
- Kernel parameters and driver configurations consistent across nodes
- Mount point overlaps (such that one file system is obscured when another is mounted)
- Unreachable DNS server
- Consistency of settings in .rhosts and /var/admin/inetd.sec
- Consistency of device-file major and minor numbers across the cluster
- Nested mount points
- Staleness of mirror copies
Cluster Verification and ccmon

The Cluster Consistency Monitor (ccmon) provides even more comprehensive verification capabilities than those described in this section. ccmon is a separate product, available for purchase; ask your HP Sales Representative for details.
- Only NFS client-side locks (local locks) are supported. Server-side locks are not supported.
- Because exclusive activation is not available for NFS-imported file systems, you should take the following precautions to ensure that data is not accidentally overwritten:
  - The server should be configured so that only the cluster nodes have access to the file system.
  - The NFS file system used by a package must not be imported by any other system, including other nodes in the cluster. The only exception to this restriction is when you want to use the NFS file system as a backing store for HPVM. In this case, the NFS file system is configured as the filesystem type in a multi-node package and is imported on more than one node in the cluster.
  - The nodes should not mount the file system on boot; it should be mounted only as part of the startup for the package that uses it.
  - The same NFS file system should be used by only one package.
  - While the package is running, the file system should be used exclusively by the package. If the package fails, do not attempt to restart it manually until you have verified that the file system has been unmounted properly.
In addition, you should observe the following guidelines:
- CacheFS and AutoFS should be disabled on all nodes configured to run a package that uses NFS mounts. For more information, see the NFS Services Administrator's Guide HP-UX 11i version 3 at http://www.hp.com/go/hpux-networking-docs.
- HP recommends that you avoid a single point of failure by ensuring that the NFS server is highly available.
  NOTE: If network connectivity to the NFS server is lost, the applications using the imported file system may hang, and it may not be possible to kill them. If the package attempts to halt at this point, it may not halt successfully.
- Do not use the automounter; otherwise package startup may fail.
- If storage is directly connected to all the cluster nodes and shared, configure it as a local filesystem rather than using NFS.
- An NFS file system should not be mounted on more than one mount point at the same time.
- Access to an NFS file system used by a package should be restricted to the nodes that can run the package.
For more information, see the white paper Using NFS as a file system type with Serviceguard 11.20 on HP-UX 11i v3, which you can find at http://www.hp.com/go/hpux-serviceguard-docs. This paper includes instructions for setting up a sample package that uses an NFS-imported filesystem. See also the description of the new parameter fs_server, and of fs_type and the other filesystem-related package parameters, in chapter 6 of Managing Serviceguard.
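As a rough sketch of what the white paper describes, an NFS-imported file system is specified in a modular package configuration file with the filesystem-related parameters. The server name, paths, and mount option below are illustrative assumptions; confirm the exact syntax against the white paper and the comments in the package configuration file:

```
# Modular package configuration file excerpt (illustrative values)
fs_name         /share/data         # directory exported by the NFS server
fs_server       nfs1.example.com    # new in A.11.20: the NFS server's hostname
fs_directory    /mnt/data           # mount point on the cluster node
fs_type         "nfs"               # marks this as an NFS-imported file system
fs_mount_opt    "-o llock"          # local (client-side) locks only
```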
NOTE: The addition of the fs_server package parameter alters the output of cmviewcl -f line; this in turn may affect your programs and scripts that parse cmviewcl -f line output.
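If you maintain scripts that parse cmviewcl -f line output, one way to insulate them from new parameters such as fs_server is to parse each line generically into attribute/value pairs rather than assuming a fixed set of fields. The sketch below assumes the output follows an object:name|attribute=value line convention; the sample lines and names are hypothetical, so verify the format against your own cmviewcl output:

```python
def parse_cmviewcl_line(output):
    """Parse cmviewcl -f line style output into a nested dict.

    Each line is assumed to look like either attribute=value (cluster level)
    or object:name|attribute=value (node/package level). Unknown attributes,
    such as the fs_server parameter new in A.11.20, are kept, not rejected.
    """
    result = {}
    for line in output.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        key_part, _, value = line.partition("=")
        # Split off any object qualifiers, e.g. "package:pkg1|fs_server"
        *qualifiers, attribute = key_part.split("|")
        node = result
        for q in qualifiers:
            node = node.setdefault(q, {})
        node[attribute] = value
    return result

# Hypothetical sample of cmviewcl -f line output:
sample = """\
name=cluster1
package:pkg1|autorun=enabled
package:pkg1|fs_server=nfs1.example.com
"""
parsed = parse_cmviewcl_line(sample)
```

Because the parser ignores attribute names it does not recognize, the same script keeps working when a release adds parameters to the output.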
Serviceguard Manager
HP Serviceguard Manager B.03.10 is a web-based HP System Management Homepage (HP SMH) tool that replaces the functionality of the earlier Serviceguard management tools. Serviceguard Manager allows you to monitor, administer, and configure a Serviceguard cluster from any system with a supported web browser. Serviceguard Manager does not require additional software installation. Instead, using your browser, you log into HP System Management Homepage (SMH) and access the HP Serviceguard Manager tool, as well as other system management tools. The HP Serviceguard Manager Main Page provides you with a summary of the health of the cluster, including the status of each node and its packages.
DSAU Integration
HP Serviceguard Manager uses the Distributed Systems Administration Utilities (DSAU) to display the consolidated cluster log (syslog) and consolidated package logs. You can find more information on DSAU in the Distributed Systems Administration Utilities User's Guide, at the address given under Documents for This Version (page 34).
NOTE: DSAU does not support a local log consolidation server in a cross-subnet cluster. Instead, you can set up a remote log consolidation server on the Quorum Server node or cluster.
New Features
HP Serviceguard Manager version B.03.10 supports Serviceguard A.11.20 on HP-UX 11i v3. The following are new capabilities in B.03.10:
- Support for Site-aware Metrocluster Configuration (Site-Aware Cluster + Site Controller package): Support for configuration of sites to form a site-aware cluster, for Site Controller package configuration, and for moving the Site Controller package within a site and across sites.
- Manual Site Switching Enhancement: If the failover policy is SITE_PREFERRED_MANUAL, an alert is displayed when a package fails and manual intervention is required to start the package in the same site or another site.
- Support for Serviceguard Toolkit for Oracle Data Guard B.01.00 and the April 2011 Patch (PHSS_41640) or later: Support for configuration, administration, and monitoring of modular packages with Serviceguard toolkits for Oracle Data Guard and ECMT Oracle/SGeRAC.
- Support for Serviceguard Extension for Oracle E-business Suite Toolkit B.02.00: Configuration, monitoring, and administration support for the new Serviceguard Extension for Oracle E-business Suite (SGeEBS) Toolkit.
- Support to Open Toolkit README Files: Open the README file for any installed toolkit from the Toolkits configuration page, for more information on the toolkit and its parameters.
- Single-click Access to Package Logs: Launch a package log window from the operation log window for each of the packages involved in a particular administration operation, to get the latest information on the progress of the commands as well as for better troubleshooting.
- Support for SG-CFS-pkg (SMNP) Configuration: Configure the SMNP package, SG-CFS-pkg, for CVM/CFS environments.
- Run cDSF While Adding a New Node: Allows you to run the cDSF command while adding a new node to the cluster, to have uniform cDSF names across the cluster.
- Support for Co-existence of SGeRAC with ECMT: Support for Oracle RAC (SGeRAC toolkit) packages to co-exist with Oracle single-instance (ECMT Oracle toolkit) packages in the same Serviceguard cluster.
- Easy Deployment of a CFS Cluster (SG-CFS-pkg SMNP package): If the appropriate SG SMS bundle is found to be installed, you will have the option to deploy a CFS cluster. The SG-CFS-pkg (SMNP) will be created automatically.
- SGeRAC Cluster Easy Deployment: If the SGeRAC bundle is found to be installed during cluster configuration, the network configuration will be analyzed to determine whether the cluster being deployed is a candidate SGeRAC cluster, and you will be informed of the results.
- Cluster Verification for Metrocluster: An enhancement to the existing cluster verification feature for Metrocluster-specific checks.
- Metrocluster Easy Deployment (Site-Aware + Non-Site-Aware): If the appropriate Metrocluster, SMS, and SGeRAC bundles are found to be installed, you will be given options for configuring a site-aware Metrocluster with minimal steps.
- Package Easy Deployment: Deploy the following modular packages with minimal user input and automatic package dependency configuration:
  - SGeRAC OC Toolkit package
  - SGeRAC RAC DB Toolkit package
  - ECMT Oracle Toolkit package
  - SGeEBS Toolkit package
  - Site Controller package
  For more information, see the white paper Using Easy Deployment in Serviceguard and Metrocluster environments at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard > White papers.
- NFS Toolkit Enhancement: A new parameter, SUPPORTED_NETIDS, is added in the Serviceguard Manager GUI for the HA NFS toolkit.
- SGeRAC ASM Disk Group Package Enhancements: Displays actual values in the SGeRAC RAC database and ASM disk group package property sheets. These parameters are removed from the configuration page display because you need not enter values for them.
- Modular CFS Package Integration (Multiple CVM Disk Groups and CFS Mount Points): You can merge multiple CVM Disk Group (DG) and CFS Mount Point (MP) packages that are needed for a single application.
Internet Explorer 7 Zooming Problem

Internet Explorer 7 introduced a zoom feature that allows the user to zoom in and out of the page being viewed. However, its implementation is known to have problems: HTML elements in a page are not zoomed proportionally, so the resulting page is displayed in an unpredictable manner. Internet Explorer 8 fixed most of these problems and behaves in the same way as Firefox 3.x.

HP System Management Homepage (HPSMH) Timeout Default

In some situations (for example, when a remote cluster node is disconnected from the network), Serviceguard commands may take longer than 30 seconds to return. In this case, if the HPSMH 3.0 UI timeout value is left at its default value (20 seconds), Serviceguard Manager information will not be loaded correctly into HPSMH. To ensure Serviceguard Manager has sufficient time to execute Serviceguard commands, increase the UI timeout value to 60 seconds (Settings -> Security -> Timeouts -> UI timeout).
Help Subsystem
Use this section to become familiar with Serviceguard Manager. Once Serviceguard Manager is running, move your mouse over a field on the read-only property pages to see a tooltip with a brief definition of that field. You can also access the online help by clicking the button located in the upper-right corner of the screen to view overview and procedure information. Start with the help topic Understanding the HP Serviceguard Manager Main Page. You should also read the help topic About Security, which explains HP Serviceguard Manager Access Control Policies as well as root privileges.
See the online help topic About Security for more information. You must also have created the security bootstrap file cmclnodelist; see Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual for instructions.
About vgchange -T
Serviceguard supports vgchange -T, which allows multi-threaded activation of volume groups on HP-UX 11i v3 systems. This means that when the volume group is activated, physical volumes (disks or LUNs) are attached to the volume group in parallel, and mirror copies of logical volumes are synchronized in parallel, rather than serially. This can improve a package's startup performance if its volume groups contain a large number of physical volumes. To enable vgchange -T for all of a package's volume groups, set enable_threaded_vgchange to 1 in the package configuration file (the default is 0, meaning that multi-threaded activation is disabled). Note that, in the context of a Serviceguard package, this affects the way physical volumes are activated within a volume group; another package parameter, concurrent_vgchange_operations, controls how many volume groups the package can activate simultaneously.

IMPORTANT: Make sure you read the configuration file comments for both concurrent_vgchange_operations and enable_threaded_vgchange, as well as the vgchange (1m) manpage, before configuring these options.
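For example, a package whose volume groups contain many physical volumes might set both parameters in its configuration file as follows. The values shown are illustrative only; read the configuration file comments before choosing your own:

```
# Package configuration file excerpt (illustrative values)
enable_threaded_vgchange        1   # attach PVs within each VG in parallel (default 0)
concurrent_vgchange_operations  2   # number of VGs the package may activate at once
```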
with LVM version B.11.31.0809 and Serviceguard A.11.19 or later. (This support was first introduced in a patch to Serviceguard A.11.18.)

NOTE: You are not required to move to LVM 2.x volume groups; everything will work as before if you do nothing. If you do use LVM 2.x volume groups, you can still manage them with the same commands as before, although you may have to make minor changes to any scripts you use that parse the output of lvdisplay, vgdisplay, pvdisplay, or vgscan, as the output of these commands has changed slightly. In addition, new options are available for some commands. For more information, see the white paper LVM 2.0 Volume Groups in HP-UX 11i v3 at http://www.hp.com/go/hpux-core-docs. For information about all other aspects of LVM on HP-UX 11i v3, see the Logical Volume Management volume (volume 3) of the HP-UX System Administrator's Guide, at the same address.
About cmappmgr
cmappmgr is a utility that allows you to launch and monitor processes on HP Integrity Virtual Machines (HPVM) guest nodes. For information about HPVM, see Support for HP Integrity Virtual Machines (HPVM) (page 33). cmappmgr is operating-system-independent, supporting HP-UX, Linux, and Windows VMs. cmappmgr on the host communicates via SSL connections with a lightweight module on the VM guest, cmappserver. cmappmgr exits when the process that is being monitored does. It can be run as a service in a Serviceguard package, or invoked from an external script in a modular package or from a run and halt script in a legacy package. (See Chapter 6 of Managing Serviceguard for information about modular and legacy packages.) cmappmgr is packaged as a Serviceguard command. cmappserver is packaged as a depot, rpm, or exe (for HP-UX, Linux, or Windows, respectively), which can be copied from the host to a VM guest and installed there. For more information, see the white paper Designing High Availability Solutions with HP Serviceguard and HP Integrity Virtual Machines, which you can find at the address given under Documents for This Version (page 34).

HPVM 4.1 Support for Windows 2008 Guests

HPVM 4.1 supports Windows 2008 guests, and you can use cmappmgr to monitor processes on these guests. To enable this capability you need to do the following:
- Install the HPVM 4.1 July patches. See the latest version of the HP Integrity Virtual Machines Version 4.1 Release Notes, which you can find at http://www.hp.com/go/hpux-hpvm-docs.
- Install the Windows 2008 guest operating system on the host.
- Edit C:\Program Files\Hewlett-Packard\cmappserver\conf\wrapper.conf on the Windows 2008 guest to insert the following line:
  wrapper.java.additional.1=-Dos.name="Windows 2003"
  This line should go after the following lines:
#Java Additional Parameters #wrapper.java.additional.1=-Dprogram.name=C:\Program Files\Hewlett-Packard\cmappserver\cmappserver.bat
Considerations when Upgrading Serviceguard

.rhosts

If you relied on .rhosts for access in the previous version of the cluster, you must now configure Access Control Policies for the cluster users. For instructions on how to proceed, see the subsection Allowing Root Access to an Unconfigured Node under Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual.

cmclnodelist

When you upgrade from an earlier version, Serviceguard converts cmclnodelist entries into new entries written into the cluster configuration file during the upgrade, as follows:

  USER_NAME <user_name>
  USER_HOST <host_node>
  USER_ROLE Monitor

A wildcard + (plus) is converted as follows:

  USER_NAME ANY_USER
  USER_HOST ANY_SERVICEGUARD_NODE
  USER_ROLE Monitor

After you complete the upgrade, use cmgetconf to create and save a copy of the new configuration. If you then do a cmapplyconf, be sure it applies the newly migrated Access Control Policies.

Considerations when Installing Serviceguard

When you install Serviceguard for the first time on a node, the node is not yet part of a cluster, and so there is no Access Control Policy. For instructions on how to proceed, see the subsection Allowing Root Access to an Unconfigured Node under Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual.
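For example, assuming a hypothetical bootstrap entry granting access to a non-root user oper from node node1 (the host/user line format shown is the conventional cmclnodelist form; verify it against Chapter 5 of Managing Serviceguard), the conversion would look like this:

```
# /etc/cmcluster/cmclnodelist entry before the upgrade (hypothetical):
node1.example.com  oper

# Entries written to the cluster configuration file by the upgrade:
USER_NAME oper
USER_HOST node1.example.com
USER_ROLE Monitor
```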
For information on the Distributed Systems Administration Utilities (DSAU), see the latest version of the Distributed Systems Administration Utilities Release Notes and the Distributed Systems
Administration Utilities User's Guide at http://www.hp.com/go/hpux-core-docs: go to the HP-UX 11i v3 collection and scroll down to Getting started. For information about the Event Monitoring Service, see the following documents at http://www.hp.com/go/hpux-ha-monitoring-docs > HP Event Monitoring Service:
- Using the Event Monitoring Service
- Using High Availability Monitors
The Event Monitoring Service (EMS) and the EMS Developer's Kit are available for download at http://www.hp.com/go/softwaredepot -> High Availability. Other relevant HP-UX documentation includes:
- HP-UX System Administrator's Guide at http://www.hp.com/go/hpux-core-docs. This multi-volume manual replaces Managing Systems and Workgroups as of HP-UX 11i v3. For information about the organization of the set, see the Preface to the Overview volume.
- The latest HP Auto Port Aggregation Release Notes and other APA documentation at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.
- The latest version of the HP-UX VLAN Administrator's Guide and other VLAN documentation at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.
Further Information
Additional information about Serviceguard and related high availability topics can be found at http://www.hp.com/go/softwaredepot -> High Availability. Online versions of user's guides and white papers are available at http://www.hp.com/go/hpux-serviceguard-docs. Support tools and information are available from the Hewlett-Packard IT Resource Centers:
- http://us-support.external.hp.com (Americas and Asia Pacific)
- http://europe-support.external.hp.com (Europe)
Compatibility
For complete compatibility information see the Serviceguard/SGeRAC/SMS/Serviceguard Manager Plug-in Compatibility and Feature Matrix posted at http://www.hp.com/go/hpux-serviceguard-docs.
Mixed Clusters
Mixed cluster has several meanings in the context of Serviceguard. The following are support statements for the various types of mixed cluster.

Mixed Serviceguard Versions

You cannot mix Serviceguard versions in the same cluster; all nodes must be running the same version of Serviceguard. The sole exception to this rule is a rolling upgrade, during which Serviceguard versions can be mixed temporarily, but no cluster configuration changes are allowed. See Upgrading from an Earlier Serviceguard Release (page 43), and Appendix D of Managing Serviceguard, at the address given under Documents for This Version (page 34).
Mixed Hardware Architecture

As of HP-UX 11i v2 Update 2 (0409) and Serviceguard A.11.16, Serviceguard supports mixed-hardware-architecture clusters, consisting of HP 9000 and Integrity servers. Mixed-hardware-architecture clusters support the same volume managers, at the same version level, as Serviceguard clusters in which the server hardware is of a single type. The following restrictions apply:
- Except during a rolling upgrade, all nodes must be running:
  - The same HP-UX version
    NOTE: HP-UX version in this context means major release, such as 11i v2. It is acceptable to have a mix of different HP-UX Fusion releases for the same major revision (for example, 11i v2 September 2004 and 11i v2 September 2006), although it is generally best to have all nodes running the same Fusion release. A mix of HP-UX 11i v2 and 11i v3 nodes is also allowed, but entails some restrictions; see Mixed HP-UX Operating-System Revisions (page 36). Keep in mind that Serviceguard A.11.20 is supported only on HP-UX 11i v3.
  - The same version of Serviceguard
  - The same version of any volume manager or file system that is independent of HP-UX
  - The same patch level for LVM and SLVM
  - The same patch level for HP-UX, Serviceguard, and volume managers and related subsystems (for example, Veritas VxVM and VxFS)
- All applications running in the cluster must adhere to the vendors' requirements for mixed Integrity and HP 9000 environments.
- The cluster cannot use Oracle RAC. (SGeRAC is not supported in a mixed-hardware-architecture cluster because Oracle RAC does not support mixing hardware architectures within a single RAC cluster.)
For more information about mixed-hardware-architecture clusters, see Configuration Rules for a Mixed HP 9000/Integrity Serviceguard Cluster at http://www.hp.com/go/hpux-serviceguard-docs.

Mixed HP-UX Operating-System Revisions

A Serviceguard A.11.19 cluster can contain a mix of nodes running HP-UX 11i v2 and 11i v3, with certain restrictions. You may want to take advantage of this fact when preparing to upgrade the cluster to A.11.20; see in particular Rules and Restrictions for Clusters in Transition (page 37) and Rules and Restrictions for Heterogeneous Clusters (page 38).

IMPORTANT: Serviceguard A.11.20 is supported only on HP-UX 11i v3.
For the purposes of this discussion we'll identify three broad cases: homogeneous clusters, clusters in transition, and heterogeneous clusters.
NOTE: In all three cases, the discussion refers to mixing HP-UX versions 11i v2 and 11i v3. Unless explicitly stated otherwise, version in this subsection means HP-UX version, not Serviceguard version.
- Homogeneous cluster refers to a cluster that is not being upgraded, and which has no need to include nodes running different major versions of HP-UX. See HP Recommendation for Homogeneous Clusters (page 37).
- Cluster in transition refers to a cluster whose HP-UX version is being upgraded as part of a normal rolling upgrade that occurs over a relatively short period. See Rules and Restrictions for Clusters in Transition (page 37).
- Heterogeneous cluster refers to a cluster that contains a mix of nodes running HP-UX 11i v2 and 11i v3. See Rules and Restrictions for Heterogeneous Clusters (page 38). Such a cluster is either being upgraded from HP-UX 11i v2 to 11i v3 over an extended period, or for some other reason needs to accommodate nodes running both versions of HP-UX. For example, if you are running Serviceguard A.11.19 on HP-UX 11i v2, and are planning to upgrade to Serviceguard A.11.20, which runs only on HP-UX 11i v3, you may decide to upgrade the nodes to HP-UX 11i v3 over a period of time, and then upgrade the cluster to Serviceguard A.11.20.
HP Recommendation for Homogeneous Clusters
All nodes should be running the same major HP-UX version. Major version means a release such as 11i v2 or 11i v3. It is acceptable to have a mix of different HP-UX Fusion releases for the same major revision (for example, 11i v2 September 2004 and 11i v2 September 2006), although it is generally best to have all nodes running the same Fusion release at the same patch level. Serviceguard A.11.20 is supported only on HP-UX 11i v3.
Rules and Restrictions for Clusters in Transition
You can upgrade a cluster from HP-UX 11i v2 to 11i v3 as part of a rolling upgrade. A rolling upgrade includes upgrading to a new version of Serviceguard. Rules, guidelines, and restrictions for rolling upgrades are in Appendix D of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34). See also Upgrading from an Earlier Serviceguard Release (page 43). It is also possible to upgrade the HP-UX version without upgrading Serviceguard, and so avoid the rolling upgrade restrictions. In this case you must make sure that:
- The Serviceguard version bundled with the HP-UX Operating Environment (OE) matches the version already installed.
  CAUTION: You need to pay careful attention to the Serviceguard patch level as well as the Serviceguard version. If you install Serviceguard at a patch level lower than the one the cluster was running, any new features introduced in the higher-level patch will cease to be available; you will need to re-install the higher-level patch on all nodes before you can use its features again.
- The Serviceguard product bundled with the HP-UX OE is installed along with the HP-UX filesets. (The Serviceguard binary files differ between HP-UX 11i v2 and 11i v3, even though the Serviceguard version is the same.)
NOTE:
IMPORTANT: Serviceguard A.1 1.20 is supported only on HPUX 1 v3, so all cluster nodes 1i must be running 1 v3 before you can complete a rolling upgrade to A.1 1i 1.20. In preparation for an upgrade to Serviceguard A.1 1.20, you may want to upgrade the nodes from 1 v2 to 1 v3 over time. The rules that follow provide guidance. 1i 1i A Serviceguard A.1 1.19 cluster can accommodate a mix of nodes running HP-UX 1 v2 and 1 1i 1i v3, with the following restrictions. Each node must be running either HPUX 1 v2 or 1 v3. No other operating system, or 1i 1i operating-system version (such as 1 v1) is permitted. 1i Except during a rolling upgrade of Serviceguard (see Rules and Restrictions for Clusters in Transition (page 37)), all the nodes must be running Serviceguard A.1 1.19. IMPORTANT: This means, for example, that before you can add a node running HP-UX 1 1i v3 to a cluster in which the existing nodes are running 1 v2, all the 1 v2 nodes, and the 1i 1i new 1 v3 node, must have Serviceguard A.1 1i 1.19 installed. See also Recommendations (page 38). Some HPUX 1 v3 features must not be used as long as any nodes are running 1 v2. 1i 1i These features specifically must not be used: LVM 2.x volume groups. Agile addressing (see About Device Special Files (DSFs) (page 32) for more information about agile addressing).
  NOTE: Native multipathing, which is enabled by default on HP-UX 11i v3, can be used on 11i v3 nodes in a heterogeneous cluster.
- The cluster must not be using Oracle RAC. (SGeRAC is not supported in a heterogeneous cluster because Oracle RAC does not support mixing HP-UX versions within a single RAC cluster.)
- The cluster must not be using Veritas CVM or CFS.
- If you are updating nodes from HP-UX 11i v2 to 11i v3, you must use update-ux; cold install is not supported in this context.
- If you are updating nodes from HP-UX 11i v2 to 11i v3, and the Serviceguard version bundled with the HP-UX Operating Environment (OE) is later than the one already installed, rolling upgrade restrictions apply (see Rules and Restrictions for Clusters in Transition (page 37)).
Recommendations
In addition to the above Rules and Restrictions for Heterogeneous Clusters, HP strongly recommends the following:
- All the nodes on a given HP-UX version should be running the same Fusion release, at the same patch level; that is, the 11i v2 nodes should all be running the same 11i v2 Fusion release at the same patch level, and the 11i v3 nodes should all be running the same 11i v3 Fusion release at the same patch level. Keep in mind that Serviceguard A.11.20 is supported only on HP-UX 11i v3.
- All nodes should be at the same Serviceguard patch level.
  CAUTION: If you introduce a node running a lower patch level than that of the existing nodes, any new functionality introduced in the higher-level patch will cease to be available until that higher-level patch is installed on all nodes.
- All nodes should be running the same patch levels for other products used by the cluster.
Bastille Compatibility
To ensure compatibility between Serviceguard (and Serviceguard Manager) and Bastille, do the following, depending on your environment. The files (host.config, for example) are under /etc/opt/sec_mgmt/bastille/defaults/configs/.
If Bastille is started using Sec10Host (host.config) level lockdown:
- Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
- If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
If Bastille is started using Sec20MngDMZ (mandmz.config) level lockdown:
- Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
- If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
- If you are using the Serviceguard WBEM Provider, set IPFilter.block_wbem="N" (the default).
- If you are using Serviceguard IP Monitoring, set IPFilter.block_ping="N" (the default).
If Bastille is started using SIM.config:
- Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
- If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
If Bastille is started using Sec30DMZ (dmz.config) level lockdown:
- Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
- If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
- If you are using the Serviceguard WBEM Provider, set IPFilter.block_wbem="N" (the default).
- If you are using Serviceguard IP Monitoring, set IPFilter.block_ping="N" (the default).
- Add the following rules to ipf.customrules:
  pass in quick proto tcp from any to any port = 2301
  pass in quick proto tcp from any to any port = 2381
  pass in quick from <clusternodes> to any
  pass out quick from any to <clusternodes>
  In the above rules, <clusternodes> are all nodes in the cluster, including the local node. The ipf.customrules file is located under the Bastille directory itself.
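The Sec30DMZ-style changes above can be sketched as a script. The file path and the stock values in the sample below are assumptions based on standard Bastille defaults; verify them against your installation before using anything like this on a real node.

```shell
# Sketch: flip the Bastille settings Serviceguard needs in a Sec30DMZ-style
# profile. On a real node CONFIG would be
# /etc/opt/sec_mgmt/bastille/defaults/configs/dmz.config; a local copy with
# assumed stock values is used here so the sketch is self-contained.
CONFIG=dmz.config
cat > "$CONFIG" <<'EOF'
SecureInetd.deactivate_ident="Y"
MiscellaneousDaemons.snmpd="Y"
IPFilter.block_wbem="N"
IPFilter.block_ping="N"
EOF

# Allow identd and (if the Serviceguard SNMP subagent is in use) snmpd.
sed -e 's/^SecureInetd.deactivate_ident=.*/SecureInetd.deactivate_ident="N"/' \
    -e 's/^MiscellaneousDaemons.snmpd=.*/MiscellaneousDaemons.snmpd="N"/' \
    "$CONFIG" > "$CONFIG.new" && mv "$CONFIG.new" "$CONFIG"

grep -c '="N"' "$CONFIG"    # prints 4: all four settings now permit Serviceguard traffic
```

After changing a profile you would normally re-apply the Bastille configuration; see the Bastille documentation referenced below for the supported procedure.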
IPFilter-Serviceguard rules are documented in the latest HP-UX IPFilter Administrator's Guide, which you can find under http://www.hp.com/go/hpux-security-docs > HP-UX IPFilter Software.
For information on how to configure HP-UX Bastille Sec10Host to allow the identd daemon to run, see the latest version of the Security Management volume of the HP-UX System Administrator's Guide under http://www.hp.com/go/hpux-core-docs. See also the HP-UX Bastille User's Guide installed on your system: /opt/sec_mgmt/bastille/docs/user_guide.txt.
Memory Requirements
Serviceguard needs approximately 15.5 MB of lockable memory on each cluster node.
NOTE: Remember to tune the swap space and the HP-UX kernel parameters nfile, maxfiles, and maxfiles_lim to ensure that they are set high enough for the number of packages you are configuring.
Port Requirements
Serviceguard uses the ports listed below. Before installing, check /etc/services and be sure no other program has reserved these ports.
discard        9/udp
snmp           161/udp
snmp           162/udp
clvm-cfg       1476/tcp
hacl-qs        1238/tcp
hacl-qs        1238/udp
hacl-monitor   3542/tcp
hacl-monitor   3542/udp
hacl-hb        5300/tcp
hacl-hb        5300/udp
hacl-gs        5301/tcp
hacl-gs        5301/udp
hacl-cfg       5302/tcp
hacl-cfg       5302/udp
hacl-probe     5303/tcp
hacl-probe     5303/udp
hacl-local     5304/tcp
hacl-local     5304/udp
hacl-test      5305/tcp
hacl-dlm       5408/tcp
Serviceguard also uses port 9/udp discard during network probing setup when running configuration commands such as cmcheckconf or cmapplyconf and cmquerycl. If the port
is disabled (in inetd.conf), the network probing may be slower and under some conditions error messages may be written to syslog. Serviceguard also uses dynamic ports (typically in the range 49152-65535) for some cluster services. If you have adjusted the dynamic port range using ndd(1M), alter your firewall rules accordingly.
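The pre-installation check of /etc/services described at the start of this section can be sketched as a script. The sample services file and its conflicting entry below are fabricated for illustration; on a real node you would point SERVICES at /etc/services and list all the ports shown above.

```shell
# Sketch: scan a services database for entries that conflict with the
# Serviceguard ports listed above. A sample file stands in for /etc/services.
SERVICES=services.sample
cat > "$SERVICES" <<'EOF'
myapp      5300/tcp    # example conflict with hacl-hb
hacl-cfg   5302/tcp
EOF

conflicts=0
while read -r name port; do
    # An entry for the same port/protocol under a different name is a conflict.
    found=$(awk -v p="$port" -v n="$name" '$2 == p && $1 != n {print $1}' "$SERVICES")
    if [ -n "$found" ]; then
        echo "Port $port is reserved by $found (Serviceguard needs it for $name)"
        conflicts=$((conflicts + 1))
    fi
done <<'EOF'
hacl-hb 5300/tcp
hacl-gs 5301/tcp
hacl-cfg 5302/tcp
hacl-probe 5303/tcp
EOF
echo "$conflicts conflicting port(s) found"    # prints: 1 conflicting port(s) found
```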
In addition, if you are using DSAU consolidated logging and decide to use the TCP transport, HP recommends you use TCP port 1775. This port is configurable; if port 1775 is already being used by another application, configure and open another free port when you configure the firewall.
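Following the pattern of the ipf.customrules entries shown earlier in the Bastille section, a rule opening the recommended DSAU logging port might look like the line below. The rule itself is an assumption modeled on those entries, not taken from the IPFilter guide; adapt it to your firewall policy.

```
pass in quick proto tcp from any to any port = 1775
```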
System Firewalls
When using a system firewall such as HP-UX IPFilter with Serviceguard, you must leave open the ports listed above and follow specific IPFilter rules required by Serviceguard; these are documented in the latest version of the HP-UX IPFilter Administrator's Guide, available from http://www.hp.com/go/hpux-security-docs > HP-UX IPFilter Software.
Installing Serviceguard
Serviceguard will automatically be installed when you install an HP-UX Operating Environment that includes it (HAOE or DCOE). To install Serviceguard independently, follow these broad steps:
CAUTION: This release of Serviceguard requires 11i v3. If you are already running earlier versions of Serviceguard and HP-UX, see Upgrading from an Earlier Serviceguard Release (page 43) for more information.
If you intend to use an alternate Quorum Server subnet (see About the Alternate Quorum Server Subnet (page 30)), and the new cluster will use an existing Quorum Server, make sure you are running Quorum Server version A.04.00. If not, you must upgrade the Quorum Server before you proceed; see Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9).
1. Install or upgrade to HP-UX 11i v3 before loading Serviceguard Version A.11.20. For information and instructions, see the HP-UX Installation and Update Guide for the target release at http://www.hp.com/go/hpux-core-docs.
2. Use the swinstall command to install Serviceguard, product number T1905CA. For more information about swinstall, see the swinstall(1M) manpage and the Software Distributor Administration Guide for HP-UX 11i v2 or 11i v3.
3. Verify the installation. Use the following command to display a list of all installed components:
   swlist -R T1905CA
The filesets that make up the Serviceguard product are:
Serviceguard.CM-SG
SGManagerPI.SGMGRPI
SGWBEMProviders.SGPROV-CORE
SGWBEMProviders.SGPROV-DOC
SGWBEMProviders.SGPROV-MOF
CM-Provider-MOF.CM-MOF
CM-Provider-MOF.CM-PROVIDER
Cluster-OM.CM-DEN-MOF
Cluster-OM.CM-DEN-PROV
Cluster-OM.CM-OM
Cluster-OM.CM-OM-AUTH
Cluster-OM.CM-OM-AUTH-COM
Cluster-OM.CM-OM-COM
Cluster-OM.CM-OM-MAN
Package-CVM-CFS.CM-CVM-CFS
Package-CVM-CFS.CM-CVM-CFS-COM
Package-Manager.CM-PKG
Package-Manager.CM-PKG-MAN
Cluster-Monitor.CM-CORE
Cluster-Monitor.CM-CORE-COM
Cluster-Monitor.CM-CORE-MAN
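The step-3 verification could be scripted by checking swlist output for the filesets above. The sample output file here is fabricated for the sketch; on a cluster node you would capture the output of swlist -R T1905CA directly.

```shell
# Sketch: confirm expected Serviceguard filesets appear in swlist output.
# swlist.out stands in for: swlist -R T1905CA > swlist.out
cat > swlist.out <<'EOF'
  Serviceguard.CM-SG
  Cluster-Monitor.CM-CORE
  Package-Manager.CM-PKG
EOF

missing=0
for fs in Serviceguard.CM-SG Cluster-Monitor.CM-CORE \
          Package-Manager.CM-PKG Cluster-OM.CM-OM; do
    if ! grep -q "$fs" swlist.out; then
        echo "Missing fileset: $fs"
        missing=$((missing + 1))
    fi
done
echo "$missing fileset(s) missing"    # prints: 1 fileset(s) missing
```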
NOTE: There are files in CM-CORE that are reserved for HP support. Do not change these files. Do not move, alter, or delete the following:
/usr/contrib/bin/cmcorefr
/usr/contrib/bin/cmdumpfr
/usr/contrib/bin/cmfmtfr
/usr/contrib/Q4/lib/q4lib/cmfr.pl
/var/adm/cmcluster/frdump.cmcld.x (where x is a digit)
NOTE: If you did a swremove of an older version of Serviceguard before the swinstall, a zero-length binary configuration file (/etc/cmcluster/cmclconfig) may be left on your system. Remove this file before you issue the swinstall command. If you do not remove the zero-length binary configuration file, the installation will proceed correctly, but you may see error or warning messages such as:
Bad binary config file directory format. Could not convert old binary configuration file.
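The cleanup described in the note above can be sketched as follows. On a real node CONFIG would be /etc/cmcluster/cmclconfig; a local path is used here so the sketch is self-contained.

```shell
# Sketch: remove a leftover zero-length binary configuration file before
# running swinstall, as the note recommends.
CONFIG=cmclconfig          # on a cluster node: /etc/cmcluster/cmclconfig
: > "$CONFIG"              # simulate the zero-length leftover from swremove

# -f: the file exists; ! -s: it has zero length
if [ -f "$CONFIG" ] && [ ! -s "$CONFIG" ]; then
    rm -f "$CONFIG"
    echo "removed zero-length $CONFIG"
fi
```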
given under Documents for This Version (page 34); see also Non-Rolling Upgrade Using DRD (page 44).
CAUTION: Special considerations apply to a rolling or non-rolling upgrade to Serviceguard A.11.19; see New Cluster Manager (page 8).
If you are upgrading both the Quorum Server and Serviceguard, upgrade the Quorum Server before you upgrade Serviceguard.
CAUTION: If you are using an alternate Quorum Server subnet (page 30), and you are not already running Quorum Server version A.04.00, you must upgrade to version A.04.00 before you proceed; see Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9).
If you are upgrading from a release earlier than A.11.16, see Access changes as of A.11.16 (page 33).
For information about supported Serviceguard versions, see the support matrix at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard.
CAUTION: Make sure that no package is in maintenance mode when you upgrade Serviceguard; see Maintaining a Package: Maintenance Mode in chapter 7 of Managing Serviceguard for more information about maintenance mode.
If, after reading and understanding the restrictions, you decide to perform a non-rolling upgrade using DRD, follow the instructions under Performing a Non-Rolling Upgrade Using DRD in Appendix D of the latest edition of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34).
Restrictions for DRD Upgrades
DRD is available only on HP-UX 11i v3.
As of the date of these release notes, the only paths that are supported for DRD upgrade are:
- from Serviceguard version A.11.18 with patch PHSS_37602 (or a later cumulative patch) to:
  - A.11.19
  - an A.11.19 patch release
  - A.11.20
  - an A.11.20 patch release
- from Serviceguard version A.11.19 to:
  - an A.11.19 patch release
  - A.11.20
  - an A.11.20 patch release
IMPORTANT: Upgrades from earlier releases via DRD are not currently supported.
DRD upgrades to version A.11.19 require that you make a manual fix to a checkinstall script in the software depot. You need to make this fix if:
- You are using drd runcmd update-ux to upgrade to Serviceguard A.11.19; or
- You are using drd runcmd swinstall to upgrade to Serviceguard A.11.19.
For instructions on making the fix, see the Workaround Notes for defect QXCR1000901306 at ITRC.hp.com; see Fixed in This Version (page 49) for more information about looking up defects.
You do not need to make this fix if:
- You are upgrading from Serviceguard A.11.18 or A.11.19 to A.11.20, or
- You are using drd runcmd swinstall to upgrade from one revision of Serviceguard A.11.19 to a later revision, or
- You are using drd runcmd update-ux to upgrade to an HP-UX OE containing SG A.11.20.
As of the date of these release notes, DRD upgrade is supported for clusters that use the LVM or VxVM volume manager.
IMPORTANT: See the HP Serviceguard Storage Management Suite A.04.00 Release Notes and HP Serviceguard Storage Management Suite A.03.00 Release Notes for information about DRD upgrades in CVM/CFS environments.
- Use the DRD software released with the September 2009 release of HP-UX 11i v3 or later. You do not have to upgrade the operating system itself, so long as you are running 11i v3. HP recommends using the latest version of the DRD software, which you can obtain free from HP (see Upgrade Using DRD above).
- Serviceguard does not support booting from a clone disk made on another system (sometimes referred to as DRD re-hosting).
- The cluster must meet both general and release-specific requirements for a rolling or non-rolling upgrade; see the remainder of this section and the sections Guidelines for Rolling Upgrade and Guidelines for Non-Rolling Upgrade in Appendix D of the latest edition of Managing Serviceguard.
- You must follow the instructions for DRD upgrades in Managing Serviceguard; see Performing a Rolling Upgrade Using DRD or Performing a Non-Rolling Upgrade Using DRD in Appendix D of the latest edition of that manual.
IMPORTANT: The following information is additional to the instructions in Appendix D of Managing Serviceguard.
If the install depot is on tape (or is a single file) you must copy it onto the clone disk or an upgrade server before running swinstall or update-ux to upgrade the clone. Do one of the following:
- use swcopy to copy the depot onto the active root disk, then clone the disk, then install the software from the depot on the clone disk; or
- use swcopy to copy the depot to an external server, then use the depot on the external server to install the software onto the clone disk.
You should also do this if you are going to use update-ux and the install depot is on a CD mounted on the local system, or is a directory in the active system (for example, in a file system mounted on the active root disk). In addition, make sure you do not use control-C to escape from a drd runcmd swinstall command, as this can cause problems.
Rolling Upgrade
In some cases you can upgrade Serviceguard and HP-UX without bringing down the cluster; you do this by upgrading one node at a time, while that node's applications are running on an alternate node. The process is referred to as a rolling upgrade, and is described in Appendix D of the Managing Serviceguard manual.
Requirements for Rolling Upgrade to A.11.20
To perform a rolling upgrade to Serviceguard A.11.20, you must be running:
- HP-UX 11i v3, and
- Serviceguard A.11.19
If you are not already running A.11.19, you may be able to do a rolling upgrade to A.11.19, and from A.11.19 to A.11.20. See the next two subsections.
Requirements for Rolling Upgrade to A.11.19
IMPORTANT: Although you can upgrade to Serviceguard A.11.19 while still on HP-UX 11i v2, you must upgrade to HP-UX 11i v3 in order to upgrade to Serviceguard A.11.20.
Rolling upgrade to Serviceguard A.11.19 is supported only if you are upgrading from:
- Serviceguard A.11.15 or greater on HP Integrity systems running HP-UX 11i v2 or 11i v3; or
- Serviceguard A.11.16 or greater on HP 9000 systems running HP-UX 11i v2 or 11i v3.
IMPORTANT: If you are upgrading from A.11.16 on HP-UX 11i v2, you must first install patch PHSS_31072 or a later patch.
See also the requirements listed above under Upgrading from an Earlier Serviceguard Release (page 43), and the Rolling Upgrade Exceptions that follow.
Obtaining a Copy of Serviceguard A.11.19
If you are running a release earlier than A.11.19, and you need to obtain a copy of A.11.19 so that you can do a rolling upgrade to A.11.20, you can download the software from the web at http://www.software.hp.com/kiosk. Log in as follows:
User name: ESS_SG1119_KIOSK
Password: upgrade21120
Serviceguard for instructions in this case. See About Device Special Files (DSFs) (page 32) of these Release Notes for more information about agile addressing.
Upgrading from an Earlier Release if You Are Not Using Rolling Upgrade (Non-Rolling Upgrade)
If your cluster does not meet the requirements for a rolling upgrade, or you decide not to use rolling upgrade for some other reason, you must bring down the cluster (cmhaltcl) and then upgrade Serviceguard and HP-UX on the nodes. See Appendix D of the Managing Serviceguard manual. You can perform a non-rolling upgrade (that is, an upgrade performed while the cluster is down) from any HP-UX/Serviceguard release.
Uninstalling Serviceguard
To uninstall the Serviceguard software, run the SD-UX swremove command. Before removing software, note the following:
1. Serviceguard must be halted (not running) on the node from which the swremove command is issued.
2. The system from which the swremove command is issued must be removed from the cluster configuration. (If the node is not removed from the cluster configuration first, swremove will cause the current cluster to be deleted.)
3. The swremove command should be issued from one system at a time. That is, if Serviceguard is being de-installed from more than one system, it should be removed from one system at a time.
If a zero-length binary configuration file (/etc/cmcluster/cmclconfig) is left on your system, you should remove it.
On a system to which the patch has been applied, you will be able to perform online replacement of hot-swappable cards (without bringing down the cluster). See Replacing LAN or Fibre Channel Cards in Chapter 8 of Managing Serviceguard for more information. (You can find the manual at http://www.hp.com/go/hpux-serviceguard-docs.) NOTE: If for some reason you need to proceed without the patch, you must follow the Off-Line Replacement procedure under Replacing LAN or Fibre Channel Cards in Chapter 8 of Managing Serviceguard.
Additional fixes are listed in the patch form text, which is delivered with the patches.
QXCR1001007816: Serviceguard 11.20 cmcld abort on unexpected UDP message version
QXCR1001007347: Serviceguard 11.20 installation does not preserve hacl-cfg options
QXCR1001005198: cmcld has a file descriptor and memory leak when connect errors occur
QXCR1001005241: cmmodnet -t incorrectly plumbs secondary interface on lan
QXCR1001005296: cmcld runs into an assertion at cluster start
QXCR1001003944: SG cross subnet functionality breaks non symmetric cluster configurations
QXCR1000984406: cmruncl -n may fail if MEMBER_TIMEOUT is 60 seconds or more
QXCR1000984418: cmclconfd -p does not process query requests from commands after nessus scan
QXCR1000984388: Packages may re-start if they are halted at the same time and have dependencies
QXCR1000984408: cmproxyd does not clean up named pipe files in /var/adm/cmcluster/proxies dir
QXCR1000984401: script_log_file does not resolve more than one $SG_PACKAGE variable
QXCR1000984411: Serviceguard 11.19 unlimited modular package service restarts is impossible
QXCR1000957441: ALL nodes in ALL 11.19 clusters fail after quorum server re-configuration
QXCR1000945173: all cmcld threads in ksleep() safetytimer expiration node TOC
QXCR1000939872: cmcld abort due to closing the same fd twice after QS failure
QXCR1000930233: 11.19 cmclconfd liable to core dump and log messages omit hostname
QXCR1000926553: cmclconfd hang
QXCR1000924956: SG cmcld does not log .cm_start_time message correctly in syslog after abort
QXCR1000924958: cmcld SIGSEGV when starting a 1 node cluster after upgrade to A.11.19
QXCR1000924001: cmfileassistd should not die after 1min of inactivity
QXCR1000924069: Serviceguard does not detect cluster lock recovery
QXCR1000924116: cmdisklock core dumps if the open fails on a locklun
QXCR1000923632: cmcld aborts after short hangs on one-node-cluster
QXCR1000923641: cmcld aborts after short hangs on one-node-cluster
Additional fixes are listed in the patch form text, which is delivered with the patches.
Additional fixes are listed in the patch form text, which is delivered with the patches.
Known Problems
This section lists problems in Serviceguard Version A.11.20 known at the time of publication. This list is subject to change without notice. More recent information may be available from your HP support representative, or on the Hewlett-Packard IT Resource Center (ITRC): http://www.itrc.hp.com (Americas and Asia Pacific) or http://www.europe.itrc.hp.com (Europe). You can find details of each defect on the ITRC web site. See Fixed in This Version (page 49) for instructions on looking up a defect.
Platform Release
A platform release is a stable version of Serviceguard, which is the preferred environment for the majority of Serviceguard customers. Platform releases may also contain new Serviceguard features. These releases are supported for an extended period of time, determined by HP. Patches will be made available within the extended support time frame even though a newer version of Serviceguard is available. Serviceguard A.11.20 is a platform release.
NOTE: For compatibility information for this and earlier releases, see the Serviceguard Compatibility and Feature Matrix at the address given under Documents for This Version (page 34). See also Compatibility Information and Installation Requirements (page 35) in these Release Notes.
Feature Release
A feature release contains new Serviceguard features. Feature releases are for customers who want to use the latest features of Serviceguard. In general, feature releases will be supported until a newer version of Serviceguard becomes available. In order to receive fixes for any defects found in a feature release after a newer version is released, you will need to upgrade to the newer, supported version.
Patch
A patch to a release may be issued in response to a critical business problem found by a Serviceguard customer, or a patch may enable new features. (Such features do not affect the running of an existing Serviceguard cluster until you activate them, for example by reconfiguring the cluster or reworking a package.) In the case of a patch, the following is guaranteed:
- Patch-specific release testing is performed before the patch is posted.
- Existing functionality, scripts, etc. will continue to operate without modification.
- All fixes from the previous patch are incorporated.
Certification testing for a patch is recommended only for those fixes that are important to your specific installation, or for new features that you intend to use.
Supported Releases
For information about the support life of a given release and its compatibility with versions of related products, see the Serviceguard Compatibility and Feature Matrix, at http://www.hp.com/ go/hpux-serviceguard-docs.
Version Numbering
Serviceguard releases employ a version numbering string that helps you identify the characteristics of the release. The version string has four parts:
- Alphabetic Prefix
- First Numeric Field
- Second Numeric Field
- Third Numeric Field
When a new release is issued, different portions of the version string are incremented to show a change from a previous version of the product.
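As an illustrative sketch of this scheme, the fields of a version string in the A.BB.CC form used throughout these release notes (for example, A.11.20) can be split apart with standard shell parameter expansion; the parsing logic itself is an assumption, not part of the product:

```shell
# Split a Serviceguard version string such as A.11.20 into its fields.
version="A.11.20"
prefix=${version%%.*}                     # alphabetic prefix: A
rest=${version#*.}
first=${rest%%.*}                         # first numeric field: 11
second=${rest#*.}; second=${second%%.*}   # second numeric field: 20
echo "prefix=$prefix first=$first second=$second"
```

A patch release would carry a third numeric field (for example, A.11.20.00), which the same pattern could peel off in one more step.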
Native Languages
Serviceguard Manager provides Native Language Support, but the command-line interface does not.
IMPORTANT: Even though the command-line interface does not provide Native Language Support, Serviceguard functions correctly whether or not the LANG variable is set to C.
See Serviceguard Manager (page 24) for a list of the languages supported by Serviceguard Manager. Documentation for major releases of Serviceguard is translated into the following languages:
- Japanese
- Simplified Chinese