
HP Serviceguard Version A.11.20 Release Notes

HP Part Number: 5900-1493
Published: April 2011
Edition: 1

Legal Notices

Copyright 1998-2011 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group.

Contents

Publishing History (page 6)
1 Serviceguard Version A.11.20 Release Notes (page 7)
  Announcements (page 7)
    Platform Dependencies (page 7)
    April 2011 Patches (page 7)
    Serviceguard Bundled Components - New Product Structure (page 7)
    Serviceguard Optional Products Not Bundled (page 8)
    Serviceguard A.11.19 Is the Required Basis for Rolling Upgrades (page 8)
    New Cluster Manager (page 8)
    Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9)
    Serviceguard Manager Available from the System Management Homepage (SMH) (page 9)
    Support for Mixed-OS Clusters (HP-UX 11i v2 and 11i v3) (page 9)
    New Support for Version 5.1 Service Pack 1 (SP1) of Veritas Products from Symantec (page 9)
    Support for Veritas Products from Symantec (page 9)
    Version 4.1, 4.2, or 4.3 of HPVM Required (page 9)
    ipnodes Entries Needed in /etc/nsswitch.conf (page 10)
    Legacy Packages (page 10)
      STORAGE_GROUP Deprecated (page 10)
    Do Not Use .rhosts (page 10)
    cmviewconf Obsolete (page 10)
    Serviceguard Extension for Faster Failover Obsolete (page 10)
    RS232 Heartbeat Obsolete (page 10)
    Token Ring and FDDI Obsolete (page 11)
    Parallel SCSI Dual Cluster Lock Obsolete (page 11)
    Parallel SCSI Not Supported for Lock LUN (page 11)
    Cluster Name Restrictions (page 11)
    Optimizing Performance when Activating LVM Volume Groups (page 11)
    High Availability Consulting Services (page 12)
    HP-UX 11i v3 Features Important to Serviceguard (page 12)
      New Bundles (page 12)
      Support for Veritas 5.1 SP1 from Symantec (page 12)
      Native Multipathing, Veritas DMP, and Related Features in HP-UX 11i v3 (page 12)
        PV Links (page 12)
        PCI Error Recovery (page 12)
      Online Replacement of LAN Cards Requires Patch (page 13)
  What's in this Release (page 13)
    New Features for A.11.20 April 2011 Patches (page 13)
    New Features for A.11.20 (page 13)
    Serviceguard on HP-UX 11i v3 (page 14)
    What's Not in this Release (page 15)
    About the New Features (page 15)
      Modular CFS Packages for Reducing Package Usage (page 15)
        Advantages of Modular CFS Packages (page 16)
      Improved Performance while Halting a Non-detached Multi-node Package (page 16)
      Support for Veritas 5.1 SP1 on HP-UX 11i v3 Only (page 16)
      Easy Deployment (page 16)
        Advantages of Easy Deployment (page 17)
        Limitations of Easy Deployment (page 17)
      Halting a Node or the Cluster while Keeping Packages Running (Live Application Detach) (page 18)
        What You Can Do (page 18)
      Cluster-wide Device Special Files (cDSFs) (page 18)
        Points To Note (page 19)
        Where cDSFs Reside (page 19)
      Checking the Cluster Configuration and Components (page 20)
        Limitations (page 21)
        Cluster Verification and ccmon (page 22)
      NFS-mounted File Systems (page 22)
      The LVM and VxVM Volume Monitor (page 24)
  Serviceguard Manager (page 24)
    DSAU Integration (page 24)
    Native Language Support (page 25)
    What You Can Do (page 25)
    New Features (page 26)
    Current Limitations of Serviceguard Manager (page 27)
      Browser and SMH Issues (page 27)
        Pages Launched into New Window rather than Tabs with Firefox Tabbed Browsing (page 27)
        Internet Explorer 7 Zooming Problem (page 28)
        HP System Management Homepage (HPSMH) Timeout Default (page 28)
    Help Subsystem (page 28)
    Before Using HP Serviceguard Manager: Setting Up (page 28)
    Launching Serviceguard Manager (page 29)
    Patches and Fixes (page 29)
  Features Introduced Before A.11.20 (page 29)
    Features First Introduced in Serviceguard A.11.19 Patches (page 29)
    Features First Introduced Before Serviceguard A.11.19 (page 30)
      About olrad (page 30)
      About vgchange -T (page 30)
      About the Alternate Quorum Server Subnet (page 30)
      About LVM 2.x (page 30)
      About cmappmgr (page 31)
        HPVM 4.1 Support for Windows 2008 Guests (page 31)
      About Device Special Files (DSFs) (page 32)
      Support for HP Integrity Virtual Machines (HPVM) (page 33)
        About HPVM and Cluster Re-formation Time (page 33)
      Access changes as of A.11.16 (page 33)
        Considerations when Upgrading Serviceguard (page 34)
        Considerations when Installing Serviceguard (page 34)
  Documents for This Version (page 34)
  Further Information (page 35)
  Compatibility Information and Installation Requirements (page 35)
    Compatibility (page 35)
      Mixed Clusters (page 35)
        Mixed Serviceguard Versions (page 35)
        Mixed Hardware Architecture (page 36)
        Mixed HP-UX Operating-System Revisions (page 36)
      Compatibility with Storage Devices (page 39)
      Bastille Compatibility (page 39)
    Before Installing Serviceguard A.11.20 (page 40)
    Memory Requirements (page 40)
    Port Requirements (page 40)
      Ports Required by Serviceguard Manager (page 41)
    System Firewalls (page 41)
  Installing Serviceguard on HP-UX (page 41)
    Dependencies (page 41)
    Installing Serviceguard (page 41)
    If You Need To Disable identd (page 43)
    Upgrading from an Earlier Serviceguard Release (page 43)
      Upgrade Using DRD (page 44)
        Rolling Upgrade Using DRD (page 44)
        Non-Rolling Upgrade Using DRD (page 44)
        Restrictions for DRD Upgrades (page 45)
      Veritas Storage Management Products (page 46)
      Rolling Upgrade (page 46)
        Requirements for Rolling Upgrade to A.11.20 (page 47)
        Requirements for Rolling Upgrade to A.11.19 (page 47)
        Obtaining a Copy of Serviceguard A.11.19 (page 47)
      Rolling Upgrade Exceptions (page 47)
        HP-UX Cold Install (page 47)
        HP Serviceguard Storage Management Suite and standalone CVM product (page 47)
        Migrating to Agile Addressing if Using Cluster Lock Disk (page 47)
      Upgrading from an Earlier Release if You Are Not Using Rolling Upgrade (Non-Rolling Upgrade) (page 48)
  Uninstalling Serviceguard (page 48)
  Patches for this Version (page 48)
    QXCR1000575890: OLR of a LAN Card in SG cluster fails on HP-UX 11i v3 (page 49)
  Fixed in This Version (page 49)
    A.11.20 Defects Fixed in the April 2011 Patches (page 50)
    Defects Fixed in A.11.20 (page 50)
    Problems Fixed in this Version of the Serviceguard Manager Plug-in (page 52)
  Known Problems (page 53)
    Known Problems for Serviceguard (page 53)
    Known Problems for Serviceguard Manager (page 54)
  About Serviceguard Releases (page 54)
    Types of Releases and Patches (page 54)
      Platform Release (page 54)
      Feature Release (page 54)
      Patch (page 54)
    Supported Releases (page 54)
    Version Numbering (page 55)
  Release Notes Revisions (page 55)
  Native Languages (page 55)

Publishing History
Table 1 Publishing History

Printing Date: April 2011
Part Number: 5900-1493
Edition: First Edition

This new edition provides information on the Serviceguard A.11.20 April 2011 patches that add new features, as well as features first added in A.11.20 and in patches to A.11.19.

1 Serviceguard Version A.11.20 Release Notes


Announcements
This section announces the most important features and limitations of Serviceguard A.11.20. For more information, see What's in this Release (page 13).

NOTE: These Release Notes also include information about features first introduced in A.11.19 patches; see Features First Introduced in Serviceguard A.11.19 Patches (page 29).

Platform Dependencies
IMPORTANT: This new version of Serviceguard is supported on HP-UX 11i v3 only. See Serviceguard on HP-UX 11i v3 (page 14).

April 2011 Patches


See New Features for A.11.20 April 2011 Patches (page 13) for information about new features introduced in the following April 2011 patches:

- PHSS_41628 for Serviceguard A.11.20 on HP-UX 11i v3 and Serviceguard Storage Management Suite A.04.00 (with 5.1 SP1 Veritas) on HP-UX 11i v3 only
- PHSS_41674 for Serviceguard CFS A.11.20 on HP-UX 11i v3

For information about patches required to support HP Storage Management Suite (SMS) bundles, see the HP Serviceguard Storage Management Suite Version A.04.00 for HP-UX 11i v3 Release Notes at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard Storage Management Suite A.04.xx (with 5.1 SP1 Veritas).

Serviceguard Bundled Components - New Product Structure


The following Serviceguard components are now available for HP-UX 11i v3:

Product T1905CA: Serviceguard A.11.20 software and license.

The following products are bundled with Serviceguard, as components of the Serviceguard product, T1905CA:

- HP Serviceguard Manager version B.03.10. This version of the web-based graphical user interface (GUI) is also known as the SMH Plug-in, or simply plug-in, version. See Serviceguard Manager (page 24).
- The Cluster Object Manager (COM), version B.07.00
- HP Serviceguard WBEM Providers SD product, version A.03.10
- HP SG Cluster CVM CFS SD product, version A.11.20
- HP Package-Manager SD product, version A.11.20
- HP Cluster-Monitor SD product, version A.11.20

This product structure allows Serviceguard customers who subscribe to Update services to download Serviceguard from the web (along with the sub-products classed as Serviceguard components and listed above) via Software Update Manager (SUM).


Serviceguard Optional Products Not Bundled


The following optional product is not bundled with Serviceguard, but is delivered free with Serviceguard on the Serviceguard Distributed Components CD, and can also be downloaded from http://www.hp.com/go/softwaredepot/ha: Quorum Server, and the Quorum Server Version A.04.00 Release Notes.

Serviceguard A.11.19 Is the Required Basis for Rolling Upgrades


A.11.19 is the only version of Serviceguard that allows both the older version of the cluster manager and the New Cluster Manager (page 8) to coexist during a rolling upgrade. This means that although you can perform a rolling upgrade to A.11.19, or from A.11.19 to A.11.20, you cannot do a rolling upgrade from a pre-A.11.19 release to a post-A.11.19 release. For example, you can do a rolling upgrade from A.11.18 to A.11.19, but you cannot do a rolling upgrade from A.11.18 directly to A.11.20; you must upgrade to A.11.19 first. See Upgrading from an Earlier Serviceguard Release (page 43) and Rolling Upgrade (page 46).

You can also do a rolling upgrade using a Dynamic Root Disk (DRD). HP recommends this method because it is easier and less disruptive: each node is down for a shorter time. For more information, see Upgrade Using DRD (page 44).

New Cluster Manager


IMPORTANT: Serviceguard A.11.19 introduced a new cluster manager. Because of this change, you must upgrade to Serviceguard A.11.19 before you can do a rolling upgrade to any later release, including Serviceguard A.11.20. If you are already running A.11.19, you can skip the discussion that follows.

In a running cluster, the new cluster manager affects node timeout and failover and cluster re-formation, providing significant performance improvements; see the discussion of MEMBER_TIMEOUT under Cluster Configuration Parameters in the latest version of Managing Serviceguard for more information.

NOTE: Because of these built-in performance improvements, Serviceguard Extension for Faster Failover is no longer offered as a separate product (starting with A.11.19).

There is also a one-time effect on upgrade to Serviceguard A.11.19.


Upgrade will trigger a cluster membership transition from the old to the new cluster manager when the last node in the cluster has rolled to A.11.19. This transition can take up to one second, during which time the old cluster manager will shut down and the new cluster manager will start.

CAUTION: From the time when the old cluster manager is shut down until the new cluster manager forms its first cluster, a node failure will cause the entire cluster to fail. HP strongly recommends that you use no Serviceguard commands other than cmviewcl (1m) until the new cluster manager successfully completes its first cluster re-formation. See the section Special Considerations for Upgrade to Serviceguard A.11.19 in Appendix D of the latest version of Managing Serviceguard for more information.

For further caveats that apply to specific upgrade paths, make sure you read the following sections in these Release Notes: Compatibility Information and Installation Requirements (page 35) and Upgrading from an Earlier Serviceguard Release (page 43).

Quorum Server Upgrade Required if You Are Using an Alternate Address


If you are using an alternate address on a Serviceguard version earlier than A.11.19, you must upgrade the Quorum Server to version A.04.00 before you upgrade the cluster. See About the Alternate Quorum Server Subnet (page 30) for more information.

CAUTION: If you fail to do this, the upgraded cluster will be running without a cluster lock until you have upgraded the Quorum Server.

Serviceguard Manager Available from the System Management Homepage (SMH)


For details, see Serviceguard Manager (page 24).

NOTE: The earlier, station-management version of Serviceguard Manager is obsolete (as of Serviceguard A.11.18). The last version of Serviceguard that supports that product is A.11.17 (A.11.17.01 on HP-UX 11i v3).

Support for Mixed-OS Clusters (HP-UX 11i v2 and 11i v3)


With some limitations, HP now supports Serviceguard clusters in which some nodes are running HP-UX 11i v2 and some 11i v3. You may want to take advantage of this if you are currently running Serviceguard A.11.19 on HP-UX 11i v2 and want to upgrade the nodes to 11i v3 over a period of time, in preparation for upgrading to Serviceguard A.11.20.

IMPORTANT: The cluster must be running Serviceguard A.11.19. Serviceguard A.11.20 does not run on HP-UX 11i v2.

For more information, see Mixed Clusters (page 35).

New Support for Version 5.1 Service Pack 1 (SP1) of Veritas Products from Symantec
With the patches listed under April 2011 Patches (page 7), Serviceguard A.11.20 on HP-UX 11i v3 supports version 5.1 SP1 of Veritas VxVM, CVM, and CFS from Symantec.

Support for Veritas Products from Symantec


For the most up-to-date information, check the HP Serviceguard/SGeRAC/Serviceguard Storage Management Suite/Serviceguard Manager Plug-in Compatibility and Feature Matrix, at the address given under Compatibility Information and Installation Requirements (page 35).

Version 4.1, 4.2, or 4.3 of HPVM Required


If you intend to use HP Integrity Virtual Machines (HPVM) with Serviceguard A.11.20, you must install or upgrade to HPVM 4.1 or 4.2. HPVM 4.3 is supported with the Serviceguard A.11.20 April 2011 patches.

See Support for HP Integrity Virtual Machines (HPVM) (page 33) for more information about HPVM. See also About cmappmgr (page 31).

ipnodes Entries Needed in /etc/nsswitch.conf


Beginning with version A.11.19, Serviceguard uses calls that require ipnodes entries to be configured in /etc/nsswitch.conf. See Safeguarding against Loss of Name Resolution Services in Chapter 5 of Managing Serviceguard for more information.
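For example, here is a hedged illustration of the relevant entries (adapt the lookup order to your environment; the point is that ipnodes must be present alongside hosts):

    # /etc/nsswitch.conf (illustrative excerpt)
    # "files" first keeps cluster-node name resolution working from /etc/hosts
    # even if DNS is unreachable; the ipnodes entry is required as of A.11.19.
    hosts:   files [NOTFOUND=continue] dns
    ipnodes: files [NOTFOUND=continue] dns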

Legacy Packages
As of Serviceguard A.11.20, new package features are being implemented only in modular packages (those created using the method introduced in A.11.18).

IMPORTANT: Support for legacy packages (created using the earlier method) will be withdrawn altogether in a future release. Use the modular method to create new packages whenever possible. See Chapter 6 of Managing Serviceguard for more information.

STORAGE_GROUP Deprecated
The STORAGE_GROUP parameter (in the package configuration file for legacy packages) is deprecated as of Serviceguard A.11.19. Support for it will be withdrawn in a future release. A package using Veritas Cluster Volume Manager from Symantec (CVM) disk groups for raw access (without Veritas Cluster File System from Symantec, or CFS) needs to declare a dependency on SG-CFS-pkg; see Creating the Storage Infrastructure with Veritas Cluster Volume Manager (CVM) in Chapter 5 of Managing Serviceguard for more information.

Do Not Use .rhosts


Do not use the .rhosts file as a means of allowing root access to an unconfigured node. Use $SGCONF/cmclnodelist instead. See Allowing Root Access to an Unconfigured Node in Chapter 5 of Managing Serviceguard for more information.
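As an illustration (the hostnames are hypothetical, and the exact format is described in Managing Serviceguard), a $SGCONF/cmclnodelist granting root on two prospective nodes might contain:

    # $SGCONF/cmclnodelist (illustrative)
    # Each line names a host and a user permitted to configure this node.
    node1.example.com  root
    node2.example.com  root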

cmviewconf Obsolete
cmviewconf is obsolete as of Serviceguard A.11.20. Use cmviewcl (1m) to obtain information about the cluster. See Reviewing Cluster and Package Status in Chapter 7 of Managing Serviceguard for more information.

Serviceguard Extension for Faster Failover Obsolete


Performance improvements in the New Cluster Manager (page 8) make the Serviceguard Extension for Faster Failover (SGeFF) obsolete. If SGeFF is present, it will be removed from the system when you upgrade to Serviceguard A.11.19 (preparatory to a rolling upgrade to A.11.20) or A.11.20.

RS232 Heartbeat Obsolete


The current version of Serviceguard does not support RS232 for the cluster heartbeat (Serviceguard A.11.17 on HP-UX 11i v2 and A.11.16 on HP-UX 11i v1 were the last versions that did).

A minimum Serviceguard configuration needs two network interface cards for the heartbeat in all cases, using one of the following configurations:

- two heartbeat subnets; or
- one heartbeat subnet with a standby; or
- one heartbeat subnet using APA with two physical ports in hot standby mode or LAN monitor mode.


To provide the required redundancy for both networking and mass storage connectivity on servers with fewer than three I/O slots, you may need to use multifunction I/O cards that contain both networking and mass storage ports.

Token Ring and FDDI Obsolete


Serviceguard A.11.20 does not support Token Ring and FDDI technologies for the cluster heartbeat and data networks (Serviceguard A.11.17 on HP-UX 11i v2 and A.11.16 on HP-UX 11i v1 were the last versions that did). HP-UX 11i v3 does not support these two technologies.

The unsupported configurations include physical Token Ring and FDDI interfaces, Virtual LAN (VLAN) interfaces over FDDI or Token Ring, and failover groups of Token Ring and FDDI interfaces in the LAN Monitor mode of the APA product.

Parallel SCSI Dual Cluster Lock Obsolete


You must use Fibre Channel connections for a dual cluster lock; you can no longer implement it in a parallel SCSI configuration (as of Serviceguard A.11.18). See Dual Lock Disk in Chapter 3 of Managing Serviceguard for more information about dual cluster locks.

Parallel SCSI Not Supported for Lock LUN


The lock LUN functionality does not support parallel SCSI; you must use Fibre Channel for a lock LUN. If you need to use parallel SCSI, use an LVM cluster lock disk, or a Quorum Server. For more information about the cluster lock, see Cluster Lock in Chapter 3 of the latest edition of Managing Serviceguard.

Cluster Name Restrictions


The following characters must not be used in the cluster name if you are using the Quorum Server: at-sign (@), equal-sign (=), or-sign (|), semicolon (;).

These characters are deprecated, meaning that you should not use them even if you are not using the Quorum Server, because they will be illegal in a future Serviceguard release. Future releases will require the cluster name to:

- Begin and end with an alphanumeric character
- Otherwise use only alphanumeric characters, or dot (.), hyphen (-), or underscore (_)

Optimizing Performance when Activating LVM Volume Groups


If a package activates a large number of volume groups, you can improve the package's start-up and shutdown performance by carefully tuning the concurrent_vgchange_operations parameter in the package configuration file (or control script, for legacy packages). Tune performance by increasing this parameter a little at a time and monitoring the effect on performance at each step; stop increasing it, or reset it to a lower level, as soon as performance starts to level off or decline. Factors you need to take into account include the number of CPUs, the amount of available memory, the HP-UX kernel settings for nfile and nproc, and the number and characteristics of other packages that will be running on the node.

NOTE: Remember to do this exercise not only on the node on which the package will normally run, but also on the node with the least resources in the cluster, as a failover or other unexpected circumstances could result in that node running the package.

For more information, see the section Optimizing for Large Numbers of Storage Units in Chapter 6 of the latest edition of Managing Serviceguard (in the High Availability collection on http://www.hp.com/go/hpux-serviceguard-docs) and the comments in the package configuration file.
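For example, here is an illustrative excerpt from a modular package configuration file (the value shown is an arbitrary starting point, not a recommendation):

    # Number of volume-group activation/deactivation operations run in
    # parallel at package start and halt; increase gradually while
    # measuring start-up and shutdown times at each step.
    concurrent_vgchange_operations   2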

High Availability Consulting Services


Because Serviceguard clusters can be complex to configure and maintain, HP strongly recommends that you use its high availability consulting services to ensure a smooth installation and rollout; contact your HP representative for more information. You should also work with your HP representative to ensure that you have the latest firmware revisions for disk drives, disk controllers, LAN controllers, and other hardware.

HP-UX 11i v3 Features Important to Serviceguard


Serviceguard A.11.20 runs only on HP-UX 11i v3, which introduces important improvements over 11i v2, particularly in regard to the I/O subsystem. See the subsections that follow, and What's in this Release (page 13).

New Bundles
Serviceguard A.11.20 is now available as a recommended product in the HP-UX 11i v3 HA-OE and DC-OE bundles.

Support for Veritas 5.1 SP1 from Symantec


With the April 2011 Patches (page 7), Serviceguard A.11.20 on HP-UX 11i v3 supports versions 5.0.1 and 5.1 SP1 of VxVM/VxFS, CVM, and CFS. See Support for Veritas 5.1 SP1 on HP-UX 11i v3 Only (page 16).

Native Multipathing, Veritas DMP, and Related Features in HP-UX 11i v3


The HP-UX 11i v3 I/O subsystem provides multipathing and load balancing by default. This is often referred to as native multipathing.

Veritas Volume Manager (VxVM) and Dynamic Multipathing (DMP) from Symantec are supported on HP-UX 11i v3, but do not provide multipathing and load balancing; DMP acts as a pass-through driver, allowing multipathing and load balancing to be controlled by the HP-UX I/O subsystem instead.

When you upgrade a system to HP-UX 11i v3, the I/O subsystem by default will start performing load balancing and multipathing for all multipath devices (whether or not they are managed by VxVM/DMP, and whether or not you decide to migrate the system to agile addressing); you do not have to take any additional steps to make this happen.

For more information about multipathing in HP-UX 11i v3, see the white paper HP-UX 11i v3 Native Multipathing for Mass Storage, and the Logical Volume Management volume of the HP-UX System Administrator's Guide at http://www.hp.com/go/hpux-core-docs. See also About Device Special Files (DSFs) (page 32).

PV Links

Previous editions of Managing Serviceguard recommended creating LVM PV links for LUNs defined in disk arrays. On HP-UX 11i v3, these PV links are redundant; they are supported, but will be inactive unless you turn off native multipathing (in that case they will function as they did in previous releases of HP-UX).

PCI Error Recovery

PCI Error Recovery enables an HP-UX system to detect, isolate, and automatically recover from a PCI error. PCI Error Recovery is enabled on HP-UX 11i v3 systems by default, but HP recommends that it remain enabled in a Serviceguard cluster only if your storage devices are configured with multiple paths and you have not disabled HP-UX native multipathing.


IMPORTANT: If your storage devices are configured with only a single path, or you have disabled multipathing, you should disable PCI Error Recovery; otherwise Serviceguard may not detect that connectivity has been lost, and so may not initiate a failover. For instructions on using the pci_eh_enable parameter to disable PCI Error Recovery, see the Tunable Kernel Parameters section of the latest edition of the PCI Error Recovery Product Note, which you can find at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.
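A minimal sketch of one way to check and change the tunable, assuming pci_eh_enable is managed through kctune on your system (verify the procedure, and any reboot requirement, in the Product Note before making the change):

    # Display the current value of the PCI Error Recovery tunable
    kctune pci_eh_enable

    # Disable PCI Error Recovery (illustrative)
    kctune pci_eh_enable=0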

Online Replacement of LAN Cards Requires Patch


Before you can replace a LAN card online (without bringing down the cluster) as described under Replacing LAN or Fibre Channel Cards in Chapter 8 of Managing Serviceguard, you must apply patch PHNE_35894 or a later cumulative patch. See Patches for this Version (page 48) and QXCR1000575890: OLR of a LAN Card in SG cluster fails on HP-UX 11i v3 (page 49).

For more information about replacing interface cards online, see the Interface Card OL* Support Guide for HP-UX 11i v3, at http://www.hp.com/go/hpux-core-docs > HP-UX 11i v3 > User guide.

What's in this Release


This release, Serviceguard A.11.20, runs only on HP-UX 11i v3 (see Serviceguard on HP-UX 11i v3 (page 14)) and adds new functionality. See the subsections that follow for details; see also the Announcements (page 7). For information about documentation, see Documents for This Version (page 34).

New Features for A.11.20 April 2011 Patches


Serviceguard A.11.20, with the patches listed under April 2011 Patches (page 7), provides the following new capabilities:

- Modular CFS Packages for Reducing Package Usage (page 15)
- Improved Performance while Halting a Non-detached Multi-node Package (page 16)
- Support for VxVM, CVM, and CFS version 5.1 SP1 on HP-UX 11i v3 only (see Support for Veritas 5.1 SP1 on HP-UX 11i v3 Only (page 16))
- Enhancements to Easy Deployment commands (see Easy Deployment (page 16))
- Updates to NFS-mounted file systems (see NFS-mounted File Systems (page 22))
- New Serviceguard Manager capabilities in B.03.10 (see Serviceguard Manager (page 24))

New Features for A.11.20


- New commands (and equivalent capabilities in Serviceguard Manager) provide a quick and easy way to configure and deploy a Serviceguard cluster. See Easy Deployment (page 16).
- A new capability in Serviceguard allows a package's applications to remain running (unmonitored) while you do maintenance on the node or cluster on which the package is running. See Halting a Node or the Cluster while Keeping Packages Running (Live Application Detach) (page 18).


- A new type of device file allows you to deploy a set of persistent device special files that is consistent across the cluster, eliminating the risk of name-space collisions among the cluster nodes. These new device special files are called cluster device special files, or cDSFs. See Cluster-wide Device Special Files (cDSFs) (page 18).

- New Serviceguard capabilities allow you to check the soundness of the cluster configuration, and the health of its components, more thoroughly than you could in the past, and to do so at any time, rather than only when changing the configuration of the cluster or its packages. See Checking the Cluster Configuration and Components (page 20).

- You can now use NFS-mounted (imported) file systems as shared storage in packages. See NFS-mounted File Systems (page 22).
- New capabilities in Serviceguard and HPVM allow online migration of a virtual machine. See the section Online VM Migration with Serviceguard in the white paper Designing high-availability solutions with HP Serviceguard and HP Integrity Virtual Machines, at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard > White papers.

- Serviceguard A.11.20 supports a new failover policy, site_preferred_manual, which prevents automatic failover of a package across SITEs. This capability can be used only in a site-aware disaster-tolerant cluster, which requires Metrocluster (additional HP software).

Serviceguard on HP-UX 11i v3
Serviceguard support for HP-UX 11i v3 was first introduced in Serviceguard version A.11.17.01. The following is a list of some important 11i v3 capabilities that Serviceguard supports.

- Serviceguard supports HP-UX agile addressing, sometimes also called persistent LUN binding, for device special files (DSFs). See About Device Special Files (DSFs) (page 32).
- Serviceguard supports HP-UX native multipathing and load balancing. See Native Multipathing, Veritas DMP, and Related Features in HP-UX 11i v3 (page 12).
- Serviceguard supports the following networking capabilities:
  - The HP-UX olrad -C command, which identifies network interface cards (NICs) that are part of the Serviceguard cluster configuration. You can remove a NIC from the cluster configuration, and then from the system, without bringing down the cluster. See About olrad (page 30).
  - The LAN Monitor mode of APA.

- Serviceguard supports Process IDs (PIDs) of any size, up to the maximum value supported by HP-UX and the node's underlying hardware architecture. Previous versions of HP-UX imposed a limit of 30,000; this limit has been removed as of HP-UX 11i v3. For more information, see the white paper Number of Processes and Process ID Values on HP-UX at http://www.hp.com/go/hpux-core-docs > HP-UX 11i v3 > White papers.

- Serviceguard supports the increased number of LVM volume groups supported as of the HP-UX 11i v3 0809 Fusion release; the maximum number of volume groups you can configure in a Serviceguard cluster is the maximum supported by HP-UX. See the HP-UX documentation for details.


- Serviceguard supports LVM 2.x volume groups, both for data and the cluster lock. See About LVM 2.x (page 30).
- Serviceguard now supports cell OL* (online addition and deletion of cells) on HP Integrity servers that support them. For more information about using Serviceguard with partitioned systems, see the white paper HP Serviceguard Cluster Configuration for HP-UX 11i or Linux Partitioned Systems at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard.

- Serviceguard supports vgchange -T, which allows multi-threaded activation of volume groups. See About vgchange -T (page 30).
- You can use the FSWeb utility to configure LVM volumes in a Serviceguard cluster (and SLVM volumes if the add-on product Serviceguard Extension for Real Application Cluster (SGeRAC) is installed). For more information about FSWeb, see the fsweb (1m) manpage.

- Serviceguard supports the following recently added HP-UX 11i v3 capabilities:
  - Dynamic Root Disk (DRD); see Upgrade Using DRD (page 44).

See also What's Not in this Release (page 15).

What's Not in this Release


- The cmviewcl command no longer supports the -r 11.09 option. -r 11.12 and -r 11.16 are still supported. See the cmviewcl (1m) manpage for details.
- Serviceguard A.11.20 no longer supports versions 5.0 and earlier of Veritas CVM and CFS from Symantec. For the most up-to-date information, check the HP Serviceguard/SGeRAC/Serviceguard Storage Management Suite/Serviceguard Manager Plug-in Compatibility and Feature Matrix, at the address given under Compatibility Information and Installation Requirements (page 35).
- Serviceguard A.11.20 does not currently support the target port congestion control capability in HP-UX 11i v3, because of an LVM issue.
- Serviceguard A.11.20 does not currently support parallel resynchronization of LVM volume groups, because of an LVM issue.

For other obsolete and deprecated features, see Announcements (page 7). See also Rolling Upgrade Exceptions (page 47).

About the New Features


The subsections that follow discuss the major new capabilities introduced in the A.11.20 release, as well as those introduced in patches to A.11.20. Information on using these capabilities is in the latest version of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34).

Modular CFS Packages for Reducing Package Usage


Modular CFS packages give you the flexibility to combine all the CVM disk groups and CFS mount points required by an application into a single package created using the modular style of packaging. As a result, configuring disk groups and mount points requires significantly fewer packages than it does with the legacy style of packaging. This makes the packages in the cluster easier to manage and leaves more packages available for other applications. You can also merge all the checkpoint and snapshot packages into individual modular checkpoint and snapshot packages, respectively.

Using the modular style of packaging provides flexibility, is easier to manage than the legacy style of packaging, and is the recommended approach for creating CFS packages.

NOTE: For this feature to be enabled, both patches, the Serviceguard A.11.20 patch (PHSS_41628) and the Serviceguard CFS A.11.20 patch (PHSS_41674), must be installed.

Advantages of Modular CFS Packages

- A single multi-node package can include multiple disk groups and mount points, thus reducing the number of packages required to build a CVM/CFS storage configuration. Disk groups and mount points can also be placed in separate multi-node packages, but dependencies between the mount points and disk groups must be configured explicitly in the configuration files.
- Modular CFS packages can also be configured and managed by Serviceguard Manager.
- Supports parallel activation of disk groups and parallel mounting of mount points.
- The Serviceguard cluster and package information displayed by cmviewcl (1m) is more compact.

For more information on modular CFS packages, see Creating a Storage Infrastructure with Veritas Cluster File System (CFS) and Managing Disk Groups and Mount Points Using Modular Packages in Chapter 5 of the nineteenth edition of Managing Serviceguard.

Improved Performance while Halting a Non-detached Multi-node Package


The cmhaltpkg (1m) command has been modified so that you will now see a performance improvement when halting a non-detached multi-node package. When cmhaltpkg (1m) is run on a non-detached multi-node package, the package is halted simultaneously on all the nodes, or on the list of nodes specified, reducing the downtime involved in halting the package. This performance improvement depends directly on the number of nodes on which the package needs to be halted; the benefit increases as the number of nodes increases. cmhaltpkg (1m) defaults to this behavior, and the feature is automatically enabled once all the nodes in the cluster are upgraded to the Serviceguard A.11.20 April 2011 patch (PHSS_41628).

Support for Veritas 5.1 SP1 on HP-UX 11i v3 Only


IMPORTANT: For information about patches required to support HP Storage Management Suite (SMS) bundles, see the HP Serviceguard Storage Management Suite Release Notes for your version of the Storage Management Suite at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard Storage Management Suite A.04.xx (with 5.1 SP1 Veritas).

Serviceguard A.11.20 on HP-UX 11i v3 with the patches listed under April 2011 Patches (page 7) supports versions 5.0.1 and 5.1 SP1 of Veritas VxVM, CVM, and CFS from Symantec. Serviceguard A.11.20 on HP-UX 11i v3 with the September 2010 patch supports version 5.0.1 of Veritas VxVM, CVM, and CFS from Symantec, and not 5.0 and earlier.

NOTE: For more information about SMS support for version 5.1 SP1, including information about new capabilities, and patches that are required in addition to those listed under April 2011 Patches (page 7), see the HP Serviceguard Storage Management Suite Version A.04.00 for HP-UX 11i v3 Release Notes at the address given above.

Easy Deployment
In the past you had two main choices for configuring a cluster: using Serviceguard commands, as described in detail in chapter 5 of Managing Serviceguard, or using the Serviceguard Manager

GUI (or some combination of these two methods). As of Serviceguard A.11.20, there is a third option, called Easy Deployment.

The Easy Deployment tool consists of three commands: cmpreparecl (1m), cmdeploycl (1m), and cmpreparestg (1m). In addition, there is a new -N option to cmquerycl, which you can use to obtain networking information for the cluster heartbeat. These commands allow you to get a cluster up and running in the minimum amount of time. The commands:

- Configure networking and security (cmpreparecl (1m))
- Create and start the cluster with a cluster lock device (cmdeploycl (1m))
- Create and import volume groups as additional shared storage for use by cluster packages (cmpreparestg (1m))
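For illustration only, a minimal two-node deployment might look like the following sketch. The hostnames, device file, and volume-group name are hypothetical, and the option letters shown are assumptions based on common Serviceguard conventions; check the manpages for the exact syntax your release accepts:

    # Set up networking and security on the prospective nodes
    cmpreparecl -n node1 -n node2

    # Create and start a two-node cluster named cluster1
    cmdeploycl -c cluster1 -n node1 -n node2

    # Create a shared volume group for use by packages (illustrative names)
    cmpreparestg -l /dev/vgshared -p /dev/cdisk/disk10 -n node1 -n node2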

In the April 2011 patch (PHSS_41628), the cmdeploycl (1m) and cmpreparestg (1m) commands have been enhanced as part of further development of the Easy Deployment feature.

cmdeploycl (1m) is enhanced with the following options:

- A new -s [site] option that can be used to configure site-aware disaster-tolerant clusters, which require Metrocluster software to be installed. It also mandates the use of a quorum server when sites are specified. cmdeploycl (1m) can be used to configure a single-site cluster using either a quorum server or a lock LUN.
- A new -cfs option used to deploy a Serviceguard cluster with SG-CFS-pkg. This is a required option if you intend to enable CVM/CFS on a cluster by configuring SG-CFS-pkg.

cmpreparestg (1m) is enhanced to support the creation and modification of VxVM/CVM disk groups. A new -g option is included to serve this purpose. When -g is used, you can specify the disk group options using the -o dg_opts option.

For more information, see the manpages for cmdeploycl (1m) and cmpreparestg (1m). See also Cluster-wide Device Special Files (cDSFs) (page 18).

Advantages of Easy Deployment

- Quick and simple way to create and start a cluster.
- Automates security and networking configuration that must always be done before you configure nodes into a cluster.
- Simplifies cluster lock configuration.
- Simplifies creation of shared storage for packages.

Limitations of Easy Deployment

- Does not install or verify Serviceguard software.
- Requires agile addressing for disks. See About Device Special Files (DSFs) (page 32). cmpreparestg (1m) will fail if cDSFs and persistent DSFs are mixed in a volume group. See Cluster-wide Device Special Files (cDSFs) (page 18) for more information about cDSFs.
- Does not configure access control policies.
- Does not install or configure firewall and related software.
- Does not support cross-subnet configurations.
- Does not configure packages.
- Does not discover or configure a quorum server (but can deploy one that is already configured).


- Does not support asymmetric network configurations (in which a given subnet is configured only on a subset of nodes).
- When cmdeploycl (1m) is used on a cluster where SGeRAC is installed, cmdeploycl (1m) provides a warning if the cluster being deployed does not meet SGeRAC-specific requirements.

For more information and instructions, see Using Easy Deployment in chapter 5 of Managing Serviceguard.

Halting a Node or the Cluster while Keeping Packages Running (Live Application Detach)
There may be circumstances in which you want to do maintenance that involves halting a node, or the entire cluster, without halting or failing over the affected packages. Such maintenance might consist of anything short of rebooting the node or nodes, but a likely case is networking changes that will disrupt the heartbeat.

New command options in Serviceguard A.11.20 (collectively known as Live Application Detach (LAD)) allow you to do this kind of maintenance while keeping the packages running. The packages are no longer monitored by Serviceguard, but the applications continue to run. Packages in this state are called detached packages.

IMPORTANT: This capability applies only to modular failover packages and modular multi-node packages. For more information, see Halting a Node or the Cluster while Keeping Packages Running in Chapter 7 of Managing Serviceguard.

When upgrading to future releases, you will be able to use LAD in conjunction with rolling upgrade, but you cannot use it when upgrading to A.11.20, because the capability is not available until all the nodes are running A.11.20.

When you have done the necessary maintenance, you can restart the node or cluster, and normal monitoring will resume on the packages.

What You Can Do

You can do the following; a command sketch follows this list.

- Halt a node (cmhaltnode (1m) with the -d option) without causing its running packages to halt or fail over. Until you restart the node (cmrunnode (1m)), these packages are detached: not being monitored by Serviceguard.
- Halt the cluster (cmhaltcl (1m) with the -d option) without causing its running packages to halt. Until you restart the cluster (cmruncl (1m)), these packages are detached: not being monitored by Serviceguard.
- Halt a detached package, including instances of detached multi-node packages.
- Restart normal package monitoring by restarting the node (cmrunnode) or the cluster (cmruncl).
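As an illustration (the node name is hypothetical; the -d option is described above), a maintenance session on one node might look like this:

    # Halt the node but leave its packages' applications running, detached
    cmhaltnode -d node1

    # ... perform the maintenance, for example networking changes ...

    # Rejoin the cluster; Serviceguard resumes monitoring the packages
    cmrunnode node1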

For more information, including important rules and restrictions, see Halting a Node or the Cluster while Keeping Packages Running in chapter 7 of Managing Serviceguard.

Cluster-wide Device Special Files (cDSFs)


Under agile addressing on HP-UX 11i v3, each device has a unique identifier as seen from a given host; this identifier is reflected in the name of the device special file (DSF). See About Device Special Files (DSFs) (page 32) for more information.

Because DSF names may be duplicated between one host and another, it is possible for different storage devices to have the same name on different nodes in a cluster, and for the same piece of storage to be addressed by different names.

The Serviceguard A.11.20 September 2010 patch (PHSS_41225) and later patches support cluster-wide device special files (cDSFs), which ensure that each storage device used by the cluster has a unique device file name. cDSFs are available on HP-UX as of the September 2010 Fusion release.

HP recommends that you use cDSFs for the storage devices in the cluster because this makes it simpler to deploy and maintain a cluster, and removes a potential source of configuration errors. Using cDSFs with Easy Deployment (page 16) further simplifies the configuration of storage for the cluster and packages. See Creating Cluster-wide Device Special Files (cDSFs) and Using Easy Deployment in Chapter 5 of Managing Serviceguard for instructions.

Points To Note

- cDSFs can be created for any group of nodes that you specify, provided that Serviceguard A.11.20 and the required patch are installed on each node. Normally, the group should comprise the entire cluster.
- cDSFs apply only to shared storage; they will not be generated for local storage, such as root, boot, and swap devices.
- Once you have created cDSFs for the cluster, HP-UX automatically creates new cDSFs when you add shared storage.

Where cDSFs Reside

cDSFs reside in two new HP-UX directories: /dev/cdisk for cluster-wide block device files and /dev/rcdisk for cluster-wide character device files. Persistent DSFs that are not cDSFs continue to reside in /dev/disk and /dev/rdisk, and legacy DSFs (DSFs using the naming convention that was standard before HP-UX 11i v3) in /dev/dsk and /dev/rdsk. It is possible that a storage device on an 11i v3 system could be addressed by DSFs of all three types, but if you are using cDSFs, you should ensure that you use them exclusively as far as possible.

NOTE: Software that assumes DSFs reside only in /dev/disk and /dev/rdisk will not find cDSFs and may not work properly as a result; as of the date of this document, this was true of the Veritas Volume Manager, VxVM.
Limitations of cDSFs

- cDSFs are supported only within a single cluster; you cannot define a cDSF group that crosses cluster boundaries.
- A node can belong to only one cDSF group.
- cDSFs are not supported by VxVM, CVM, CFS, or any other application that assumes DSFs reside only in /dev/disk and /dev/rdisk.
- Oracle ASM cannot detect cDSFs created after ASM is installed.
- cDSFs do not support disk partitions. Such partitions can be addressed by a device file using the agile addressing scheme, but not by a cDSF.

- cDSFs are not supported by Ignite-UX in a Serviceguard cluster environment, and recovery of such a configuration is not supported. If you require support for recovery archives in a Serviceguard environment, do not implement Ignite-UX with cDSFs.

LVM Commands and cDSFs

Some HP-UX commands have new options and behavior to support cDSFs, specifically:

- vgimport -C causes vgimport (1m) to use cDSFs

- vgscan -C causes vgscan (1m) to display cDSFs

See the manpages for more information. The following new HP-UX commands handle cDSFs specifically:

- vgcdsf (1m) converts all persistent DSFs in a volume group to cDSFs. Legacy DSFs in the volume group will not be converted, but you can use the HP-UX vgdsf script to convert these legacy DSFs to persistent DSFs if you need to. For more information on the vgdsf script, see the white paper LVM Migration from Legacy to Agile Naming Model at http://www.hp.com/go/hpux-core-docs. For more information on vgcdsf, see the manpage.
- io_cdsf_config (1m) displays information about cDSFs. See the manpage for more information.
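For example, converting an existing volume group's persistent DSFs to cDSFs might look like this sketch (the volume-group name is hypothetical):

    # Convert all persistent DSFs in the volume group to cDSFs;
    # legacy DSFs, if any, are left unchanged
    vgcdsf /dev/vgshared

    # Confirm the volume group now references /dev/cdisk device files
    vgdisplay -v /dev/vgshared | grep "PV Name"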

Checking the Cluster Configuration and Components


Serviceguard provides tools that allow you to check the soundness of the cluster configuration, and the health of its components. In past releases, much of this was done by cmcheckconf (1m) and/or cmapplyconf (1m), and could be done only when you were changing the configuration of the cluster or packages. As of Serviceguard A.11.20, these commands perform additional checks, and a new command, cmcompare (1m), allows you to compare the contents and characteristics of cluster-wide files to make sure they are consistent. In addition, you can check the configuration of the cluster and all of its packages at any time by running cmcheckconf (1m) without arguments (or with -v; see below). See also cmcheckconf (5). These checks help you to ensure that packages will start up and fail over successfully. The following capabilities are new as of Serviceguard A.11.20.


NOTE: All of the checks below are performed when you run cmcheckconf (1m) without any arguments (or with only -v, with or without -k or -K).

cmcheckconf (1m) validates the current cluster and package configuration, including external scripts and pre-scripts for modular packages, and runs cmcompare (1m) to check file consistency across nodes. (This new version of the command also performs all of the checks that were done in previous releases.) These new checks are not done for legacy packages. For information about legacy and modular packages, see chapter 6 of Managing Serviceguard.

• LVM volume groups:
  Check that each volume group contains the same physical volumes on each node.
  Check that each node has a working physical connection to the physical volumes.
  Check that volume groups used in modular packages are cluster-aware.
• LVM logical volumes:
  Check that file systems have been built on the logical volumes identified by the fs_name parameter in the cluster's packages.
• VxVM disk groups:
  Check that each disk group contains the same number of disks on each node.
  Check that each node has a working physical connection to the disks.
• File consistency:
  Check that files including the following are consistent across all nodes:
  /etc/hosts (must contain all IP addresses configured into the cluster)
  /etc/nsswitch.conf
  /etc/services
  package control scripts for legacy packages (if you specify them)
  /etc/cmcluster/cmclfiles2check
  /etc/cmcluster/cmignoretypes.conf
  /etc/cmcluster/cmknowncmds
  /etc/cmcluster/cmnotdisk.conf
  user-created files (if you specify them)
For more information, see Checking the Cluster Configuration and Components in chapter 7 of Managing Serviceguard.

Limitations

Serviceguard does not check the following conditions:
• Access Control Policies properly configured (see chapter 5 of Managing Serviceguard for information about Access Control Policies)
• File systems configured to mount automatically on boot (that is, Serviceguard does not check /etc/fstab)
• Shared volume groups configured to activate on boot
• Volume group major and minor numbers unique
• Redundant storage paths functioning properly

• Kernel parameters and driver configurations consistent across nodes
• Mount point overlaps (such that one file system is obscured when another is mounted)
• Unreachable DNS server
• Consistency of settings in .rhosts and /var/admin/inetd.sec
• Consistency of device-file major and minor numbers across the cluster
• Nested mount points
• Staleness of mirror copies

Cluster Verification and ccmon

The Cluster Consistency Monitor (ccmon) provides even more comprehensive verification capabilities than those described in this section. ccmon is a separate product, available for purchase; ask your HP Sales Representative for details.

NFS-mounted File Systems


As of Serviceguard A.11.20, you can use NFS-mounted (imported) file systems as shared storage in packages. The same package can mount more than one NFS-imported file system, and can use both cluster-local shared storage and NFS imports. The following rules and restrictions apply.

• NFS mounts are supported for modular, failover packages. See chapter 6 of Managing Serviceguard for a discussion of types of packages.
  With Serviceguard A.11.20 (April 2011 patch), it is now possible to create a multi-node package that uses an NFS file share. This is useful only if you want to create an HP Integrity Virtual Machine (HPVM) in a Serviceguard package, where the virtual machine itself uses a remote NFS share as backing store. For details on how to configure NFS as a backing store for HPVM, see the HP Integrity Virtual Machines 4.3: Installation, Configuration, and Administration guide at http://www.hp.com/go/virtualization-manuals > HP Integrity Virtual Machines and Online VM Migration.
• So that Serviceguard can ensure that all I/O from a node on which a package has failed is flushed before the package restarts on an adoptive node, all the network switches and routers between the NFS server and client must support a worst-case timeout, after which packets and frames are dropped. This timeout is known as the Maximum Bridge Transit Delay (MBTD).
  IMPORTANT: Find out the MBTD value for each affected router and switch from the vendors' documentation; determine all of the possible paths; find the worst-case sum of the MBTD values on these paths; and use the resulting value to set the Serviceguard CONFIGURED_IO_TIMEOUT_EXTENSION parameter. For instructions, see the discussion of this parameter under Cluster Configuration Parameters in chapter 4 of Managing Serviceguard.
  Switches and routers that do not support an MBTD value must not be used in a Serviceguard NFS configuration; they might deliver delayed packets that in turn could lead to data corruption.
• Networking among the Serviceguard nodes must be configured in such a way that a single failure in the network does not cause a package failure.


• Only NFS client-side locks (local locks) are supported. Server-side locks are not supported.
• Because exclusive activation is not available for NFS-imported file systems, you should take the following precautions to ensure that data is not accidentally overwritten.
  The server should be configured so that only the cluster nodes have access to the file system.
  The NFS file system used by a package must not be imported by any other system, including other nodes in the cluster. The only exception to this restriction is when you want to use the NFS file system as a backing store for HPVM. In this case, the NFS file system is configured as the filesystem type in a multi-node package and is imported on more than one node in the cluster.
  The nodes should not mount the file system on boot; it should be mounted only as part of the startup for the package that uses it.
  The same NFS file system should be used by only one package.
  While the package is running, the file system should be used exclusively by the package. If the package fails, do not attempt to restart it manually until you have verified that the file system has been unmounted properly.

In addition, you should observe the following guidelines.
• CacheFS and AutoFS should be disabled on all nodes configured to run a package that uses NFS mounts. For more information, see the NFS Services Administrator's Guide HP-UX 11i version 3 at http://www.hp.com/go/hpux-networking-docs.
• HP recommends that you avoid a single point of failure by ensuring that the NFS server is highly available.
  NOTE: If network connectivity to the NFS server is lost, the applications using the imported file system may hang and it may not be possible to kill them. If the package attempts to halt at this point, it may not halt successfully.
• Do not use the automounter; otherwise package startup may fail.
• If storage is directly connected to all the cluster nodes and shared, configure it as a local file system rather than using NFS.
• An NFS file system should not be mounted on more than one mount point at the same time.
• Access to an NFS file system used by a package should be restricted to the nodes that can run the package.

For more information, see the white paper Using NFS as a file system type with Serviceguard 11.20 on HP-UX 11i v3, which you can find at http://www.hp.com/go/hpux-serviceguard-docs. This paper includes instructions for setting up a sample package that uses an NFS-imported file system. See also the description of the new parameter fs_server, and of fs_type and the other filesystem-related package parameters, in chapter 6 of Managing Serviceguard.
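The excerpt below is a minimal sketch of the filesystem-related parameters in a modular failover package configuration file. The server name, export path, mount point, and mount options are placeholders, and the exact value formats for fs_name and fs_server are assumptions; check them against the white paper and the package-parameter descriptions in chapter 6 of Managing Serviceguard.

# Hypothetical NFS-imported file system in a modular failover package
fs_name       nfssrv:/exports/data    # remote export (server:path format is an assumption)
fs_server     nfssrv                  # NFS server exporting the file system
fs_directory  /app/data               # local mount point used by the package
fs_type       nfs                     # mount as an NFS-imported file system
fs_mount_opt  "-o llock"              # client-side (local) locking only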


NOTE: The addition of the fs_server package parameter alters the output of cmviewcl -f line; this in turn may affect your programs and scripts that parse cmviewcl -f line output.

The LVM and VxVM Volume Monitor


Simply monitoring each physical disk in a Serviceguard cluster does not provide adequate monitoring for volumes managed by Veritas Volume Manager from Symantec (VxVM), or logical volumes managed by HP-UX Logical Volume Manager (LVM), because a physical volume failure is not always a critical failure that triggers failover (for example, the failure of a mirrored volume is not considered critical). For this reason, it can be very difficult to determine which physical disks must be monitored to ensure that a logical volume is functioning properly. The HP Serviceguard Volume Monitor provides a means for effective and persistent monitoring of storage volumes. You can use this monitor as an alternative to using Event Monitoring Service (EMS) resource dependencies to monitor LVM storage. EMS does not currently provide a monitor for VxVM volumes.

IMPORTANT: The LVM monitoring capability (cmvolmond) for Serviceguard A.11.20 is supported from the September 2010 patch (PHSS_41225) and later. The VxVM monitoring capability (cmvxserviced) was first introduced in a patch to A.11.18, and is still supported. cmvolmond replaces cmvxserviced, combining the VxVM monitoring capabilities of cmvxserviced with the new capabilities needed to support LVM monitoring. Although cmvxserviced will still work in A.11.20, HP recommends you use cmvolmond instead.

For more information, see About the Volume Monitor in chapter 4 of Managing Serviceguard, at the address given under Documents for This Version (page 34).
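The volume monitor is typically run as a service in a package. The stanza below is a hypothetical sketch only: the service name, the cmvolmond path and arguments, and the logical-volume name are all placeholders, and the correct invocation should be taken from the cmvolmond (1m) manpage and the About the Volume Monitor discussion in Managing Serviceguard.

# Hypothetical service definition in a modular package configuration file;
# the cmvolmond path and arguments shown here are placeholders
service_name               volmon
service_cmd                "/usr/sbin/cmvolmond /dev/vg01/lvol1"
service_restart            none
service_fail_fast_enabled  no
service_halt_timeout       300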

Serviceguard Manager
HP Serviceguard Manager B.03.10 is a web-based HP System Management Homepage (HP SMH) tool that replaces the functionality of the earlier Serviceguard management tools. Serviceguard Manager allows you to monitor, administer, and configure a Serviceguard cluster from any system with a supported web browser. Serviceguard Manager does not require additional software installation. Instead, using your browser, you log into an HP Systems Management Homepage (SMH) and access the HP Serviceguard Manager tool, as well as other system management tools. The HP Serviceguard Manager Main Page provides you with a summary of the health of the cluster, including the status of each node and its packages.

DSAU Integration
HP Serviceguard Manager uses Distributed Systems Administration Utilities (DSAU) to display the consolidated cluster log (syslog) and consolidated package logs. You can find more information on DSAU in the Distributed Systems Administration Utilities User's Guide, at the address given under Documents for This Version (page 34).


NOTE: DSAU does not support a local log consolidation server in a cross-subnet cluster. Instead, you can set up a remote log consolidation server on the Quorum Server node or cluster.

Native Language Support


HP Serviceguard Manager Version B.03.10 is available in the following languages:
• Japanese
• Simplified Chinese
• Korean
• Traditional Chinese
• Standard French
• Standard German
• Standard Italian
• Standard Spanish

What You Can Do


Depending on your SMH and Serviceguard security privileges, you can do the following:
• Monitor, create, modify, run, and halt a cluster.
• Monitor, run, and halt nodes.
• Create and modify failover and multi-node packages, including configuring package dependencies. You can also modify Auto Run and Node Switching settings.
• Monitor, run, halt, and move failover, multi-node, and system multi-node packages.


New Features
HP Serviceguard Manager version B.03.10 supports Serviceguard A.11.20 on HP-UX 11i v3. The following are new capabilities in B.03.10:

Support for Site-aware Metrocluster Configuration (Site Aware Cluster + Site Controller package)
• Support for configuration of sites to form a site-aware cluster.
• Support for Site Controller package configuration.
• Support for moving the Site Controller package within a site and across sites.

Manual Site switching Enhancement If the failover policy is SITE_PREFERRED_MANUAL, an alert is displayed when a package fails and manual intervention is required to start the package in the same site or another site.

Support for Serviceguard Toolkit for Oracle Data Guard B.01.00 and the April 2011 Patch (PHSS_41640) or later Support for configuration, administration, and monitoring of modular packages with Serviceguard toolkits for Oracle Data Guard and ECMT Oracle/SGeRAC.

Support for Serviceguard Extension for Oracle E-business Suite Toolkit B.02.00 Configuration, Monitoring and Administration Support for the new Serviceguard Extension for Oracle E-business Suite (SGeEBS) Toolkit.

Support to open Toolkit README files Open the README file for any installed toolkit, from the Toolkits configuration page for more information on it and its parameters.

Single-click Access to Package Logs Launch a Package log window from the Operation log window for each of the packages involved in a particular Administration operation, to get the latest information on the progress of the commands as well as for better troubleshooting.

Support for SG-CFS-pkg (SMNP) Configuration Configure the SMNP package, SG-CFS-pkg, for CVM/CFS environments.

Run cDSF while adding new node Allows you to run the cDSF command while adding a new node to the cluster, so that cDSF names are uniform across the cluster.

Support for Co-existence of SGeRAC with ECMT Support for Oracle RAC (SGeRAC toolkit) packages to co-exist with Oracle Single instance (ECMT Oracle toolkit) packages in the same Serviceguard cluster.

Easy deployment of CFS Cluster (SG-CFS-pkg SMNP package) If the appropriate SG SMS bundle is found to be installed, you will have the option to deploy a CFS cluster. The SG-CFS-pkg (SMNP) will be automatically created.

SGeRAC Cluster Easy Deployment If the SGeRAC bundle is found to be installed during cluster configuration, the network configuration will be analyzed to determine if the cluster that is being deployed is a candidate SGeRAC cluster, then you will be informed of the results.


Cluster Verification for Metrocluster An enhancement to the existing cluster verification feature for Metrocluster specific checks.

Metrocluster Easy Deployment (SiteAware + NonSiteAware) If the appropriate Metrocluster, SMS, and SGeRAC bundles are found to be installed, you will be given options for configuring a site aware Metrocluster with minimal steps.

Package Easy Deployment Effortlessly deploy the following modular packages with minimal user input and automatic package dependency configuration: SGeRAC OC Toolkit package easy deployment SGeRAC RAC DB Toolkit package easy deployment ECMT Oracle Toolkit package easy deployment SGeEBS Toolkit package easy deployment Site Controller package easy deployment

For more information, see the white paper Using Easy Deployment in Serviceguard and Metrocluster environments at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard > White papers.

NFS Toolkit Enhancement A new parameter, SUPPORTED_NETIDS, is added in the Serviceguard Manager GUI for the HA NFS toolkit.

SGeRAC ASM disk group package Enhancements Displays actual values in the SGeRAC RAC database and ASM disk group packages property sheet. These parameters are removed from the configuration page display since the user need not enter values.

Modular CFS Package Integration (Multiple CVM Disk Groups and CFS Mount Points) You can merge multiple CVM Disk Group (DG) and CFS Mount Point (MP) packages that are needed for a single application

Current Limitations of Serviceguard Manager


You can only monitor a Continentalcluster, but cannot configure or administer it.

Browser and SMH Issues


Pages Launched into New Window rather than Tabs with Firefox Tabbed Browsing By design, the Serviceguard Manager logging windows are launched with a pre-specified window size so that they do not obscure the original window when they are displayed. With the Firefox tabbed browsing preference selected, if a new window has a specified size, the tabbed browsing preference is ignored and the target page is launched in a new window instead. See the following excerpt from the Firefox support site (http://support.mozilla.com/en-US/kb/Options+window+-+Tabs+panel): Note: If you have chosen to open pages in new tabs, Firefox will ignore this option/preference and will open a new window from a link if the page author specified that the new window should have a specific size, because some pages can only be displayed correctly at a specific size. Firefox 3.0 does not provide an API that Serviceguard Manager can use to detect or alter this behavior. There is no workaround for this problem at this time.

Internet Explorer 7 Zooming Problem Internet Explorer 7 introduced a "zoom" feature which allows the user to zoom in and out on the page being viewed. However, its implementation is known to have problems: HTML elements in a page are not zoomed proportionally, so the resulting page is displayed in an unpredictable manner. Internet Explorer 8 fixed most of these problems and behaves in the same way as Firefox 3.x.

HP System Management Homepage (HPSMH) Timeout Default In some situations (for example, when a remote cluster node is disconnected from the network), Serviceguard commands may take longer than 30 seconds to return. In this case, if the HPSMH 3.0 UI timeout value is left at its default value (20 seconds), Serviceguard Manager information will not be loaded correctly into the HPSMH. To ensure Serviceguard Manager has sufficient time to execute Serviceguard commands, increase the UI timeout value to 60 seconds (Settings -> Security -> Timeouts -> UI timeout).

Help Subsystem
Use this section to help you become familiar with Serviceguard Manager. Once Serviceguard Manager is running, move your mouse over a field on the read-only property pages to see a tooltip with a brief definition of that field. You can also access the online help by clicking the button located in the upper-right corner of the screen to view overview and procedure information. Start with the help topic Understanding the HP Serviceguard Manager Main Page. You should also read the help topic About Security, as it explains HP Serviceguard Manager Access Control Policies, as well as root privileges.

Before Using HP Serviceguard Manager: Setting Up


You must have, or have done, the following before you can start using HP Serviceguard Manager:
• At least one cluster member node with Serviceguard A.11.20 and Serviceguard Manager B.03.10 installed.
• Java JDK 5.0 (5.0.16 or later).
• Version B.5.5.23.02 or later of the hpuxswTOMCAT product. hpuxswTOMCAT is installed by default with HP-UX. To check that it is on your system, use a command such as:
  swlist -l fileset | grep TOMCAT
• Version A.3.0.1, or a higher version of A.3.0.<x>, of SMH (System Management Homepage), for HP-UX 11i v3.
• A web browser (Internet Explorer 6.0 or higher, or Firefox 2.0 or higher) with access to SMH.
• Have launched SMH (Settings -> Security -> User Groups) to configure user roles for SMH.
  A user with HP SMH Administrator access has full cluster management capabilities.
  A user with HP SMH Operator access can monitor the cluster and has restricted cluster management capabilities as defined by the user's Serviceguard role-based access configuration.
  A user with HP SMH User access does not have any cluster management capabilities.
  See the online help topic About Security for more information.
• Have created the security bootstrap file cmclnodelist. See Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual for instructions.


Launching Serviceguard Manager


See Appendix I of Managing Serviceguard, at the address listed under Documents for This Version (page 34), for instructions on launching Serviceguard Manager, and other useful information.

Patches and Fixes


No patches are required for Serviceguard Manager B.03.10. For information about known problems and workarounds, see Known Problems for Serviceguard Manager (page 54).

Features Introduced Before A.11.20


The following subsections discuss important features that were added to Serviceguard in recent releases and patches. All of these features have been carried forward into A.11.20.

Features First Introduced in Serviceguard A.11.19 Patches


Listed below are features originally introduced in patches to Serviceguard version A.11.19.
• Support for CFS, VxVM, and CVM version 5.0.1 on HP-UX 11i v3 only.
• Improved support for IPv6, including support for IPv6-only hosts. See the section About Hostname Address Families: IPv4-Only, IPv6-Only, and Mixed Mode in chapter 4 of Managing Serviceguard, at the address given under Documents for This Version (page 34).
• Improved support for online package maintenance. See Maintaining a Package: Maintenance Mode in chapter 7 of Managing Serviceguard.
• Support for Dynamic Root Disk (DRD) on HP-UX 11i v3. See Upgrade Using DRD (page 44).
• New Serviceguard Manager capabilities; see Serviceguard Manager (page 24).
• New -K option for cmgetconf. This causes cmgetconf to skip probing of volume groups, allowing the command to complete faster; the resulting cluster configuration file will not contain a list of cluster-aware volume groups. See cmgetconf (1m) for more information.
• Support for Shared LVM (SLVM) in an HPVM environment only, that is, when the cluster includes virtual machines (either as nodes or within packages) that are managed by HPVM. For more information, see HP Integrity Virtual Machines Installation, Configuration, and Administration at http://www.hp.com/go/hpux-hpvm-docs. For more information about using HPVM in a Serviceguard cluster, see Support for HP Integrity Virtual Machines (HPVM) (page 33).
• Improved support for nested mount points. Serviceguard now ensures that a package will never attempt to mount a nested directory concurrently with the parent, no matter what concurrent_mount_and_umount_operations is set to. (In earlier releases, this could happen if concurrent_mount_and_umount_operations was set to a value greater than 1.)

See also:
• About LVM 2.x (page 30)
• Serviceguard on HP-UX 11i v3 (page 14)


Features First Introduced Before Serviceguard A.11.19


About olrad
You must remove a LAN or VLAN interface from the cluster configuration before removing it from the system. You can do this without bringing down the cluster. HP-UX 11i v3 provides a new option for the olrad command, olrad -C, to help you determine whether or not an interface is part of the cluster configuration: run olrad -C with the affected I/O slot ID as argument. If the NIC is part of the cluster configuration, you'll see a warning message telling you to remove it from the configuration before you proceed. See the olrad(1M) manpage for more information about olrad. After removing the NIC from the cluster configuration, you can remove it from an HP-UX 11i v3 cluster node without shutting down the system by running olrad -d. See Removing a LAN or VLAN Interface from a Node in Chapter 7 of Managing Serviceguard for more information.
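A short sketch of the sequence described above (the slot ID is a placeholder; use the ID reported for your interface):

# Check whether the NIC in I/O slot 0-0-1-0 is part of the cluster configuration
olrad -C 0-0-1-0

# After removing the NIC from the cluster configuration,
# delete the card online without shutting down the node
olrad -d 0-0-1-0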

About vgchange -T
Serviceguard supports vgchange -T, which allows multi-threaded activation of volume groups on HP-UX 11i v3 systems. This means that when the volume group is activated, physical volumes (disks or LUNs) are attached to the volume group in parallel, and mirror copies of logical volumes are synchronized in parallel, rather than serially. That can improve a package's startup performance if its volume groups contain a large number of physical volumes. To enable vgchange -T for all of a package's volume groups, set enable_threaded_vgchange to 1 in the package configuration file (the default is 0, meaning that multi-threaded activation is disabled). Note that, in the context of a Serviceguard package, this affects the way physical volumes are activated within a volume group; another package parameter, concurrent_vgchange_operations, controls how many volume groups the package can activate simultaneously.

IMPORTANT: Make sure you read the configuration file comments for both concurrent_vgchange_operations and enable_threaded_vgchange before configuring these options, as well as the vgchange (1m) manpage.
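For illustration, the relevant lines in a modular package configuration file might look like the following; the value of concurrent_vgchange_operations is an arbitrary example.

# Activate each volume group's physical volumes in parallel (vgchange -T)
enable_threaded_vgchange         1
# Allow the package to activate up to two volume groups at the same time
concurrent_vgchange_operations   2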

About the Alternate Quorum Server Subnet


Serviceguard A.11.20 allows you to configure an alternate subnet for communication between the cluster nodes and the Quorum Server. You can do this from the command line or in Serviceguard Manager. For details and instructions, see the HP Serviceguard Quorum Server Version A.04.00 Release Notes at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard Quorum Server Software.

IMPORTANT: This capability requires Quorum Server Version A.04.00. (It was also provided in a patch to Serviceguard A.11.18 with Quorum Server Version A.03.00, but Quorum Server Version A.03.00 does not support an alternate subnet with Serviceguard A.11.20 or A.11.19.) See Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9).

About LVM 2.x


Logical Volume Manager (LVM) 2.x volume groups, which remove some of the limitations imposed by LVM 1.0 volume groups, can be used on systems running HP-UX 11i v3 0803 Fusion or later


with LVM version B.11.31.0809 and Serviceguard A.11.19 or later. (This support was first introduced in a patch to Serviceguard A.11.18.)

NOTE: You are not required to move to LVM 2.x volume groups, and everything will work as before if you do nothing.

If you do use LVM 2.x volume groups, you can still manage them with the same commands as before, although you may have to make minor changes to any scripts you use that parse the output of lvdisplay, vgdisplay, pvdisplay, or vgscan, as the output of these commands has changed slightly. In addition, new options are available for some commands. For more information, see the white paper LVM 2.0 Volume Groups in HP-UX 11i v3 at http://www.hp.com/go/hpux-core-docs. For information about all other aspects of LVM on HP-UX 11i v3, see the Logical Volume Management volume (volume 3) of the HP-UX System Administrator's Guide, at the same address.
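As a hedged illustration of one of the new command options, an LVM 2.x volume group is created by passing a version to vgcreate. The device name, extent size, and maximum volume-group size below are placeholders, and the exact option set should be confirmed against the vgcreate (1m) manpage and the white paper.

# Create a version 2.0 volume group (names and sizes are illustrative)
vgcreate -V 2.0 -s 256 -S 32t /dev/vg02 /dev/disk/disk10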

About cmappmgr
cmappmgr is a utility that allows you to launch and monitor processes on HP Integrity Virtual Machine (HPVM) guest nodes. For information about HPVM, see Support for HP Integrity Virtual Machines (HPVM) (page 33). cmappmgr is operating-system-independent, supporting HP-UX, Linux, and Windows VMs. cmappmgr on the host communicates via SSL connections with a lightweight module on the VM guest, cmappserver. cmappmgr exits when the process that is being monitored does. It can be run as a service in a Serviceguard package, or invoked from an external script in a modular package or from a run and halt script in a legacy package. (See Chapter 6 of Managing Serviceguard for information about modular and legacy packages.) cmappmgr is packaged as a Serviceguard command. cmappserver is packaged as a depot, rpm, or exe (for HP-UX, Linux, or Windows respectively) which can be copied from the host to a VM guest and installed there. For more information see the white paper Designing High Availability Solutions with HP Serviceguard and HP Integrity Virtual Machines, which you can find at the address given under Documents for This Version (page 34).

HPVM 4.1 Support for Windows 2008 Guests

HPVM 4.1 supports Windows 2008 guests, and you can use cmappmgr to monitor processes on these guests. To enable this capability you need to do the following:
• Install the HPVM 4.1 July patches. See the latest version of the HP Integrity Virtual Machines Version 4.1 Release Notes, which you can find at http://www.hp.com/go/hpux-hpvm-docs.
• Install the Windows 2008 guest operating system on the host.
• Edit C:\Program Files\Hewlett-Packard\cmappserver\conf\wrapper.conf on the Windows 2008 guest to insert the following line:
  wrapper.java.additional.1=-Dos.name="Windows 2003"
  This should go after the following lines:
#Java Additional Parameters #wrapper.java.additional.1=-Dprogram.name=C:\Program Files\Hewlett-Packard\cmappserver\cmappserver.bat


About Device Special Files (DSFs)


HP-UX releases up to and including 11i v2 use a naming convention for device files that encodes their hardware path. For example, a device file named /dev/dsk/c3t15d0 would indicate SCSI controller instance 3, SCSI target 15, and SCSI LUN 0. HP-UX 11i v3 introduces a new nomenclature for device files, known as agile addressing (sometimes also called persistent LUN binding). Under the agile addressing convention, the hardware path name is no longer encoded in a storage device's name; instead, each device file name reflects a unique instance number, for example /dev/[r]disk/disk3, that does not need to change when the hardware path does.

Agile addressing is the default on new 11i v3 installations, but the I/O subsystem still recognizes pre-11i v3 device files, which as of 11i v3 are referred to as legacy device files. Device files using the new nomenclature are called persistent device files. When you upgrade to HP-UX 11i v3, a set of new, persistent device files is created, but the existing, legacy device files are left intact and by default will continue to be used by HP-UX and Serviceguard. This means that you are not required to migrate to agile addressing when you upgrade to 11i v3, though you should seriously consider its advantages (see the white paper The Next Generation Mass Storage Stack, which you can find at http://www.hp.com/go/hpux-core-docs > Overview: The Next Generation Mass Storage Stack). Migration involves modifying system and application configuration files and scripts to use persistent device files, and in some cases new commands and options; the process is described in the white paper LVM Migration from Legacy to Agile Naming Model HP-UX 11i v3 at http://www.hp.com/go/hpux-core-docs.

If you cold-install HP-UX 11i v3, sets of both legacy and persistent device files are automatically created. In this case, by default the installation process will configure system devices such as the boot, root, swap, and dump devices to use persistent device files. This means that system configuration files such as /etc/fstab and /etc/lvmtab will contain references to persistent device files, but Serviceguard's functioning will not be affected by this.

CAUTION: You cannot migrate to the agile addressing scheme during a rolling upgrade if you are using cluster lock disks as a tie-breaker, because that involves changing the cluster configuration. But you can migrate the cluster lock device file names to the new scheme without bringing the cluster down. For the requirements and a procedure, see Updating the Cluster Lock Configuration in Chapter 7 of Managing Serviceguard.

NOTE: It is possible, though not a best practice, to use legacy DSFs on some nodes after migrating to agile addressing on others; this allows you to migrate different nodes at different times, if necessary.

For more information about agile addressing, see the following documents at http://www.hp.com/go/hpux-core-docs:
• the Logical Volume Management volume of the HP-UX System Administrator's Guide
• the HP-UX 11i v3 Installation and Update Guide
• the following white papers:
  The Next Generation Mass Storage Stack
  HP-UX 11i v3 Native Multi-Pathing for Mass Storage
  LVM Migration from Legacy to Agile Naming Model HP-UX 11i v3

See also the HP-UX 11i v3 intro(7) manpage.
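To see how a persistent device file maps to its legacy equivalents on a given node, you can use ioscan; the disk name below is illustrative.

# Show the legacy DSFs that correspond to the persistent DSF /dev/rdisk/disk3
ioscan -m dsf /dev/rdisk/disk3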


Support for HP Integrity Virtual Machines (HPVM)


Serviceguard supports HP Integrity Virtual Machines (HPVM). HPVM runs only on HP Integrity systems; it does not run on HP 9000 systems.

IMPORTANT: For the most up-to-date compatibility information, see the Serviceguard/SGeRAC/SMS/Serviceguard Mgr Plug-in Compatibility and Feature Matrix, at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard, under the heading General reference. See also the Integrity VM/Serviceguard Support Matrix in the white paper Designing high-availability solutions with HP Serviceguard and HP Integrity Virtual Machines on the same web page under White papers.

Serviceguard A.11.20 supports an HPVM either as a package or as a cluster node. If any Serviceguard cluster node is a virtual machine, the amount of time Serviceguard needs to wait for a failed node's I/O to complete increases; see About HPVM and Cluster Re-formation Time (page 33). See also About cmappmgr (page 31).

About HPVM and Cluster Re-formation Time

When a node fails and the cluster re-forms, Serviceguard must wait a certain amount of time to allow I/O from the failed node to be written out to the target storage device. Only after that time has elapsed can Serviceguard allow an adoptive node access to that device; otherwise data corruption could occur. The amount of time Serviceguard waits is calculated by Serviceguard and is not user-configurable.

The above is true whether or not the cluster includes virtual machines (VMs), but using VMs as Serviceguard nodes increases the amount of time Serviceguard needs to wait before it is safe to allow another node access to the same storage. This additional wait can increase cluster re-formation time by as much as 70 seconds. The additional time Serviceguard needs to wait depends in part on whether or not a VM guest depot is installed on the VM node. (See HP Integrity Virtual Machines Installation, Configuration, and Administration, at the address given below, for information on installing a guest depot.) Serviceguard uses information it derives from the VM guest depot to set the timeout to the optimal value. If any VM node does not have a VM guest depot, Serviceguard may not be able to obtain the information it needs to set the optimal timeout, and in that case it sets the additional timeout to the maximum value, 70 seconds.

IMPORTANT: This additional timeout extension represents a net addition to the time it takes for the cluster to re-form. For example, if the cluster typically took 40 seconds to re-form before any VM nodes were added, it will take about 80 seconds when one or more VM nodes are members of the cluster, if all those nodes have a VM guest depot. If any VM node without a VM guest depot is a member of the cluster, it will take about 110 seconds. This is true whenever VM nodes are cluster members, whether or not the re-formation is caused by the failure of a VM node.

For more information about HP Integrity Virtual Machines, see HP Integrity Virtual Machines Installation, Configuration, and Administration at http://www.hp.com/go/hpux-hpvm-docs.

Access changes as of A.11.16


Serviceguard version A.11.16 introduced a new access method. As of A.11.16, Serviceguard uses Access Control Policies, also known as Role-Based Access, rather than cmclnodelist or .rhosts, to authenticate users. For more information about Access Control Policies, see Chapter 5 of the Managing Serviceguard manual, the Serviceguard Manager help, and the cluster and package configuration files themselves.


Considerations when Upgrading Serviceguard

.rhosts If you relied on .rhosts for access in the previous version of the cluster, you must now configure Access Control Policies for the cluster users. For instructions on how to proceed, see the subsection Allowing Root Access to an Unconfigured Node under Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual.

cmclnodelist When you upgrade from an earlier version, Serviceguard converts cmclnodelist entries into new entries written into the cluster configuration file during the upgrade, as follows:

USER_NAME <user_name>
USER_HOST <host_node>
USER_ROLE Monitor

A wildcard + (plus) is converted as follows:

USER_NAME ANY_USER
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE Monitor

After you complete the upgrade, use cmgetconf to create and save a copy of the new configuration, so that if you later run cmapplyconf, you can be sure it applies the newly migrated Access Control Policies.

Considerations when Installing Serviceguard

When you install Serviceguard for the first time on a node, the node is not yet part of a cluster, and so there is no Access Control Policy. For instructions on how to proceed, see the subsection Allowing Root Access to an Unconfigured Node under Configuring Root-Level Access in Chapter 5 of the Managing Serviceguard manual.

Documents for This Version


For information about the current version of Serviceguard, and about older versions, see the Serviceguard documents posted at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard. The following documents, which can all be found at http://www.hp.com/go/hpux-serviceguard-docs, are particularly useful.
• Managing Serviceguard Nineteenth Edition. This manual has been revised for the current A.11.20 April 2011 Patch release.
• HP Serviceguard Quorum Server Version A.04.00 Release Notes
• Serviceguard Extension for RAC Version A.11.20 Release Notes
• Using Serviceguard Extension for RAC
• Understanding and Designing Serviceguard Disaster Tolerant Architectures
• Designing Disaster Tolerant HA Clusters Using Metrocluster and Continentalclusters
• Enterprise Cluster Master Toolkit Version Release Notes
• Serviceguard/SGeRAC/SMS/Serviceguard Mgr Plug-in Compatibility and Feature Matrix
• Securing Serviceguard and other Serviceguard white papers

For information on the Distributed Systems Administration Utilities (DSAU), see the latest version of the Distributed Systems Administration Utilities Release Notes and the Distributed Systems
Administration Utilities User's Guide at http://www.hp.com/go/hpux-core-docs: go to the HP-UX 11i v3 collection and scroll down to Getting started. For information about the Event Monitoring Service, see the following documents at http://www.hp.com/go/hpux-ha-monitoring-docs > HP Event Monitoring Service:
• Using the Event Monitoring Service
• Using High Availability Monitors

The Event Monitoring Service (EMS) and the Event Monitoring Service (EMS) Developer's Kit are available for download at http://www.hp.com/go/softwaredepot -> High Availability. Other relevant HP-UX documentation includes:
• HP-UX System Administrator's Guide at http://www.hp.com/go/hpux-core-docs. This multi-volume manual replaces Managing Systems and Workgroups as of HP-UX 11i v3. For information about the organization of the set, see the Preface to the Overview volume.
• The latest HP Auto Port Aggregation Release Notes and other APA documentation at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.
• The latest version of the HP-UX VLAN Administrator's Guide and other VLAN documentation at http://www.hp.com/go/hpux-networking-docs > HP-UX 11i v3 Networking Software.

Further Information
Additional information about Serviceguard and related high availability topics can be found at:
• http://www.hp.com/go/softwaredepot -> High Availability
• Online versions of user's guides and white papers: http://www.hp.com/go/hpux-serviceguard-docs
• Support tools and information from the Hewlett-Packard IT Resource Centers:
  http://us-support.external.hp.com (Americas and Asia Pacific)
  http://europe-support.external.hp.com (Europe)

Compatibility Information and Installation Requirements


Read this entire document (and any other Release Notes or READMEs for related products you may have) before you begin an installation.

Compatibility
For complete compatibility information see the Serviceguard/SGeRAC/SMS/Serviceguard Manager Plug-in Compatibility and Feature Matrix posted at http://www.hp.com/go/hpux-serviceguard-docs.

Mixed Clusters
Mixed cluster has several meanings in the context of Serviceguard. The following are support statements for various types of mixed cluster.

Mixed Serviceguard Versions You cannot mix Serviceguard versions in the same cluster; all nodes must be running the same version of Serviceguard. The sole exception to this rule is a rolling upgrade, during which Serviceguard versions can be mixed temporarily, but no cluster configuration changes are allowed. See Upgrading from an Earlier Serviceguard Release (page 43), and Appendix D of Managing Serviceguard, at the address given under Documents for This Version (page 34).

Mixed Hardware Architecture As of HP-UX 11i v2 Update 2 (0409) and Serviceguard A.11.16, Serviceguard supports mixed-hardware-architecture clusters, consisting of HP 9000 and Integrity servers. Mixed-hardware-architecture clusters support the same volume managers, at the same version level, as Serviceguard clusters in which the server hardware is of a single type. The following restrictions apply.

Except during a rolling upgrade, all nodes must be running:
• The same HP-UX version.
  NOTE: HP-UX version in this context means major release, such as 11i v2. It is acceptable to have a mix of different HP-UX Fusion releases for the same major revision (for example, 11i v2 September 2004 and 11i v2 September 2006), although it is generally best to have all nodes running the same Fusion release. A mix of HP-UX 11i v2 and 11i v3 nodes is also allowed, but entails some restrictions; see Mixed HP-UX Operating-System Revisions (page 36). Keep in mind that Serviceguard A.11.20 is supported only on HP-UX 11i v3.
• The same version of Serviceguard.
• The same version of any volume manager or file system that is independent of HP-UX.
• The same patch level for LVM and SLVM.

In addition, HP strongly recommends that all nodes be running:
• The same patch level for HP-UX, Serviceguard, and volume managers and related subsystems (for example Veritas VxVM and VxFS).

Also note:
• All applications running in the cluster must adhere to the vendors' requirements for mixed Integrity and HP 9000 environments.
• The cluster cannot use Oracle RAC. (SGeRAC is not supported in a mixed-hardware-architecture cluster because Oracle RAC does not support mixing hardware architectures within a single RAC cluster.)

For more information about mixed-hardware-architecture clusters, see Configuration Rules for a Mixed HP 9000/Integrity Serviceguard Cluster at http://www.hp.com/go/hpux-serviceguard-docs.

Mixed HP-UX Operating-System Revisions

A Serviceguard A.11.19 cluster can contain a mix of nodes running HP-UX 11i v2 and 11i v3, with certain restrictions. You may want to take advantage of this fact when preparing to upgrade the cluster to A.11.20; see in particular Rules and Restrictions for Clusters in Transition (page 37) and Rules and Restrictions for Heterogeneous Clusters (page 38).

IMPORTANT: Serviceguard A.11.20 is supported only on HP-UX 11i v3.

For the purposes of this discussion we'll identify three broad cases: homogeneous clusters, clusters in transition, and heterogeneous clusters.


NOTE: In all three cases, the discussion refers to mixing HP-UX versions 11i v2 and 11i v3. Unless explicitly stated otherwise, version in this subsection means HP-UX version, not Serviceguard version.
• Homogeneous cluster refers to a cluster that is not being upgraded, and which has no need to include nodes running different major versions of HP-UX. See HP Recommendation for Homogeneous Clusters (page 37).
• Cluster in transition refers to a cluster whose HP-UX version is being upgraded as part of a normal rolling upgrade that occurs over a relatively short period. See Rules and Restrictions for Clusters in Transition (page 37).
• Heterogeneous cluster refers to a cluster that contains a mix of nodes running HP-UX 11i v2 and 11i v3. See Rules and Restrictions for Heterogeneous Clusters (page 38). Such a cluster is either being upgraded from HP-UX 11i v2 to 11i v3 over an extended period, or for some other reason needs to accommodate nodes running both versions of HP-UX. For example, if you are running Serviceguard A.11.19 on HP-UX 11i v2, and are planning to upgrade to Serviceguard A.11.20, which runs only on HP-UX 11i v3, you may decide to upgrade the nodes to HP-UX 11i v3 over a period of time, and then upgrade the cluster to Serviceguard A.11.20.
HP Recommendation for Homogeneous Clusters

All nodes should be running the same major HP-UX version. Major version means a release such as 11i v2 or 11i v3. It is acceptable to have a mix of different HP-UX Fusion releases for the same major revision (for example, 11i v2 September 2004 and 11i v2 September 2006), although it is generally best to have all nodes running the same Fusion release at the same patch level. Serviceguard A.11.20 is supported only on HP-UX 11i v3.
Rules and Restrictions for Clusters in Transition

You can upgrade a cluster from HP-UX 11i v2 to 11i v3 as part of a rolling upgrade. A rolling upgrade includes upgrading to a new version of Serviceguard. Rules, guidelines, and restrictions for rolling upgrades are in Appendix D of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34). See also Upgrading from an Earlier Serviceguard Release (page 43).

It is also possible to upgrade the HP-UX version without upgrading Serviceguard, and so avoid the rolling upgrade restrictions. In this case you must make sure that:
• The Serviceguard version bundled with the HP-UX Operating Environment (OE) matches the version already installed.
  CAUTION: You need to pay careful attention to the Serviceguard patch level as well as the Serviceguard version. If you install Serviceguard at a patch level lower than the one the cluster was running, any new features introduced in the higher-level patch will cease to be available; you will need to re-install the higher-level patch on all nodes before you can use its features again.
• The Serviceguard product bundled with the HP-UX OE is installed along with the HP-UX filesets. (The Serviceguard binary files differ between HP-UX 11i v2 and 11i v3, even though the Serviceguard version is the same.)


NOTE: You can apply Serviceguard patches without doing a rolling upgrade.

Rules and Restrictions for Heterogeneous Clusters

IMPORTANT: Serviceguard A.11.20 is supported only on HP-UX 11i v3, so all cluster nodes must be running 11i v3 before you can complete a rolling upgrade to A.11.20. In preparation for an upgrade to Serviceguard A.11.20, you may want to upgrade the nodes from 11i v2 to 11i v3 over time. The rules that follow provide guidance.

A Serviceguard A.11.19 cluster can accommodate a mix of nodes running HP-UX 11i v2 and 11i v3, with the following restrictions.
• Each node must be running either HP-UX 11i v2 or 11i v3. No other operating system, or operating-system version (such as 11i v1), is permitted.
• Except during a rolling upgrade of Serviceguard (see Rules and Restrictions for Clusters in Transition (page 37)), all the nodes must be running Serviceguard A.11.19.
  IMPORTANT: This means, for example, that before you can add a node running HP-UX 11i v3 to a cluster in which the existing nodes are running 11i v2, all the 11i v2 nodes, and the new 11i v3 node, must have Serviceguard A.11.19 installed. See also Recommendations (page 38).
• Some HP-UX 11i v3 features must not be used as long as any nodes are running 11i v2. These features specifically must not be used:
  LVM 2.x volume groups.
  Agile addressing (see About Device Special Files (DSFs) (page 32) for more information about agile addressing).

NOTE: Native multipathing, which is enabled by default on HP-UX 11i v3, can be used on 11i v3 nodes in a heterogeneous cluster.
• The cluster must not be using Oracle RAC. (SGeRAC is not supported in a heterogeneous cluster because Oracle RAC does not support mixing HP-UX versions within a single RAC cluster.)
• The cluster must not be using Veritas CVM or CFS.
• If you are updating nodes from HP-UX 11i v2 to 11i v3, you must use update-ux; cold install is not supported in this context.
• If you are updating nodes from HP-UX 11i v2 to 11i v3, and the Serviceguard version bundled with the HP-UX Operating Environment (OE) is later than the one already installed, rolling upgrade restrictions apply (see Rules and Restrictions for Clusters in Transition (page 37)).

Recommendations

In addition to the above Rules and Restrictions for Heterogeneous Clusters, HP strongly recommends the following:
• All the nodes on a given HP-UX version should be running the same Fusion release, at the same patch level; that is, the 11i v2 nodes should all be running the same 11i v2 Fusion


release at the same patch level, and the 11i v3 nodes should all be running the same 11i v3 Fusion release at the same patch level. Keep in mind that Serviceguard A.11.20 is supported only on HP-UX 11i v3.
• All nodes should be at the same Serviceguard patch level.
  CAUTION: If you introduce a node running a lower patch level than that of the existing nodes, any new functionality introduced in the higher-level patch will cease to be available until that higher-level patch is installed on all nodes.
• All nodes should be running the same patch levels for other products used by the cluster.

Compatibility with Storage Devices


For the matrix of currently supported storage devices and volume managers, see http://h71028.www7.hp.com/enterprise/downloads/External-SG-Storage6.pdf.

Bastille Compatibility
To ensure compatibility between Serviceguard (and Serviceguard Manager) and Bastille, do the following, depending on your environment. The files (host.config, for example) are under /etc/opt/sec_mgmt/bastille/defaults/configs/.

If Bastille is started using Sec10Host (host.config) level lock down:
• Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
• If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".

If Bastille is started using Sec20MngDMZ (mandmz.config) level lock down:
• Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
• If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
• If you are using the Serviceguard WBEM Provider, set IPFilter.block_wbem="N" (default).
• If you are using Serviceguard IP Monitoring, set IPFilter.block_ping="N" (default).

If Bastille is started using SIM.config:
• Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
• If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".

If Bastille is started using Sec30DMZ (dmz.config) level lock down:
• Change SecureInetd.deactivate_ident="Y" to SecureInetd.deactivate_ident="N".
• If you are using the Serviceguard SNMP subagent, set MiscellaneousDaemons.snmpd="N".
• If you are using the Serviceguard WBEM Provider, set IPFilter.block_wbem="N" (default).
• If you are using Serviceguard IP Monitoring, set IPFilter.block_ping="N" (default).
• Add the following rules to ipf.customrules:
  pass in quick proto tcp from any to any port = 2301
  pass in quick proto tcp from any to any port = 2381
  pass in quick from <clusternodes> to any
  pass out quick from any to <clusternodes>
  In the above rules, <clusternodes> are all nodes in the cluster, including the local node. The ipf.customrules file is located under the Bastille directory itself.

IPFilter-Serviceguard rules are documented in the latest HP-UX IPFilter Administrator's Guide, which you can find under http://www.hp.com/go/hpux-security-docs > HP-UX IPFilter Software.


For information on how to configure HP-UX Bastille Sec10Host to allow the identd daemon to run, see the latest version of the Security Management volume of the HP-UX System Administrator's Guide under http://www.hp.com/go/hpux-core-docs. See also the HP-UX Bastille User's Guide installed on your system: /opt/sec_mgmt/bastille/docs/user_guide.txt.

Before Installing Serviceguard A.11.20


Before you install Serviceguard A.11.20, you need to make sure that your cluster has the correct hardware upgrades. If you are upgrading older systems, make sure your HP representative reviews the firmware levels of SCSI controller cards and installs the latest versions.

Memory Requirements
Serviceguard needs approximately 15.5 MB of lockable memory on each cluster node.

NOTE: Remember to tune the swap space and the HP-UX kernel parameters nfile, maxfiles, and maxfiles_lim to ensure that they are set high enough for the number of packages you are configuring.

Port Requirements
Serviceguard uses the ports listed below. Before installing, check /etc/services and be sure no other program has reserved these ports.

discard       9/udp
snmp          161/udp
snmp          162/udp
clvm-cfg      1476/tcp
hacl-qs       1238/tcp
hacl-qs       1238/udp
hacl-monitor  3542/tcp
hacl-monitor  3542/udp
hacl-hb       5300/tcp
hacl-hb       5300/udp
hacl-gs       5301/tcp
hacl-gs       5301/udp
hacl-cfg      5302/tcp
hacl-cfg      5302/udp
hacl-probe    5303/tcp
hacl-probe    5303/udp
hacl-local    5304/tcp
hacl-local    5304/udp
hacl-test     5305/tcp
hacl-dlm      5408/tcp
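One quick way to spot a conflict is to search /etc/services for the Serviceguard-specific port numbers (the well-known discard and snmp ports are omitted here); any entry using one of these ports under a different service name than shown above indicates a conflict.

# List /etc/services entries that use the Serviceguard port numbers
grep -E '(1238|1476|3542|5300|5301|5302|5303|5304|5305|5408)/(tcp|udp)' /etc/services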

Serviceguard also uses port 9/udp discard during network probing setup when running configuration commands such as cmcheckconf, cmapplyconf, and cmquerycl. If the port


is disabled (in inetd.conf), the network probing may be slower, and under some conditions error messages may be written to syslog. Serviceguard also uses dynamic ports (typically in the range 49152-65535) for some cluster services. If you have adjusted the dynamic port range using ndd(1M) network tuning, alter your rules accordingly.
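To check the current dynamic port range on a node, you can query the TCP anonymous-port tunables (UDP has equivalent tunables):

# Display the lower and upper bounds of the dynamic (anonymous) TCP port range
ndd -get /dev/tcp tcp_smallest_anon_port
ndd -get /dev/tcp tcp_largest_anon_port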

Ports Required by Serviceguard Manager


If you will be using Serviceguard Manager via HPSMH, make sure the following ports are open in addition to the ports listed above:

compaq-https  2381/tcp
compaq-https  2381/udp
cpq-wbem      2301/tcp
cpq-wbem      2301/udp

In addition, if you are using DSAU consolidated logging and decide to use the TCP transport, HP recommends you use TCP port 1775. This port is configurable; if port 1775 is already being used by another application, configure and open another free port when you configure the firewall.

System Firewalls
When using a system firewall such as HP-UX IPFilter with Serviceguard, you must leave open the ports listed above and follow specific IPFilter rules required by Serviceguard; these are documented in the latest version of the HP-UX IPFilter Administration Guide, available from http://www.hp.com/go/hpux-security-docs > HP-UX IPFilter Software.

Installing Serviceguard on HP-UX


Dependencies
The following are required by Serviceguard. They are part of the HP-UX Base Operating Environment:
• OpenSSL, which includes the OPENSSL-RUN and OPENSSL-LIB filesets.
• The EventMonitoring bundle, which contains the EMS-CORE and EMS-CORE-COM filesets.

Installing Serviceguard
Serviceguard will automatically be installed when you install an HP-UX Operating Environment that includes it (HAOE or DCOE). To install Serviceguard independently, follow these broad steps:


CAUTION: This release of Serviceguard requires 11i v3. If you are already running earlier versions of Serviceguard and HP-UX, see Upgrading from an Earlier Serviceguard Release (page 43) for more information. If you intend to use an alternate Quorum Server subnet (see About the Alternate Quorum Server Subnet (page 30)), and the new cluster will use an existing Quorum Server, make sure you are running Quorum Server version A.04.00. If not, you must upgrade the Quorum Server before you proceed; see Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9).

1. Install or upgrade to HP-UX 11i v3 before loading Serviceguard Version A.11.20. For information and instructions, see the HP-UX Installation and Update Guide for the target release at http://www.hp.com/go/hpux-core-docs.
2. Use the swinstall command to install Serviceguard, product number T1905CA. For more information about swinstall, see the swinstall(1M) manpage and the Software Distributor Administration Guide for HP-UX 11i v2 or 11i v3.
3. Verify the installation. Use the following command to display a list of all installed components:
   swlist -R T1905CA

The filesets that make up the Serviceguard product are:
Serviceguard.CM-SG
SGManagerPI.SGMGRPI
SGWBEMProviders.SGPROV-CORE
SGWBEMProviders.SGPROV-DOC
SGWBEMProviders.SGPROV-MOF
CM-Provider-MOF.CM-MOF
CM-Provider-MOF.CM-PROVIDER
Cluster-OM.CM-DEN-MOF
Cluster-OM.CM-DEN-PROV
Cluster-OM.CM-OM
Cluster-OM.CM-OM-AUTH
Cluster-OM.CM-OM-AUTH-COM
Cluster-OM.CM-OM-COM
Cluster-OM.CM-OM-MAN
Package-CVM-CFS.CM-CVM-CFS
Package-CVM-CFS.CM-CVM-CFS-COM
Package-Manager.CM-PKG
Package-Manager.CM-PKG-MAN
Cluster-Monitor.CM-CORE
Cluster-Monitor.CM-CORE-COM
Cluster-Monitor.CM-CORE-MAN
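For example, an installation from a local software depot might look like this (the depot path is a placeholder):

# Install the Serviceguard product from a software depot
swinstall -s /var/tmp/sg_depot T1905CA

# Verify the installation by listing all installed Serviceguard components
swlist -R T1905CA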


NOTE: There are files in CM-CORE that are reserved for HP support. Do not change these files. Do not move, alter, or delete the following:
/usr/contrib/bin/cmcorefr
/usr/contrib/bin/cmdumpfr
/usr/contrib/bin/cmfmtfr
/usr/contrib/Q4/lib/q4lib/cmfr.pl
/var/adm/cmcluster/frdump.cmcld.x (where x is a digit)

NOTE: If you did a swremove of an older version of Serviceguard before the swinstall, a zero-length binary configuration file (/etc/cmcluster/cmclconfig) may be left on your system. Remove this file before you issue the swinstall command. If you do not remove the zero-length binary configuration file, the installation will proceed correctly, but you may see error or warning messages such as:
Bad binary config file directory format. Could not convert old binary configuration file.

These messages may safely be ignored.
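A minimal sketch of the check and cleanup before running swinstall:

    # Check for a leftover binary configuration file
    ls -l /etc/cmcluster/cmclconfig
    # If the size shown is 0, remove the file before running swinstall
    rm /etc/cmcluster/cmclconfig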

If You Need To Disable identd


CAUTION: HP does not recommend disabling this security feature, as it maintains the integrity and high availability of your data. If you must disable identd, do it after installing Serviceguard but before each node rejoins the cluster (for example, just before issuing the cmrunnode or cmruncl command). Instructions are in Chapter 5 of the Managing Serviceguard manual, under the heading Disabling identd in the section Managing the Running Cluster.

Upgrading from an Earlier Serviceguard Release


You can upgrade to Serviceguard A.11.20 from any earlier release (whether or not that release is still in support life), but you can perform a rolling upgrade only from A.11.19. Read the bullets that follow carefully before proceeding.
This release of Serviceguard requires HP-UX 11i v3. For information about HP-UX upgrade paths, see the HP-UX Installation and Upgrade Guide at http://www.hp.com/go/hpux-core-docs > HP-UX 11i v3.
In order to do a rolling upgrade to Serviceguard A.11.20, you must be running A.11.19. If you are running an earlier version, upgrade to A.11.19 before attempting a rolling upgrade to A.11.20. See Rolling Upgrade (page 46). This restriction does not apply to non-rolling upgrades. For more information about non-rolling upgrade, see Appendix D of Managing Serviceguard, which you can find at the address


given under Documents for This Version (page 34); see also Non-Rolling Upgrade Using DRD (page 44).
CAUTION: Special considerations apply to a rolling or non-rolling upgrade to Serviceguard A.11.19; see New Cluster Manager (page 8).
If you are upgrading both the Quorum Server and Serviceguard, upgrade the Quorum Server before you upgrade Serviceguard.
CAUTION: If you are using an alternate Quorum Server subnet (page 30), and you are not already running Quorum Server version A.04.00, you must upgrade to version A.04.00 before you proceed; see Quorum Server Upgrade Required if You Are Using an Alternate Address (page 9).
If you are upgrading from a release earlier than A.11.16, see Access changes as of A.11.16 (page 33).
For information about supported Serviceguard versions, see the support matrix at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard.

CAUTION: Make sure that no package is in maintenance mode when you upgrade Serviceguard; see Maintaining a Package: Maintenance Mode in chapter 7 of Managing Serviceguard for more information about maintenance mode.

Upgrade Using DRD


DRD stands for Dynamic Root Disk. Using a Dynamic Root Disk on HP-UX 11i v3 allows you to perform the update on a clone of the root disk, then halt the node and reboot it from the updated clone root disk. You can obtain the latest version of the DRD software free from http://www.hp.com/go/drd.
IMPORTANT: Use the clone disk only on the system on which it was created. Serviceguard does not support booting from a clone disk made on another system (sometimes referred to as DRD re-hosting).
Rolling Upgrade Using DRD
A rolling upgrade using DRD is like a rolling upgrade, but is even less disruptive because each node is down for a shorter time. It is also very safe; if something goes wrong you can roll back to the original (pre-upgrade) state by rebooting from the original disk. This method is the least disruptive, but you need to make sure your cluster is eligible; see Requirements for Rolling Upgrade to A.11.20 (page 47) and Restrictions for DRD Upgrades (page 45). If, after reading and understanding the restrictions, you decide to perform a rolling upgrade using DRD, follow the instructions under Performing a Rolling Upgrade Using DRD in Appendix D of the latest edition of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34).
Non-Rolling Upgrade Using DRD
In a non-rolling upgrade with DRD, you clone each node's root disk and apply the upgrade to the clone, then halt the cluster and reboot each node from its updated clone root disk. This method involves much less cluster down time than a conventional non-rolling upgrade, and is particularly safe because the nodes can be quickly rolled back to their original (pre-upgrade) root disks. But you must make sure your cluster is eligible; see Restrictions for DRD Upgrades (page 45).
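For orientation only, a DRD-based update on one node typically follows this general pattern. This is a sketch: the disk device and depot path are assumptions, and the authoritative procedure is in Appendix D of Managing Serviceguard.

    # Clone the active root disk to a spare disk (device name is illustrative)
    drd clone -t /dev/disk/disk5
    # Install the new Serviceguard version onto the inactive clone
    drd runcmd swinstall -s /var/depot T1905CA
    # Make the clone the boot disk for the next reboot, then reboot the node
    drd activate
    shutdown -r -y 0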


If, after reading and understanding the restrictions, you decide to perform a non-rolling upgrade using DRD, follow the instructions under Performing a Non-Rolling Upgrade Using DRD in Appendix D of the latest edition of Managing Serviceguard, which you can find at the address given under Documents for This Version (page 34).
Restrictions for DRD Upgrades
DRD is available only on HP-UX 11i v3.
As of the date of these release notes, the only paths that are supported for DRD upgrade are:
from Serviceguard version A.11.18 with patch PHSS_37602 (or a later cumulative patch) to:
    A.11.19
    an A.11.19 patch release
    A.11.20
    an A.11.20 patch release
from Serviceguard version A.11.19 to:
    an A.11.19 patch release
    A.11.20
    an A.11.20 patch release
IMPORTANT: Upgrades from earlier releases via DRD are not currently supported.
DRD upgrades to version A.11.19 require that you make a manual fix to a checkinstall script in the software depot. You need to make this fix if:
    You are using drd runcmd update-ux to upgrade to Serviceguard A.11.19; or
    You are using drd runcmd swinstall to upgrade to Serviceguard A.11.19.
For instructions on making the fix, see the Workaround Notes for defect QXCR1000901306 at ITRC.hp.com; see Fixed in This Version (page 49) for more information about looking up defects.
You do not need to make this fix if:
    You are upgrading from Serviceguard A.11.18 or A.11.19 to A.11.20, or
    You are using drd runcmd swinstall to upgrade from one revision of Serviceguard A.11.19 to a later revision, or
    You are using drd runcmd update-ux to upgrade to an HP-UX OE containing Serviceguard A.11.20.

As of the date of these release notes, DRD upgrade is supported for clusters that use the LVM or VxVM volume manager.


IMPORTANT: See the HP Serviceguard Storage Management Suite A.04.00 Release Notes and HP Serviceguard Storage Management Suite A.03.00 Release Notes for information about DRD upgrades in CVM/CFS environments.
Use the DRD software released with the September 2009 release of HP-UX 11i v3 or later. You do not have to upgrade the operating system itself, so long as you are running 11i v3. HP recommends using the latest version of the DRD software, which you can obtain free from HP (see Upgrade Using DRD above).
Serviceguard does not support booting from a clone disk made on another system (sometimes referred to as DRD re-hosting).
The cluster must meet both general and release-specific requirements for a rolling or non-rolling upgrade; see the remainder of this section and the sections Guidelines for Rolling Upgrade and Guidelines for Non-Rolling Upgrade in Appendix D of the latest edition of Managing Serviceguard.
You must follow the instructions for DRD upgrades in Managing Serviceguard; see Performing a Rolling Upgrade Using DRD or Performing a Non-Rolling Upgrade Using DRD in Appendix D of the latest edition of that manual.
IMPORTANT: The following information is additional to the instructions in Appendix D of Managing Serviceguard.
If the install depot is on tape (or is a single file) you must copy it onto the clone disk or an upgrade server before running swinstall or update-ux to upgrade the clone. Do one of the following (a sketch of the first option follows this list):
    use swcopy to copy the depot onto the active root disk, then clone the disk, then install the software from the depot on the clone disk; or
    use swcopy to copy the depot to an external server, then use the depot on the external server to install the software onto the clone disk.
You should also do this if you are going to use update-ux and the install depot is on a CD mounted on the local system, or is a directory on the active system (for example, in a file system mounted on the active root disk). In addition, make sure you do not use Control-C to escape from a drd runcmd swinstall command, as this can cause problems.
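A minimal sketch of the first option (the source and target paths are illustrative assumptions):

    # Copy the depot from removable media onto the active root disk...
    swcopy -s /cdrom/depot \* @ /var/depot
    # ...then clone the disk and install from the local copy
    drd clone -t /dev/disk/disk5
    drd runcmd swinstall -s /var/depot T1905CA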

Veritas Storage Management Products


For information about installing and updating VxVM, see the appropriate version of the Veritas Installation Guide at http://www.hp.com/go/hpux-core-docs > 11i v3 Setup and install general.
NOTE: A new default Disk Layout Version was introduced in VxVM 4.1, and not all earlier Disk Layout Versions are supported on HP-UX 11i v3. See the Veritas 5.0 Installation Guide for details.

Rolling Upgrade
In some cases you can upgrade Serviceguard and HP-UX without bringing down the cluster; you do this by upgrading one node at a time, while that node's applications are running on an alternate node. The process is referred to as a rolling upgrade, and is described in Appendix D of the Managing Serviceguard manual.
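At a high level (a sketch; the node name is illustrative, and the full procedure in Appendix D is authoritative), each node is cycled like this:

    # Halt Serviceguard on one node; with -f, packages running on it
    # are halted (failover behavior depends on package configuration)
    cmhaltnode -f node1
    # Upgrade HP-UX and/or Serviceguard on node1, then rejoin the cluster
    cmrunnode node1
    # Repeat for each remaining node, one at a time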


Requirements for Rolling Upgrade to A.11.20
To perform a rolling upgrade to Serviceguard A.11.20, you must be running:
    HP-UX 11i v3, and
    Serviceguard A.11.19

If you are not already running A.11.19, you may be able to do a rolling upgrade to A.11.19, and from A.11.19 to A.11.20. See the next two subsections.
Requirements for Rolling Upgrade to A.11.19
IMPORTANT: Although you can upgrade to Serviceguard A.11.19 while still on HP-UX 11i v2, you must upgrade to HP-UX 11i v3 in order to upgrade to Serviceguard A.11.20.
Rolling upgrade to Serviceguard A.11.19 is supported only if you are upgrading from:
    Serviceguard A.11.15 or greater on HP Integrity systems running HP-UX 11i v2 or 11i v3; or
    Serviceguard A.11.16 or greater on HP 9000 systems running HP-UX 11i v2 or 11i v3.
IMPORTANT: If you are upgrading from A.11.16 on HP-UX 11i v2, you must first install patch PHSS_31072 or a later patch.
See also the requirements listed above under Upgrading from an Earlier Serviceguard Release (page 43), and the Rolling Upgrade Exceptions that follow.
Obtaining a Copy of Serviceguard A.11.19
If you are running a release earlier than A.11.19, and you need to obtain a copy of A.11.19 so that you can do a rolling upgrade to A.11.20, you can download the software from the web at http://www.software.hp.com/kiosk. Log in as follows:
    User name: ESS_SG1119_KIOSK
    Password: upgrade21120

Rolling Upgrade Exceptions


HP-UX Cold Install
A rolling upgrade cannot include a cold install of HP-UX on any node. A cold install will remove configuration information; for example, device file names (DSFs) are not guaranteed to remain the same after a cold install.
HP Serviceguard Storage Management Suite and standalone CVM product
In many cases you cannot do a rolling upgrade if you are upgrading the HP Serviceguard Storage Management Suite. Specifically, you cannot do a rolling upgrade if you are using the Veritas clustering capabilities CVM and CFS. In the case of CVM, this applies whether you purchase it as part of a Storage Management bundle, or as a standalone product. For more information, see the Veritas 5.0 Installation Guide and the HP Serviceguard Storage Management Suite A.02.01 Release Notes at http://www.hp.com/go/hpux-serviceguard-docs > HP Serviceguard Storage Management Suite.
Migrating to Agile Addressing if Using Cluster Lock Disk
You cannot migrate to the HP-UX 11i v3 agile addressing scheme for device files during a rolling upgrade if cluster lock disks are used as a tie-breaker, because that involves changing the cluster configuration. See Updating the Cluster Lock Configuration in Chapter 7 of Managing


Serviceguard for instructions in this case. See About Device Special Files (DSFs) (page 32) of these Release Notes for more information about agile addressing.

Upgrading from an Earlier Release if You Are Not Using Rolling Upgrade (Non-Rolling Upgrade)
If your cluster does not meet the requirements for a rolling upgrade, or you decide not to use rolling upgrade for some other reason, you must bring down the cluster (cmhaltcl) and then upgrade Serviceguard and HP-UX on the nodes. See Appendix D of the Managing Serviceguard manual. You can perform a non-rolling upgrade (that is, an upgrade performed while the cluster is down) from any HP-UX/Serviceguard release.
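In outline (a sketch; the depot path is an assumption), a non-rolling upgrade looks like this:

    # Halt the entire cluster (-f also halts any running packages)
    cmhaltcl -f -v
    # Upgrade HP-UX and/or Serviceguard on every node, for example:
    swinstall -s /var/depot T1905CA
    # When all nodes are upgraded, restart the cluster
    cmruncl -v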

Uninstalling Serviceguard
To uninstall the Serviceguard software, run the SD-UX swremove command. Before removing software, note the following:
1.  Serviceguard must be halted (not running) on the node from which the swremove command is issued.
2.  The system from which the swremove command is issued must be removed from the cluster configuration. (If the node is not removed from the cluster configuration first, swremove will cause the current cluster to be deleted.)
3.  The swremove command should be issued from one system at a time. That is, if Serviceguard is being de-installed from more than one system, it should be removed from one system at a time.
If a zero-length binary configuration file (/etc/cmcluster/cmclconfig) is left on your system, you should remove it.
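A minimal sketch of the removal itself, once the conditions above are met:

    # Run on the node being de-installed, one node at a time, after
    # Serviceguard is halted and the node has been removed from the
    # cluster configuration
    swremove T1905CA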

Patches for this Version


The table below lists patches required or recommended for Serviceguard A.11.20 on HP-UX 11i v3. Before installing Serviceguard, you should also check the Hewlett-Packard IT Resource Center web page for any new patch requirements:
    http://itrc.hp.com (Americas and Asia Pacific)
    http://europe.itrc.hp.com (Europe)
NOTE: One quick way to see which patches have been applied to your system is a command such as the following:
    swlist -l patch | grep applied | more
For complete information, see the section Which Patches Are on a System? in the Patch Management User Guide for HP-UX 11.x Systems, at http://www.hp.com/go/hpux-core-docs.
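For example (a sketch), to check whether a specific required patch such as PHKL_37458 is installed:

    # Lists the patch if it is present; no output means it is not installed
    swlist -l patch | grep PHKL_37458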
Table 2 Patches
    PHNE_35894 or later cumulative patch
        Patch to enable online replacement of LAN cards on HP-UX 11i v3. Fixes QXCR1000575890; see QXCR1000575890: OLR of a LAN Card in SG cluster fails on HP-UX 11i v3 (page 49).
    PHKL_37458 or later cumulative patch
        Patch to modify the GIO subsystem to export the wwid and uniq_name properties of a LUN. This patch is required.


Table 2 Patches (continued)
    PHCO_39413 or later cumulative patch
        Patch to enable display of wwid and uniq_name properties in the output of ioscan(1M). This patch is recommended.
    PHSS_41628
        Patch to enable the features introduced in HP Serviceguard A.11.20 on HP-UX 11i v3 listed under New Features for A.11.20 April 2011 Patches (page 13).
    PHSS_41674
        Patch to enable the Modular CFS Package feature introduced in HP Serviceguard CFS A.11.20 on HP-UX 11i v3 listed under New Features for A.11.20 April 2011 Patches (page 13). NOTE: This patch is needed only if SG Storage Management Suite A.03.01 or A.04.00 is installed on the node. To install this patch, PHSS_41628 must be installed.

QXCR1000575890: OLR of a LAN Card in SG cluster fails on HP-UX 11i v3


Problem: Online replacement (OLR) of a LAN card in a Serviceguard cluster fails on a system running HP-UX 11i v3 because the Critical Resource Analysis (CRA) performed as part of the OLR operation returns CRA_SYS_CRITICAL. You will encounter this problem on an unpatched HP-UX 11i v3 system whether you use the Peripheral Device Tool (pdweb) or the HP-UX olrad command.
Workaround: Apply patch PHNE_35894. (See Patches for this Version (page 48) for more information about patches.)
NOTE: You can apply the patch without a reboot.

On a system to which the patch has been applied, you will be able to perform online replacement of hot-swappable cards (without bringing down the cluster). See Replacing LAN or Fibre Channel Cards in Chapter 8 of Managing Serviceguard for more information. (You can find the manual at http://www.hp.com/go/hpux-serviceguard-docs.) NOTE: If for some reason you need to proceed without the patch, you must follow the Off-Line Replacement procedure under Replacing LAN or Fibre Channel Cards in Chapter 8 of Managing Serviceguard.

Fixed in This Version


This section lists defects that have been fixed in this release.
NOTE: Serviceguard A.11.19 also includes all fixes already included in patches to earlier Serviceguard versions; these fixes are not necessarily documented here.
You can find more information about these defects at ITRC.hp.com. Proceed as follows:
1.  Log in to ITRC.hp.com.
2.  Choose >> Search knowledge base in the left frame.
3.  Enter the defect number beginning QXCR as the search string (for example QXCR1000472750), or the number beginning JAG if there is no QXCR number.
4.  Search against Engineering notes, solutions, bug reports, FAQs (uncheck the other items in the list).


A.11.20 Defects Fixed in the April 2011 Patches


QXCR1001047727: Out of order messages and aborts after lan failures with SG 11.19
QXCR1001046353: cmclconfd does not reset the hostname resolution family after deleting the cluster
QXCR1001020964: CrossSubnet: Not able to form the cluster with IPv6 address
QXCR1001037833: cmpreparecl: Serviceguard is not installed
QXCR1000924066: System becomes unresponsive after repeated Serviceguard service failures
QXCR1001015937: Modular packages are at risk of failure during online reconfiguration
QXCR1001017615: Member timeout difference after rolling upgrade causes cmapplyconf failure

Additional fixes are listed in the patch form text, which is delivered with the patches.

Defects Fixed in A.11.20


The following defects are fixed in Serviceguard A.11.20 (some may also have been fixed in recent patches).
QXCR1001024162: cmlvmd reports Unable to initialize ESCSI kernel interface: File exists (17)
QXCR1001023633: cmsnmpd experiences fd leak if identd fails
QXCR1001021604: EMS resources are not unregistered when cmcld exits after unsuccessful cmrunnod
QXCR1001021711: SG A.11.19 does not clear cluster lock area when cluster starts
QXCR1001017930: Removing an external script from a modular package
QXCR1001017176: multi-threaded volume group activation may result in non-attached PV's
QXCR1001016805: cmcld core and node TOC after rolling upgrade from 11.17 to 11.19
QXCR1001015586: SG PHSS_40152 may result in silent corruption on LVM versions lower than 0909
QXCR1001013739: Corrupt dlpi message can result in cmnetd failure and system TOC
QXCR1001011472: Modular packages are at risk of failure during online reconfiguration
QXCR1001009221: Serviceguard 11.19 cmproxyd reports cluster is not configured when it is
QXCR1001007814: port scan brings down cmcld
QXCR1001007803: SG 11.20 unlimited modular package service restarts is impossible online
QXCR1001007809: Nodes cannot be removed from cross subnet CFS cluster online


QXCR1001007816: Serviceguard 11.20 cmcld abort on unexpected UDP message version
QXCR1001007347: Serviceguard 11.20 installation does not preserve hacl-cfg options
QXCR1001005198: cmcld has a file descriptor and memory leak when connect errors occur
QXCR1001005241: cmmodnet -t incorrectly plumbs secondary interface on lan
QXCR1001005296: cmcld runs into an assertion at cluster start
QXCR1001003944: SG cross subnet functionality breaks non symmetric cluster configurations
QXCR1000984406: cmruncl -n may fail if MEMBER_TIMEOUT is 60 seconds or more
QXCR1000984418: cmclconfd -p does not process query requests from commands after nessus scan
QXCR1000984388: Packages may re-start if they are halted at the same time and have dependencies
QXCR1000984408: cmproxyd does not clean up named pipe files in /var/adm/cmcluster/proxies dir
QXCR1000984401: script_log_file does not resolve more than one $SG_PACKAGE variable
QXCR1000984411: Serviceguard 11.19 unlimited modular package service restarts is impossible
QXCR1000957441: ALL nodes in ALL 11.19 clusters fail after quorum server re-configuration
QXCR1000945173: all cmcld threads in ksleep() safetytimer expiration node TOC
QXCR1000939872: cmcld abort due to closing the same fd twice after QS failure
QXCR1000930233: 11.19 cmclconfd liable to core dump and log messages omit hostname
QXCR1000926553: cmclconfd hang
QXCR1000924956: SG cmcld does not log .cm_start_time message correctly in syslog after abort
QXCR1000924958: cmcld SIGSEGV when starting a 1 node cluster after upgrade to A.11.19
QXCR1000924001: cmfileassistd should not die after 1min of inactivity
QXCR1000924069: Serviceguard does not detect cluster lock recovery
QXCR1000924116: cmdisklock core dumps if the open fails on a locklun
QXCR1000923632: cmcld aborts after short hangs on one-node-cluster
QXCR1000923641: cmcld aborts after short hangs on one-node-cluster

Additional fixes are listed in the patch form text, which is delivered with the patches.


Problems Fixed in this Version of the Serviceguard Manager Plug-in


The following defects have been fixed in Version B.03.10 of Serviceguard Manager:
QXCR1001103488: SGManager displays wrong status of SGeRAC as configured
QXCR1001031495: DTS AR1009 FR: SG Mgr - Metrocluster configuration ease of use input
QXCR1001047635: Package goes into maintenance if there is any .debug file in package directory
QXCR1001061996: Toolkit maintenance mode state being incorrectly reported by SGMgrPI
QXCR1001067056: Disallow Halt Package on node where it is not running
QXCR1001075056: Add ASM PKG dependency automatically to ASM_DB package
QXCR1001075087: RACDB and ASMDG pkgs should display maintenance status when CRS is in maintenance mode
QXCR1001079529: ServiceguardManagerPI operations fail when launched from HP SIM on windows CMS
QXCR1001081511: Serviceguard Manager PI 2.0 refresh is very slow when node down or fails to start
QXCR1001100842: SGMgr displays wrong warning on mouse over for SGeSAP alert
QXCR1001086491: Tooltip for Package "State/Status" does not mention "Detached" state
QXCR1001088102: Preview Halt node operation should display proper message
QXCR1001090394: OLH page should be updated for Cluster and Node status
QXCR1001093940: Message in SGMgr GUI should change after operation is complete
QXCR1001096366: Running multiple Packages configured for different nodes in single run operation
QXCR1001098626: Map View does not display icons for cluster, node and package in Internet Explorer
QXCR1001098914: OLH page for SAP Database Instance Module must be updated for DB2
QXCR1001099255: SGeRAC packages maintenance mode tooltip should be consistent with OLH
QXCR1001100699: OLH for SGeRAC Oracle RAC DB, ASMDG parameters should be consistent with tooltip
QXCR1001100700: SGeRAC Toolkit RAC DB Package configuration page should be updated
QXCR1001101146: Configuration menu should be disabled while packages detached in the cluster

Additional fixes are listed in the patch form text, which is delivered with the patches.


Known Problems
This section lists problems in Serviceguard Version A.11.20 known at the time of publication. This list is subject to change without notice. More-recent information may be available from your HP support representative, or on the Hewlett-Packard IT Resource Center (ITRC): http://www.itrc.hp.com (Americas and Asia Pacific) or http://www.europe.itrc.hp.com (Europe). You can find details of each defect on the ITRC web site. See Fixed in This Version (page 49) for instructions on looking up a defect.

Known Problems for Serviceguard


QXCR1001063125: SGWBEMProviders A.03.10.00 reports installation error
QXCR1000876609: member crashes during cluster reformation, syslog shows cmcld was running
QXCR1001008705: cmclconfd & cmproxyd cored in sg_is_ipv6only_hostname ()
QXCR1001028611: SGProvider core when cmrunnode/cmruncl with detached packages fail
QXCR1001019766: SGProvider: Caught CIM Exception messages in syslog
QXCR1001026866: SGProvider logs Permission denied to 127.0.0.1 msg in syslog when cluster is not configured
QXCR1001020972: SG package validation script incorrectly thinks mount point is nested
QXCR1001108352: Package IP addresses for MNPs should result in cmapplyconf failures
QXCR1001107652: Regarding multiple entries issue using cmmakepkg -t option
QXCR1001092092: The sg_log function for packages does not display the date correctly
QXCR1001112638: cmcheckconf reports error for low member timeout on single node cluster
QXCR1001111657: cmcheckconf dumps core for huge CONFIGURED_IO_TIMEOUT_EXTENSION
QXCR1001113972: cmmodnet fails adding valid IP address ending on 255
QXCR1001105246: ST:cmcld core dump at in cl_list2_enqueue -at utils/cl_list2.c:184
QXCR1000992056: Changing polling interval online can result in false failovers
QXCR1001109402: SafetyTimer getting terminated after 24 hours of run
QXCR1001098978: cmapplyconf does not validate mount options for ckpt package
QXCR1001098295: Long mount point results in awk error for CFS MP DG pkg
QXCR1001098287: Package incorrectly fails when using cfs_mount_options


Known Problems for Serviceguard Manager


The following problems in Serviceguard Manager B.03.10 were known at the time of publication.
QXCR1001119074: Some SGMgr labels and notes need to be localized
QXCR1001119063: In a site aware cluster, OC package per site is allowed ONLY when configured through package easy deployment

About Serviceguard Releases


Types of Releases and Patches
Versions of Serviceguard are provided as platform releases, feature releases, or patches.

Platform Release
A platform release is a stable version of Serviceguard, which is the preferred environment for the majority of Serviceguard customers. Platform releases may also contain new Serviceguard features. These releases are supported for an extended period of time, determined by HP. Patches will be made available within the extended support time frame even though a newer version of Serviceguard is available. Serviceguard A.11.20 is a platform release.
NOTE: For compatibility information for this and earlier releases, see the Serviceguard Compatibility and Feature Matrix at the address given under Documents for This Version (page 34). See also Compatibility Information and Installation Requirements (page 35) in these Release Notes.

Feature Release
A feature release contains new Serviceguard features. Feature releases are for customers who want to use the latest features of Serviceguard. In general, feature releases will be supported until a newer version of Serviceguard becomes available. In order to receive fixes for any defects found in a feature release after a newer version is released, you will need to upgrade to the newer, supported version.

Patch
A patch to a release may be issued in response to a critical business problem found by a Serviceguard customer, or a patch may enable new features. (Such features do not affect the running of an existing Serviceguard cluster until you activate them, for example by reconfiguring the cluster or reworking a package.) In the case of a patch, the following is guaranteed:
    Patch-specific release testing is performed before the patch is posted.
    Existing functionality, scripts, etc. will continue to operate without modification.
    All fixes from the previous patch are incorporated.
Certification testing for a patch is recommended only for those fixes that are important to your specific installation, or for new features that you intend to use.

Supported Releases
For information about the support life of a given release and its compatibility with versions of related products, see the Serviceguard Compatibility and Feature Matrix, at http://www.hp.com/go/hpux-serviceguard-docs.


Version Numbering
Serviceguard releases employ a version numbering string that helps you identify the characteristics of the release. The version string has 4 parts:
    Alphabetic Prefix
    First Numeric Field
    Second Numeric Field
    Third Numeric Field
When a new release is issued, different portions of the version string are incremented to show a change from a previous version of the product.
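As an illustration only (the four-part form A.11.20.00 is an assumption here, since the third field appears only in patch-level versions): A is the alphabetic prefix, 11 the first numeric field, 20 the second, and 00 the third.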

Release Notes Revisions


Occasionally, important new information warrants revising the Release Notes after they have gone to press. In such cases HP updates the Release Notes at http://www.hp.com/go/hpux-serviceguard-docs. Versions with the same part number are differentiated by the publication date.

Native Languages
Serviceguard Manager provides Native Language Support, but the command-line interface does not.
IMPORTANT: Even though the command-line interface does not provide Native Language Support, Serviceguard functions correctly whether or not the LANG variable is set to C. See Serviceguard Manager (page 24) for a list of the languages supported by Serviceguard Manager.
Documentation for major releases of Serviceguard is translated into the following languages:
    Japanese
    Simplified Chinese

