Front cover

HACMP System
Administration I: Planning and
Implementation
(Course code AU54)

Student Notebook
ERC 8.0

IBM certified course material

Trademarks
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AIX, AIX 5L, Approach, BladeCenter, Cross-Site, DB2, DS4000, DS6000, DS8000,
Enterprise Storage Server, General Parallel File System, GPFS, HACMP, NetView,
Notes, POWER, POWER5, pSeries, Redbooks, Requisite, SP, System i5, System p,
System p5, System Storage, Tivoli, TotalStorage, WebSphere

Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the
United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Other company, product, or service names may be trademarks or service marks of others.

June 2008 edition


The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.
© Copyright International Business Machines Corporation 1998, 2008. All rights reserved.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.


Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Course description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Unit 0. Course introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-1
Course objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-2
Course agenda (1 of 5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-3
Course agenda (2 of 5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-4
Course agenda (3 of 5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-5
Course agenda (4 of 5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-6
Course agenda (5 of 5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-7
Lab exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-8
Student Guide font conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-9
Course overview summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0-10
Unit 1. Introduction to HACMP for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.1 High Availability concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
High availability and HACMP concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
So, what is High Availability? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
Eliminating single points of failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6
High availability clusters (HACMP base) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
So, what about site failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
IBM's HA solution for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Fundamental HACMP concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
A highly available cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
HACMP's topology components (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
HACMP's topology components (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
HACMP's resource components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
What is HACMP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
Additional features of HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24
Some Assembly Required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
Let's review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-28
1.2 What does HACMP do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-29
Topic 2 objectives: What does HACMP do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
Just What Does HACMP Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-31
What happens when something fails? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-32
What happens when a problem is fixed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-33
Standby (active/passive) with fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-34
Standby (active/passive) without fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-36
Mutual takeover: Active/Active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-37
Concurrent: multiple active nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-39
Points to ponder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-41


Other considerations for HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-42
Things HACMP Does Not Do . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-44
When is HACMP not the correct solution? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-45
What do we plan to achieve this week? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-47
Overview of the implementation process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-48
Hints to get started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-50
Sources of HACMP information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-51
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-52
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-53
Unit 2. Networking considerations for high availability . . . . . . . . . . . . . . . . . . 2-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-2
2.1 How HACMP uses networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
How HACMP uses networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-4
How does HACMP use networks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-5
Providing HA client access to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-7
What HACMP detects and diagnoses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-9
Heartbeat packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-11
Failure detection versus failure diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-13
Failure diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-14
What if all heartbeat packets stop? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-15
CRITICAL: All clusters require a non-IP network . . . . . . . . . . . . . . . . . . . . . . . . . .2-17
The two subnet rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-19
Failure recovery and reintegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-20
Let's review topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-21
2.2 HACMP concepts and configuration rules . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23
HACMP concepts and configuration rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-24
HACMP networking support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-25
Network types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-27
HACMP topology components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-29
Naming nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-31
HACMP network component terms (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-33
HACMP network component terms (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-34
IP network configuration rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-36
Non-service IP address examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-39
Non-IP network configuration rules: Point-to-point . . . . . . . . . . . . . . . . . . . . . . . .2-40
Non-IP network configuration rules: Multi-node . . . . . . . . . . . . . . . . . . . . . . . . . . .2-43
Persistent node IP labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-45
Let's review topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-47
2.3 Implementing IP address takeover (IPAT) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49
Implementing IP Address Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-50
IP Address Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-51
Two ways to implement IPAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-53
IPAT via IP aliasing configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-55
IPAT via IP aliasing at startup of resource group . . . . . . . . . . . . . . . . . . . . . . . . . .2-58
IPAT via IP aliasing after an interface fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-59
IPAT via IP aliasing after a node fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-61


IPAT via IP aliasing: Distribution preference for service IP label aliases . . . . . . . 2-62
IPAT via IP aliasing summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-64
IPAT via IP replacement overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-66
Service IP address examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-68
Adopt labeling/naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-69
Hostname resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-70
Other configurations - Etherchannel (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-72
Other configurations - Etherchannel (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-73
Other configurations: Base virtual Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-74
HACMP view of virtual Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-75
Other configurations: Single IP adapter nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-76
Talk to your network administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-77
Changes to AIX start sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-78
Changes to /etc/inittab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-79
Common TCP/IP configuration problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-80
Let's review topic 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-81
2.4 The impact of IPAT on clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-83
The impact of IPAT on clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-84
How are users affected? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-85
What about the users' computers? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-87
Local or remote client? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-88
Gratuitous ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-89
Gratuitous ARP support issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-90
What if gratuitous ARP is not supported? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-91
Option 1: clinfo on the client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-92
Option 2: clinfo from within the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-93
clinfo.rc script (extract) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-94
Option 3: Hardware address takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-96
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-98
Unit summary (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-99
Unit summary (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-100
Unit 3. Shared storage considerations for high availability . . . . . . . . . . . . . . . . 3-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.1 Fundamental shared storage concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Fundamental shared storage concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
What is shared storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
What is private storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Access to shared data must be controlled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Who owns the storage? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Reserve/release-based protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Reserve/release disk takeover: Manual move . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Reserve/release disk takeover - failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Reserve/release ghost disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
RSCT-based shared storage protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
Enhanced concurrent volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20

ECMVG varyon - active versus passive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-22


ECMVG state: Active versus passive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-24
How ECMVGs work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-25
Determining ECMVG or Group Services status . . . . . . . . . . . . . . . . . . . . . . . . . . .3-27
RSCT-based fast disk takeover: Manual move . . . . . . . . . . . . . . . . . . . . . . . . . . .3-28
RSCT-based fast disk takeover: Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-29
Fast disk takeover details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-30
Let's review topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-31
3.2 Shared disk technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33
Shared disk technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-34
Shared disk and HACMP strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-35
Virtual storage (VIO) and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-37
IBM SAN storage and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-43
Non-IBM SAN storage and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-44
SCSI technology and HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-46
Physical volume IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-49
Support for OEM disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-51
Let's review topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-55
3.3 Shared storage from the AIX perspective. . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-57
Topic 3 objectives: Shared storage from the AIX perspective . . . . . . . . . . . . . . . .3-58
Logical Volume Manager review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-59
LVM relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-61
ODM-LVM relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-62
Creating a shared volume group: Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-63
LVM mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-66
Steps to create a mirrored file system - manually . . . . . . . . . . . . . . . . . . . . . . . . . .3-68
Mirroring? Let's talk quorum checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-70
Elimination of quorum issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-73
Allow HACMP to handle it: Forced varyon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-75
Recommendations for forced varyon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-77
LVM and HACMP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-78
Support for OEM volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-80
Support for OEM file systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-82
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-84
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-85
Unit 4. Planning for applications and resource groups. . . . . . . . . . . . . . . . . . . 4-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-2
How to define an application to HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-3
Application considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-5
Writing start and stop scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-8
Where should data go? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-10
Resource group policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-12
Startup policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-14
Online on all available nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-16
Fallover policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-18
Fallback policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-20

Valid combinations of policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Dependent applications/resource groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26

Unit 5. HACMP installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.1 Installing the HACMP 5.4.1 software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Installing the HACMP software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Steps for successful implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Where are we in the implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
First steps in planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
What is on the CD? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Install the HACMP filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Don't forget the prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Some final things to check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Install HACMP client machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Let's review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
5.2 What was installed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
What was installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
The layered look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
HACMP components and features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Cluster manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Cluster secure communication subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Cluster communication daemon (clcomd) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
clcomd standard connection authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
RSCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
HACMP from an RSCT perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
Heartbeat rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
HACMP's SNMP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Cluster information daemon (clinfo) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Highly available NFS server support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
Shared external disk access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-44
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-45
Unit 6. Initial cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
What we are going to achieve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Where are we in the implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
The topology configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Configuration methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Planning and base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
The top-level HACMP smit menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
The standard configuration method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Add nodes to an HACMP cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
What did we get? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15

Now define highly available resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-16


Start with service addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-18
Adding service IP labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-19
Add xweb service label (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-20
Add xweb service label (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-21
Continue with application servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-23
Add xwebserver application server (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-24
Add xweb application server (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-25
Configure volume groups (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-27
Discover the volume groups for pick-lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-28
Adding the xwebgroup resource group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-29
Setting name, nodes, and policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-30
Adding resources to the xwebgroup RG (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . .6-31
Adding resources to the xwebgroup RG (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . .6-32
Synchronize and test the changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-34
What do we have at this point? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-37
Extending the configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-39
Extended topology configuration menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-40
Communication interfaces and devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-41
Defining a non-IP network (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-42
Defining a non-IP network (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-43
Defining a non-IP network (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-44
Defining persistent node IP labels (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-46
Defining persistent node IP labels (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-47
Defining persistent node IP labels (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-48
Synchronize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-49
Save configuration: snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-51
Save configuration: xml file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-52
Two-node cluster configuration assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-53
What does the two-node assistant give you? . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-55
Where are we in the implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-57
Starting Cluster Services (1 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-58
Starting Cluster Services (2 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-59
Starting Cluster Services (3 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-60
Starting Cluster Services (4 of 4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-61
Removing a cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-62
We're there! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-63
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-64
Break time! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-65
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-66
Unit 7. Basic HACMP administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-2
7.1 Topology and resource group management . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Topology and resource group management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-4
Yet another resource group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-5
Adding the third resource group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-6


Adding a third service IP label (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7


Adding a third service IP label (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Adding a third application server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Adding resources to the third RG (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-11
Adding resources to the third RG (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Synchronize your changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
Expanding the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
Adding a new cluster node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
Add node: Standard path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Add node: Standard path (in progress) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-18
Add node: Extended path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-19
Define the non-IP rs232 networks (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Define the non-IP rs232 networks (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Synchronize your changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-23
Start Cluster Services on the new node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-24
Add the node to a resource group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-25
Shrinking the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Removing a cluster node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-27
Removing an application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
Removing a resource group (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-29
Removing a resource group (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31
Removing a resource group (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
Let's review: Topic 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
7.2 Cluster single point of control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
Cluster single point of control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-36
Administering a high availability cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
Cluster single point of control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40
The top-level C-SPOC menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-42
Starting cluster services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-43
Verifying that cluster services has started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-45
Checking on what actually happened . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-47
Stopping cluster services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-48
Verifying that cluster services has stopped (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . 7-50
Verifying that cluster services has stopped (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . 7-52
Managing shared LVM components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-54
Creating a shared volume group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-56
Discover, add VG to resource group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-57
Creating a shared file system (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-58
Creating a shared file system (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-60
LVM change management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-61
LVM changes: Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-63
LVM changes: Lazy update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-64
LVM changes: C-SPOC synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-66
Enhanced concurrent mode volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-68
The best method: C-SPOC LVM changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-69
LVM changes: Select your file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-70

Update the size of a file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-71


HACMP resource group operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-72
Priority override location (POL): Old . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-73
Priority override location (POL): New . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-75
Moving a resource group (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-76
Moving a resource group (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-78
Bring a resource group offline (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-79
Bring a resource group offline (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-80
Bring a resource group offline (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-81
Bring a resource group back online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-82
Log files generated by HACMP - before HACMP 5.4.1 . . . . . . . . . . . . . . . . . . . . .7-83
Log files generated by HACMP - HACMP 5.4.1 and later . . . . . . . . . . . . . . . . . . 7-84
Let's review topic 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-86
7.3 Dynamic automatic reconfiguration event facility . . . . . . . . . . . . . . . . . . . . . . 7-87
Dynamic Automatic Reconfiguration Event facility . . . . . . . . . . . . . . . . . . . . . . . . .7-88
Dynamic reconfiguration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-89
What can DARE do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-90
What limitations does DARE have? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-91
So how does DARE work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-92
Verifying and synchronizing (standard) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-94
Verifying and synchronizing (extended) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-95
Discarding unwanted changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-97
Rolling back from a DARE operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-99
What if DARE fails? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-101
Dynamic reconfiguration lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-103
Let's review: Topic 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-104
7.4 WebSMIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-105
Implementing WebSMIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-106
Web-enabled SMIT (WebSMIT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-107
WebSMIT main page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-109
WebSMIT context menu controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-111
WebSMIT associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-112
WebSMIT online documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-114
WebSMIT configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-115
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-120
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-121
Unit 8. Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-2
8.1 HACMP events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Topic 1 objectives: HACMP events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-4
What is an HACMP event? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-5
HACMP basic event flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-6
Recovery programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-7
Recovery program example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-8
Event scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-9
process_resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-10
First node starts cluster services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11


Another node joins the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-12
Node leaves the cluster (stopped) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Let's review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
8.2 Cluster customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Topic 2 objectives: Event customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Event processing customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Adding/changing cluster events (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Adding/changing cluster events (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Adding/changing cluster events (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
Recovery commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-23
Adding/changing recovery commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
Points to note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-25
RG_Move event and selective fallover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Customizing event flow for other devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Error notification within smit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Configuring automatic error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Listing automatic error notification (non-virtual HACMP nodes) . . . . . . . . . . . . . . 8-31
Listing automatic error notification (virtual HACMP nodes) . . . . . . . . . . . . . . . . . . 8-33
Adding error notification methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-34
Emulating errors (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-36
Emulating errors (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38
What will this cause? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-40
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-41
Unit 9. Integrating NFS into HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
So, what is NFS? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
NFS background processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Combining NFS with HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
NFS fallover with HACMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-6
Configuring NFS for high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-7
Cross-mounting NFS filesystems (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Cross-mounting NFS filesystems (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
Cross-mounting NFS filesystems (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-12
Choosing the network for cross-mounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-13
Configuring HACMP for cross-mounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Syntax for specifying cross-mounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
Ensuring the VG major number is unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-16
NFS with HACMP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-17
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-18
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Unit 10. Problem determination and recovery. . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Why do good clusters turn bad? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3


Test your cluster before going live! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-5


Tools to help you diagnose a problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-7
Tools available from smit menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-8
Automatic cluster configuration monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-9
Automatic connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-10
HACMP cluster test tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-13
Checking cluster processes (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-15
Checking cluster processes (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-17
Testing your network connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-18
Dead man's switch timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-20
Avoiding dead man's switch timeouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-21
Setting performance tuning parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-23
Enabling I/O pacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-24
Changing the frequency of syncd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-25
SRC halts a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-26
Partitioned clusters and node isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-27
Avoiding partitioned clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-29
Automatic failure data capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-30
Check event status message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-31
Changing the timeouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-33
Recovering from an event script failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-34
Recovering from an event failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-36
A troubleshooting methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-37
Contacting IBM for support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-40
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-41
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10-42
Appendix A. Checkpoint solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Appendix B. Release Notes for HACMP 5.4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Appendix C. IPAT via IP replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1
Appendix D. Configuring target mode SSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-1


Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AIX, AIX 5L, Approach, BladeCenter, Cross-Site, DB2, DS4000, DS6000, DS8000,
Enterprise Storage Server, General Parallel File System, GPFS, HACMP, NetView,
Notes, POWER, POWER5, pSeries, Redbooks, Requisite, SP, System i5, System p,
System p5, System Storage, Tivoli, TotalStorage, WebSphere

Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the
United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Other company, product, or service names may be trademarks or service marks of others.


Course description
HACMP System Administration I: Planning and Implementation
Duration: 5 days
Purpose
This course is designed to prepare students to install and configure a
highly available cluster using HACMP for AIX.

Audience
This course is intended for experienced AIX system administrators with
TCP/IP networking and AIX LVM experience who are responsible for
the planning and installation of an HACMP 5.4.1 cluster on an IBM
System p server running AIX 5L V5.3 or later (the lab exercises are
conducted on AIX 6.1).

Prerequisites
Students should ideally be qualified as IBM Certified Specialists (p5
and pSeries Administration and Support AIX 5L) and in addition have
TCP/IP, LVM storage, and disk hardware implementation skills. These
skills are addressed in the following courses (or can be obtained
through equivalent education and experience):
AU16: AIX 5L System Administration II: Problem Determination
AU07: AIX V4 Configuring TCP/IP

Objectives
After completing this course, you should be able to:
- Explain what high availability is
- Outline the capabilities of HACMP for AIX
- Design and plan a highly available cluster
- Install and configure HACMP for AIX in the following modes of operation:
  - Single resource group on a primary node with standby node
  - Two resource groups in a mutual takeover configuration
- Configure resource group startup, fallover, and fallback policies
- Perform basic system administration tasks for HACMP
- Perform basic customization for HACMP
- Perform basic problem determination and recovery

Curriculum relationship
This course should be taken before AU61: HACMP System Administration II: Administration and Problem Determination.


Agenda
Day 1
Welcome
Unit 1 - Introduction to HACMP for AIX 5L
Unit 2 - Networking considerations for high availability
Exercise 1
Exercise 2

Day 2
Unit 3 - Shared storage considerations for high availability
Unit 4 - Planning for applications and resource groups
Unit 5 - HACMP installation
Exercise 3
Exercise 4
Exercise 5

Day 3
Unit 6 - Initial cluster configuration
Exercise 6

Day 4
Unit 7 - Basic HACMP administration
Unit 8 - Events
Exercise 7
Exercise 8

Day 5
Unit 9 - Integrating NFS into HACMP
Unit 10 - Problem determination and recovery
Exercise 9
Exercise 10


Text highlighting
The following text highlighting conventions are used throughout this book:
Bold            Identifies file names, file paths, directories, user names, and principals.

Italics         Identifies links to Web sites and publication titles, is used where the word or phrase is meant to stand out from the surrounding text, and identifies parameters whose actual names or values are to be supplied by the user.

Monospace       Identifies attributes, variables, file listings, SMIT menus, examples of text similar to what you might see displayed, examples of portions of program code similar to what you might write as a programmer, and messages from the system.

Monospace bold  Identifies commands, daemons, menu paths, and what the user would enter in examples of commands and SMIT menus.


Unit 0. Course introduction


What this unit is about
This unit describes the content of this course.

What you should be able to do


After completing this unit, you should understand the aim of this
course.


Course objectives
After completing this unit, you should be able to:
- Define high availability
- Outline the capabilities of HACMP for AIX
- Design and plan a highly available cluster
- Install and configure HACMP in the following modes of operation:
  - Single resource group on a primary node with a standby node
  - Two resource groups in a mutual takeover configuration
- Perform basic system administration tasks for HACMP
- Perform basic problem determination and recovery


Figure 0-1. Course objectives


Course agenda (1 of 5)
Day 1

Welcome
Unit 1 - Introduction to HACMP for AIX 5L
Unit 2 - Networking Considerations for High Availability
Exercise 1
Exercise 2


Figure 0-2. Course agenda (1 of 5)


Course agenda (2 of 5)
Day 2

Unit 3 - Shared Storage Considerations for High Availability


Unit 4 - Planning for Applications and Resource Groups
Unit 5 - HACMP Installation
Exercise 3
Exercise 4
Exercise 5


Figure 0-3. Course agenda (2 of 5)


Course agenda (3 of 5)
Day 3
Unit 6 - Initial Cluster Configuration
Exercise 6


Figure 0-4. Course agenda (3 of 5)


Course agenda (4 of 5)
Day 4

Unit 7 - Basic HACMP Administration


Unit 8 - Events
Exercise 7
Exercise 8


Figure 0-5. Course agenda (4 of 5)


Course agenda (5 of 5)
Day 5

Unit 9 - Integrating NFS into HACMP


Unit 10 - Problem Determination and Recovery
Exercise 9
Exercise 10


Figure 0-6. Course agenda (5 of 5)


Lab exercises
Points to note:
Work as a team and split the workload.
Manuals are available online.
HACMP software has been loaded and might have already been
installed.
TCP/IP and LVM have not been configured.
Each lab must be completed successfully before continuing, because each lab is a prerequisite for the next.
If you have any questions, ask your instructor.


Figure 0-7. Lab exercises


Student Guide font conventions


The following text highlighting conventions are used throughout this book:

Bold            Identifies file names, file paths, directories, user names, principals, menu paths, and menu selections. Also identifies graphical objects, such as buttons, labels, and icons that the user selects.

Italics         Identifies links to Web sites and publication titles; is used where the word or phrase is meant to stand out from the surrounding text; and identifies parameters whose actual names or values are to be supplied by the user.

Monospace       Identifies attributes, variables, file listings, SMIT menus, code examples, and command output that you would see displayed on a terminal, and messages from the system.

Monospace bold  Identifies commands, subroutines, daemons, and text that the user would type.


Figure 0-8. Student Guide font conventions


Course overview summary


Key points for the course:
There is ample time for the lab exercises.
Thorough design, planning, and teamwork are essential.
Prior AIX, LVM, Storage Management and TCP/IP experience
is assumed and required.


Figure 0-9. Course overview summary


Unit 1. Introduction to HACMP for AIX


What this unit is about
This unit introduces the concepts of High Availability and HACMP
(High Availability Cluster Multi-Processing) for AIX (Advanced
Interactive eXecutive).

What you should be able to do


After completing this unit, you should be able to:
Define High Availability and explain why it is needed
List the key considerations when designing and implementing a
High Availability cluster
Outline the features and benefits of HACMP for AIX
Describe the components of an HACMP for AIX cluster
Explain how HACMP for AIX operates in typical cases

How you will check your progress


Accountability:
Checkpoint

References
SC23-4864-10 HACMP for AIX, Version 5.4.1:
Concepts and Facilities Guide
http://www-03.ibm.com/systems/p/library/hacmp_docs.html
HACMP manuals


Unit objectives
After completing this unit, you should be able to:
Define High Availability and explain why it is needed
List the key considerations when designing and implementing
a high availability cluster
Outline the features and benefits of HACMP for AIX
Describe the components of an HACMP for AIX cluster
Explain how HACMP for AIX operates in typical cases


Figure 1-1. Unit objectives


Notes:
Objectives
In this unit, we introduce the concept of High Availability, examine why you might want
to implement a High Availability solution, and compare High Availability with some
alternative availability technologies.

HACMP terminology
This course uses the following terminology:
- HACMP means any version and release of the HACMP product.
- HACMP x means version x and any release of that version.
- HACMP x.y means a specific version and release.


1.1 High Availability concepts


High Availability and HACMP concepts


After completing this topic, you should be able to:
- Define high availability
- Recognize that eliminating single points of failure (SPOFs) is part of the HACMP implementation process
- Outline the features and benefits of HACMP for AIX
- Describe the HACMP concepts of topology and resources
- Give examples of topology components and resources
- Provide a brief description of the software and hardware components of a typical HACMP cluster


Figure 1-2. High availability and HACMP concepts


So, what is High Availability?


High Availability characteristics:
- The masking or elimination of both planned and unplanned downtime
- The elimination of single points of failure (SPOFs)
- Fault resilience and system hardening
- No specialized hardware requirement
- Workload fallover

[Figure: a production node/LPAR and a standby node/LPAR, reached by a client over a WAN; the workload can fall over from the production node to the standby node]

Figure 1-3. So, what is High Availability?


Notes:
High Availability characteristics
A High Availability solution ensures that the failure of any component of the solution, be it hardware, software, or system management, does not cause the application and its data to become inaccessible to the user community. This is achieved through the elimination or masking of both planned and unplanned downtime. High Availability solutions should eliminate single points of failure (SPOFs) through appropriate design, planning, selection of hardware, configuration of software, and carefully controlled change management discipline. High Availability does not mean that the application is never interrupted; this is why we say fault resilient rather than fault tolerant.


Eliminating single points of failure


Cluster object        Eliminated as a single point of failure by
----------------      ------------------------------------------------------------
Node                  Using multiple nodes
Power source          Using multiple circuits or uninterruptible power supplies
Network adapter       Using redundant network adapters
Network               Using multiple networks to connect nodes
TCP/IP subsystem      Using non-IP networks to connect adjoining nodes and clients
Disk adapter          Using redundant disk adapters or multipath hardware
Disk                  Using multiple disks with mirroring or RAID
Application           Adding a node for takeover; configuring an application monitor
VIO Server            Implementing dual VIO Servers
Site                  Adding an additional site

The fundamental goal of (successful) cluster design is the elimination of single points of failure (SPOFs).

Figure 1-4. Eliminating single points of failure


Notes:
Eliminating single points of failure
Each of the items in the left-hand column is a physical or logical component which, if it fails, renders the HA cluster's application unavailable.
Remember that, generally, some SPOFs are not eliminated. For example, most clusters are not designed to deal with the server room being flooded with water, or with the entire city being without electrical power for two weeks. Site recovery using HACMP/XD would be a possible solution here.
Focus on the art of the possible. In other words, spend your efforts dealing with SPOFs that can be reasonably handled.
Document the SPOFs that you have decided not to deal with; then you can review them from time to time to consider whether some of them now need to be dealt with (for example, site failures if the cluster becomes very important).


High availability clusters (HACMP base)


System p and AIX RAS features include:
- Application and Partition Mobility
- First Failure Data Capture (FFDC)
- Dynamic CPU Deallocation
- Flexible Service Processor
- Redundant Power and Cooling
- Error Correction Checking Memory
- Hot Swap Adapters
- Dynamic Kernel
- Journaled Filesystem
- Redundant Data Paths
- Dual Disk Adapters (MPIO)
- Data Mirroring and/or Striping
- Hot Swap / Hot Spare Storage
- Redundant Power/Cooling for Storage Arrays

With High Availability Clustering (HACMP):
- Protection against node and OS failure with redundant nodes
- Protection against NIC failure with redundant network adapters
- Protection against network failure with redundant networks
- Self-healing clusters with application monitoring
- Protection against site failure (typically limited by SAN infrastructure), or with no distance limitations using HACMP/XD

Figure 1-5. High availability clusters (HACMP base)


Notes:
High availability clustering
The High Availability solution addresses the fundamental weakness of both the stand-alone and the stand-alone enhanced storage configurations; that is, it has two of everything. If any component of the solution should fail, a redundant backup component is waiting to take over the workload. The systems that are clustered can be stand-alone systems or Logical Partitions (LPARs). Virtualization is supported as well, provided all of the requirements are met. Those will be pointed out later.
Do feel free to examine the high-availability solutions offered by our competitors. IBM's HACMP product has been ranked (and continues to be ranked) the leading high-availability solution for UNIX servers by D.H. Brown Associates (www.dhbrown.com) for many years. We are confident that by the end of this course, you'll also agree that HACMP 5 is a mature, robust, and feature-rich product that delivers significantly improved availability on the IBM System p platform.


Drawback
The HACMP 5 base product only partially solves the site SPOF: it covers the case where the data does not have to be replicated, which can be achieved with LVM mirroring using SAN technology.


What about site failure?


- Limited distance (LVM mirroring and SAN): HACMP for AIX
- Extended distance: geographic clustering solution (that is, HACMP/XD)
  - Distance unlimited
  - Application, disk, and network independent
  - Automated site failover and reintegration
  - A single cluster across two sites
  - Get more details in HACMP System Administration III (AU620)

[Figure: data replication between two sites, such as Toronto and Brussels, using Metro Mirror/PPRC, GLVM, or GeoRM]


Figure 1-6. So, what about site failure


Notes:
What about Site Failure and data replication?
Limited distance
The base product HACMP 5.2 and later allows you to create sites as long as you can
use LVM mirroring for redundancy. Using SAN technology, you can get limited distance
support for site failures.
Extended distance
The HACMP/XD (Extended Distance) priced feature provides three distinct software
solutions for disaster recovery. These solutions enable an HACMP cluster to operate
over extended distances at two sites. For more information, see the HACMP System
Administration III: Virtualization and Disaster Recovery course, AU620.
a. HACMP/XD for Metro Mirror/PPRC increases data availability for IBM TotalStorage
ESS/DS/SVC volumes that use Peer-to-Peer Remote Copy (PPRC) to copy data to
a remote site for disaster recovery purposes. HACMP/XD for Metro Mirror/PPRC

takes advantage of the PPRC fallover/fallback functions and HACMP cluster


management to reduce downtime and recovery time during disaster recovery. When
PPRC is used for data mirroring between sites, the physical distance between sites
is limited to the capabilities of the ESS/DS/SVC hardware.
b. HACMP/XD for Geographic Logical Volume Manager (GLVM) increases data
availability for IBM volumes that use GLVM to copy data to a remote site for disaster
recovery purposes. HACMP/XD for GLVM takes advantage of the following
components to reduce downtime and recovery time during disaster recovery:
AIX GLVM data mirroring and synchronization
TCP/IP-based unlimited distance network support
HACMP for AIX cluster management
Multiple data mirroring networks are supported, increasing availability and performance. Enhanced concurrent mode volume groups (in any type of HACMP resource group) are supported on each site's nodes, but concurrent access is not supported across sites.
Additionally, as HACMP/XD for GLVM is positioned as the replacement for HACMP/XD for HAGEO, it is possible to get started with GLVM as a means of replicating data across sites without making the commitment to HACMP/XD. This is referred to as standalone GLVM, is supported in AIX 5.3 and later, and is a no-cost function.
For a whitepaper on the configuration of GLVM, refer to
http://www.ibm.com/servers/aix/whitepapers/aix_glvm.html
c. HACMP/XD for HAGEO Technology uses the TCP/IP network to enable unlimited
distance for data mirroring between sites. (Note that although the distance is
unlimited, practical restrictions exist on the bandwidth and throughput capabilities of
the network). This technology is based on the IBM High Availability Geographic
Cluster for AIX (HAGEO) product. HACMP/XD for HAGEO Technology extends an
HACMP for AIX cluster to encompass two physically separate data centers. Data
entered at one site is sent across a point-to-point IP network and mirrored at a
second, geographically distant location.
HACMP/XD is independent of the application, disk technology, data, and distances
between the sites. HACMP/XD can work across any network that supports TCP/IP and
offers automated fallover of applications and data from one site to another (maximum
two sites) in the event of a site disaster.
In the past, HACMP/XD was called HAGEO. Try both names if you're searching for information on HACMP/XD.


IBM's HA solution for AIX


HACMP for AIX characteristics:
Stands for High Availability Cluster Multi-processing
Is based on cluster technology (RSCT)
Provides two environments (which can co-exist simultaneously):
Serial (High Availability): the process of ensuring that an application is
available for use through the use of serially accessible shared data and
duplicated resources
Parallel (Cluster Multiprocessing): concurrent access to shared data


Figure 1-7. IBM's HA solution for AIX


Notes:
HACMP characteristics
IBM's HACMP product is a mature and robust technology for building a high-availability solution. A high-availability solution based upon HACMP provides automated failure detection, diagnosis, recovery, and reintegration. With an appropriate application, HACMP can also work in a concurrent access or parallel processing environment, thus offering excellent horizontal scalability.


Fundamental HACMP concepts


- Topology: physical, networking-centric components
- Resources: entities that are being made highly available
- Resource group: a collection of resources, which HACMP controls as a single unit
  - A given resource can appear in, at most, one resource group
  - Resource group policies:
    - Startup policy: determines which node the resource group is activated on
    - Fallover policy: determines the target node when there is a failure
    - Fallback policy: determines fallback behavior
- Customization: the process of augmenting HACMP, typically by implementing scripts
  - Minimum: application start and stop scripts
  - Optional:
    - Application monitoring scripts (highly recommended!)
    - Event customization: notification, pre- and post-event scripts, recovery scripts, user-defined events, time until warning (config_too_long timeout)


Figure 1-8. Fundamental HACMP concepts


Notes:
Terminology
A clear understanding of the above concepts and terms is important as they appear
over and over again both in the remainder of the course and throughout the HACMP
documentation, log files and SMIT screens.


A highly available cluster


Fundamental concepts

[Figure: a two-node cluster; each node runs clstrmgr, both nodes attach to shared storage, and a resource group falls over from one node to the other]

A cluster is comprised of physical components (topology) and logical components (resource groups and resources).

Figure 1-9. A highly available cluster


Notes:
Fundamental concepts
HACMP is based on the fundamental concepts of cluster, resource group, and cluster
manager (clstrmgr).

Cluster
A cluster is comprised basically of nodes, networks, and network adapters. These
objects are referred to as Topology objects.

Resource group
A resource group is typically comprised of an application, network address, and volume
group using shared disks. These objects are referred to as Resource objects.


clstrmgr
The cluster manager daemons are the software components that communicate with each other to control which node a resource group is activated on, and where the resource group is moved on a fallover, based on policies set up by the administrator. The clstrmgr daemon runs on all of the nodes in the cluster.
The figure shows a simple two-node cluster, using shared disk, that provides fallover for a single application.
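Once a cluster is up, you can ask the cluster manager and its utilities what they are doing. A minimal sketch (the commands and paths are the HACMP 5.x defaults; the exact output format varies by release):

# Query the cluster manager subsystem on a node (Korn shell):
lssrc -ls clstrmgrES                      # shows the cluster manager's current state
/usr/es/sbin/cluster/utilities/clRGinfo   # shows where each resource group is online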


HACMP's topology components (1 of 2)

[Figure: a cluster of nodes joined by an IP network and a non-IP network; each node has communication interfaces (IP) and communication devices (non-IP)]

The topology components consist of a cluster, nodes, and the technology that connects them together.

Figure 1-10. HACMP's topology components (1 of 2)


Notes:
Topology components
A cluster's topology consists of the cluster, nodes (pSeries servers), networks (connections between the nodes), the communication interfaces (for example, Ethernet or token-ring network adapters), and the communication devices (for example, /dev/rhdisk for heartbeat on disk, or /dev/tty for RS232).

Nodes
In the context of HACMP, the term node means any IBM pSeries system that is a
member of a high-availability cluster running HACMP.


Networks
Networks consist of IP and non-IP networks. The non-IP networks ensure that cluster monitoring can continue if there is a total loss of IP communication; configuring non-IP networks in an HACMP cluster is strongly recommended.
Networks can also be logical or physical. Logical networks have been used in IBM SP environments when different frames were in different subnets but needed to be treated as if they were in the same network for HACMP purposes.
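As an illustration of how little a non-IP network needs, the heartbeat-on-disk variety only requires a disk that both nodes can see in an enhanced concurrent mode volume group. A minimal sketch (the disk and volume group names are hypothetical, and the bos.clvm fileset is assumed to be installed):

# Create an enhanced concurrent capable volume group on a shared disk;
# a disk heartbeat network can then be defined over one of its disks.
mkvg -n -C -y heartbeatvg hdisk3   # -C: enhanced concurrent capable, -n: no auto-varyon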


HACMP's topology components (2 of 2)

Node
- Any-to-any, including LPARs
- The minimum number of physical adapters needed for redundancy must be considered

Networking
- Ethernet / EtherChannel
- Physical and virtual

Non-IP
- Heartbeat on disk, RS232/422, target-mode SCSI

Shared storage
- SCSI or Fibre Channel (SAN)
- Physical and virtual (Virtual SCSI)

[Figure: examples show servers on Ethernet and EtherChannel networks, non-IP connections using heartbeat on disk and RS232/422, and shared DS4000/DS8000 storage attached through a Fibre Channel SAN]

Figure 1-11. HACMP's topology components (2 of 2)


Notes:
Supported nodes
As you can see, the range of systems that supports HACMP is, well, everything. The only requirement is that the system should have at least four spare adapter slots (two for network adapters and two for disk adapters). Any other adapters (for example, graphics adapters) occupy additional slots. The internal Ethernet adapter fitted to most entry-level pSeries servers cannot be included in the calculations. It should be noted that even with four adapter slots free, there would still be a single point of failure, because the cluster is able to accommodate only a single TCP/IP local area network between the nodes.
HACMP 5 works with pSeries servers in a no-single-point-of-failure server
configuration. HACMP for AIX supports the System p models that are designed for
server applications and that meet the minimum requirements for internal memory,
internal disk, and I/O slots. For a current list of systems that are supported with the
version of HACMP that you want to use, see the Sales manual at

www.ibm.com/common/ssi, choose the HW & SW desc (Sales Manual, RPQ) option from the pull-down menu, and click Advanced Search. Type hacmp in the Title: field, go to the bottom of the screen, and click Search. The list of HACMP documents will display.

Unsupported nodes and adapters


With the introduction of AIX V5.2, the micro channel range of systems is excluded,
because AIX V5.2 does not support micro channel systems. However, you can still run
AIX V5.1 with HACMP 5.2 (and earlier) on micro channel systems.
On most IBM System p5 and IBM System i5 servers, the integrated serial ports are not
enabled when the HMC ports are connected to a Hardware Management Console.
Either the HMC ports or the integrated serial ports can be used, but not both. Moreover,
the integrated serial ports are supported only for modem and async terminal
connections. Any other applications using serial ports, including HACMP, require a
separate serial port adapter to be installed in a PCI slot. Consult the Sales Manual if you
intend to use an integrated serial port.

LPAR support
There is also support for dynamically adding LPAR resources in AIX V5.2 or later LPAR
environments to take advantage of Capacity Upgrade of Demand (CUoD).
HACMP 5.2 (and later) supports Virtual SCSI (VSCSI) and Virtual LAN (VLAN) on
POWER5 (IBM System p5 and IBM System i5). See
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390 for
more details.

Supported networks
HACMP 5 supports client users on a LAN using TCP/IP. HACMP monitors and performs IP address switching for the following TCP/IP-based communications adapters on cluster nodes:
- Ethernet
- EtherChannel
- Token ring
- FDDI
- SP Switches
- ATM
- ATM LAN Emulation

HACMP also supports non-IP networks, such as RS232/422, Target Mode SCSI (TMSCSI), Target Mode SSA (TMSSA), and Heartbeat on Disk (using enhanced concurrent mode volume groups).
It is highly recommended to have both IP and non-IP networks defined to HACMP. For a list of specific adapters, you can consult the Sales Manual.

Unsupported networks
The following networks are not supported:
- Serial Optical Channel Converter (SOCC)
- SLIP
- Fibre Channel Switch (FCS)
- 802_ether
- Virtual IP Address (VIPA) facility of AIX
  The pseudo IP address provided by VIPA cannot be reliably monitored by RSCT or HACMP. The failure of the underlying devices that are used to service the pseudo device cannot be coordinated with HACMP recovery processing. VIPA can be configured and used outside of HACMP, but when using these facilities on an HACMP cluster node, ensure that they are configured on subnets that are completely different from the subnets used by HACMP. If any VIPA addresses are configured on the same subnet that is used for an HACMP network, HACMP might not be able to properly detect failures and manage recovery.
- Aggregate IP interface with the SP Switch2
  With the SP Switch2 you have css0 and css1; PSSP allows you to configure an aggregate IP switch. This is an ml0 interface with its own IP address. The ml0 interface is not supported by HACMP.
- IPv6

Adapters versus devices


HACMP distinguishes between communication adapters and communication devices for network support. For IP networks, the term communication adapter is used; for non-IP networks, the term communication device is used. This is discussed further in the networking unit of this course.

Shared storage environments


HACMP is largely unconcerned about the disk storage that you select. Supported
technologies include Fibre Channel and SCSI. Most IBM storage is supported with
HACMP, and third-party storage, such as EMC and Hitachi, can be used, although
some custom modifications may be required.
It is also important to note that data availability is not ensured by HACMP. This must be
done either through the redundancy features of a storage device or through AIX LVM
mirroring.
For a complete list of supported devices, see the HACMP 5.4.1 Announcement Letter
at:
http://www.ibm.com/systems/p/ha.


HACMP's resource components

[Figure: a resource group contains resources such as an application server, a service IP address, a volume group, and a file system, together with a node list and run-time policies]

Figure 1-12. HACMP's resource components


Notes:
Resource group
A resource group is a collection of resources treated as a unit along with the nodes that
they can potentially be activated on and what policies the cluster manager should use to
decide which node to choose during startup, fallover, and fallback. A cluster can have
more than one resource group (usually one for each application), thus allowing for very
flexible configurations. Resource groups will be covered in more detail in Unit 4.

Resources
Resources are logical components that can be put into a resource group. Because they
are logical components, they can be moved without human intervention.
The resources shown in the visual are a typical set of resources used in resource
groups, such as:


- Service IP Address: Users need to be able to connect to the application. Typically, the users are given an IP address or hostname to connect to. This IP address/hostname becomes a resource in the resource group because it must be associated with the same node that is running the application. The IP address/hostname resource is referred to as the Service IP Label in the resource group. More than one Service IP Label can be configured for a resource group.
- Volume Group: If the application requires shared disk storage, this storage is contained within volume groups.
- Filesystem: An application often requires that certain filesystems be mounted.
- Application Server: The application itself must be part of the resource group. Strictly speaking, the application server consists of scripts that start and stop the application as required by HACMP. The term can be confusing because application vendors popularly use "application server" to describe a layer in their implementation; in HACMP, it is simply an object that points to the application's start and stop methods (scripts).

In addition to the resources listed in the figure, you can also associate the following with a resource group:
- NFS mounts: An application might require that an NFS filesystem be mounted by the node running the application.
- NFS exports: A resource group might be configured to provide NFS server services by NFS-exporting some of its filesystems.

Finally, attributes, such as "Force varyon of volume groups", can be assigned. These are covered later in this course, in Unit 6.
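To make the application server idea concrete, here is a minimal sketch of the user-written start and stop scripts that an application server object points to. The application name, paths, and commands are hypothetical; real scripts must be tailored to the application:

#!/bin/ksh
# start_appA -- hypothetical start script for an application "appA".
# HACMP runs this script when it brings the resource group online;
# it should start the application and exit 0 on success.
/opt/appA/bin/appAd -config /opt/appA/etc/appA.conf &
exit 0

#!/bin/ksh
# stop_appA -- hypothetical stop script for the same application.
# HACMP runs this script when it takes the resource group offline;
# it must stop the application cleanly and exit 0 when done.
/opt/appA/bin/appAd -shutdown
exit 0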


What is HACMP?
An application which:
- Controls where resource groups run
- Monitors and reacts to events
- Provides tools for cluster-wide configuration and synchronization
- Relies on other AIX subsystems (ODM, LVM, RSCT, TCP/IP, SRC, and so on)

[Figure: the Cluster Manager subsystem (clstrmgrES) contains topology, resource, event, and SNMP managers; it works with clcomdES, RSCT (the topsvcs, grpsvcs, and RMC subsystems), and snmpd, while clinfoES serves status to clstat]

Figure 1-13. What is HACMP?


Notes:
HACMP core components
HACMP comprises a number of software components:
- The cluster manager (clstrmgr) is the core process that monitors cluster membership. The cluster manager includes a topology manager to manage the topology components, a resource manager to manage resource groups, and an event manager with event scripts; it works through the RMC facility and RSCT to react to failures.
- In HACMP 5.3 and later, the cluster manager contains the SNMP SMUX peer function (previously provided by clsmuxpd) for the cluster manager MIBs, which allows SNMP-based monitoring to be done manually or by using an SNMP manager, such as Tivoli, BMC, or OpenView.
- The clinfo process provides an API for communicating between the cluster manager and your application. Clinfo also provides remote monitoring capabilities and can run a script in response to a status change in the cluster. Clinfo is an optional

process that can run on both servers and clients (the source code is provided). The clstat command uses clinfo to display status via ASCII, X Window, or Web browser interfaces.
- In HACMP 5, clcomdES allows the cluster managers to communicate in a secure manner without using rsh and the /.rhosts file.
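As a quick sketch of how these components are typically exercised from the command line (the paths are the HACMP/ES 5.x defaults; flags and output vary by release):

# Display cluster status through clinfo in ASCII mode:
/usr/es/sbin/cluster/clstat -a
# Verify that the cluster communication daemon is active:
lssrc -s clcomdES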


Additional features of HACMP


[Figure: utilities surrounding clstrmgrES include the Configuration Assistant, Online Planning Worksheets (OLPW), SMIT via Web, verification and auto tests, C-SPOC, DARE, SNMP, Tivoli integration, and application monitoring]

HACMP is shipped with utilities to simplify configuration, monitoring, customization, and cluster administration.


Figure 1-14. Additional features of HACMP


Notes:
Additional features
HACMP also has additional software to provide facilities for administration, testing, remote monitoring, and verification:
- Application monitoring should be used to monitor the cluster's applications and restart them should they fail. Multiple monitors can be defined for an application, including monitoring of the startup.
- Configuration changes can be made to the cluster while the cluster is running. This facility is known as Dynamic Automatic Reconfiguration Event (DARE for short).
- C-SPOC is a series of SMIT menus that allow AIX-related cluster tasks to be propagated across all nodes in the cluster. It includes an RG_Move facility, which allows a resource group to be placed offline or on another node without stopping the cluster manager.


- Administration is made easier by the use of Online Planning Worksheets (OLPW) and a Web-based SMIT interface.
- A two-node configuration assistant facility enables you to configure an HACMP cluster with very little input.
- Verification is provided at HACMP startup time, as part of synchronization, as a manual process, and as a daily Automatic Cluster Configuration Monitoring function.
- There is an automatic correction facility, which is covered in more detail in the HACMP Administration II: Administration and Problem Determination course.
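To illustrate application monitoring, here is a minimal sketch of a custom (user-defined) monitor method. HACMP runs the method periodically; an exit status of 0 reports the application healthy, and a non-zero status triggers the monitor's recovery actions. The process name is hypothetical:

#!/bin/ksh
# monitor_appA -- hypothetical custom application monitor for "appA".
# Exit 0 if the application is healthy; a non-zero exit tells HACMP
# to restart the application or escalate, per the monitor's policy.
if ps -e | grep -w appAd > /dev/null 2>&1
then
    exit 0    # process found: report healthy
else
    exit 1    # process missing: report failure
fi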


Some assembly required


HACMP can be used out of the box; however, some assembly is required.

Minimum:
- Application start/stop/monitor scripts
Optional:
- Customized pre/post event scripts
- Reaction to events
- Error notification methods
- User-defined events (UDEs)
- Cluster state change

HACMP's flexibility allows for complex customization in order to meet availability goals.


Figure 1-15. Some Assembly Required


Notes:
Not just HACMP
The final high-availability solution is more than just HACMP. A high-availability solution
comprises a reliable operating system (AIX), applications that are tested to work in a
high-availability cluster, storage devices, appropriate selection of hardware, trained
administrators, and thorough design and planning.

Customization required
HACMP is shipped with event scripts (Korn Shell scripts) which handle the failure
scenarios.
Application Server start/stop scripts are written to control the application(s) based on
the status of the cluster nodes. Most often, all the script writing that is required to
integrate an application into the cluster is done in the Application Server start/stop
scripts.

Smart Assists have been provided since HACMP 5.2 to help ease the customization for the applications that they address. In HACMP 5.4 and later, an API is provided that allows third-party application vendors to write Smart Assists.
In the rare circumstance where you have a requirement to customize some special fallover behavior, this is done with pre- and post-event scripts.
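The shape of such a script is simple: HACMP passes the event name and the event's parameters as arguments. A minimal, hypothetical sketch of a pre-event (or notify) script that just records what is about to happen:

#!/bin/ksh
# pre_event_notify -- hypothetical pre-event script sketch.
# HACMP invokes it with the event name followed by the event's
# arguments, before the corresponding event script runs.
EVENT=$1
shift
print "$(date) about to process event $EVENT $*" >> /tmp/hacmp_events.log
exit 0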


Let's review
1. Which of the following items are examples of topology components in HACMP? (Select all that apply.)
   a. Node
   b. Network
   c. Service IP label
   d. Hard disk drive
2. True or False? All nodes in an HACMP cluster must have roughly equivalent performance characteristics.
3. Which of the following is a characteristic of high availability?
   a. High availability always requires specially designed hardware components.
   b. High availability solutions always require manual intervention to ensure recovery following fallover.
   c. High availability solutions never require customization.
   d. High availability solutions use redundant standard equipment (no specialized hardware).
4. True or False? A thorough design and detailed planning is required for all high availability solutions.


Figure 1-16. Let's review


1.2 What does HACMP do?


What does HACMP do?


After completing this topic, you should be able to:
- Describe the failures that HACMP detects directly
- Provide an overview of the standby and takeover cluster configuration options in HACMP
- Describe some of the considerations and limits of an HACMP cluster


Figure 1-17. Topic 2 objectives: What does HACMP do?


Notes:
In this topic, we take a look at what HACMP does.


Just what does HACMP do?

HACMP functions:
- Monitors the states of nodes, networks, network adapters, and devices
- Strives to keep resource groups highly available
- Optionally, monitors the state of the applications, and can be customized to react to every possible failure

Figure 1-18. Just What Does HACMP Do?


Notes:
HACMP basic functions
HACMP detects three kinds of network-related failures:
a. A communications adapter or device failure
b. A node failure (all communication adapters/devices on a given node)
c. A network failure (all communication adapters/devices on a given network)
HACMP also interfaces with the AIX error log to respond to the loss of quorum for a volume group when the loss is detected by the LVM. Most other failures are handled outside of HACMP, either by AIX or by the LVM, and can be handled in HACMP via customization.


What happens when something fails?

How the cluster responds to a failure depends on what has failed, what the resource group's fallover policy is, and whether there are any resource group dependencies:
- Typically, another equivalent component takes over the duties of the failed component (for example, another node takes over from a failed node).

Figure 1-19. What happens when something fails?


Notes:
How HACMP responds to a failure
HACMP generally responds to a failure by using a still available component to take over
the duties of the failed component. For example, if a node fails, then HACMP initiates a
fallover, an action that consists of moving the resource groups that were previously on
the failed node to a surviving node. If a Network Interface Card (NIC) fails, HACMP
usually moves any IP addresses being used by clients to another available NIC. If there
are no remaining available NICs, HACMP initiates a fallover. If only one resource group
is affected, then only the one resource group is moved to another node.


What happens when a problem is fixed?

How the cluster responds to the recovery of a failed component depends on what has recovered, what the resource group's fallback policy is, and the resource group dependencies:
- Typically, administrators need to indicate or confirm that the fixed component is approved for use. Some components are integrated automatically; for instance, when a communication interface recovers.

Figure 1-20. What happens when a problem is fixed?


Notes:
How HACMP responds to a recovery
When a previously failed component recovers, it must be reintegrated back into the
cluster (reintegration is the process of HACMP recognizing that the component is
available for use again). Some components, such as NICs, are automatically
reintegrated when they recover. Other components, such as nodes, cannot be
reintegrated until the cluster administrator explicitly requests the reintegration (by
starting the HACMP daemons on the recovered node, starting cluster services, and
possibly moving the resource group, or bringing it online).
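As a sketch, reintegrating a repaired node usually comes down to starting cluster services on it again; the SMIT fast paths below are the usual HACMP 5.x entry points (menu names vary slightly by release):

# On the recovered node (Korn shell):
smit clstart    # Start Cluster Services (reintegrates the node)
smit clstop     # the corresponding fast path to stop cluster services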


Standby (active/passive) with fallback

[Figure: a two-node cluster with nodes USA and UK, where one node (USA) is the primary for the resource group. If node USA fails, the resource group falls over to UK; when USA returns, the resource group falls back to it. If node UK fails, nothing changes. The resource group can be configured to come online on the primary node or on any node.]


Figure 1-21. Standby (active/passive) with fallback


Notes:
Standby
Standby configurations are configurations where one (or more) nodes have no
workload.

Standby node with one node primary


In a two-node cluster, there is a single application (that is, resource group), which must run as much as possible on a primary or home node; the node with no workload is the secondary, standby, or backup node. To accomplish this, there would be a startup policy to indicate which node is primary (or home), a fallover policy to allow fallover if the primary node fails, and a fallback policy set so that the resource group automatically falls back to the primary node when the primary node recovers.


Drawbacks
- One node is not used (this is ideal for availability but not from a utilization
perspective).
- A second outage on the fallback is possible.

Extending this concept to more nodes


This concept can be extended to multiple nodes in two ways:
i.  All nodes, except one, have applications, and the one node is a standby node. This could lead to performance problems if more than one application must be moved to the standby node.
ii. The resource group could be configured to have multiple layers of backup nodes. The resource group would usually be configured to run on the highest priority (most preferred) available node.
Multiple layers of backup nodes are possible; the fallover policy determines which node. For example: primary -> secondary -> tertiary -> quaternary -> quinary -> senary -> septenary -> octonary -> nonary -> denary...
A tidbit for the wordsmiths in the audience: The sequence, which starts primary,
secondary, and tertiary, continues with quaternary, quinary, senary, septenary,
octonary, nonary, and denary. There is no generally accepted word for eleventh
order although duodenary means twelfth order. The word for twentieth order is
vigenary.


Standby (active/passive) without fallback

[Figure: nodes USA and UK. When node USA fails, the resource group moves to UK; when USA returns, the resource group stays where it is. Avoiding the fallback eliminates another outage and reduces downtime; the same applies when UK fails and returns.]


Figure 1-22. Standby (active/passive) without fallback


Notes:
Minimize downtime
A resource group can be configured to not fall back to the primary node (or any other
higher priority node) when it recovers. This avoids the second outage, which results
when the fallback occurs.
The cluster administrator can request that HACMP move the resource group back to
the higher priority node at an appropriate time or it can simply be left on its current node
indefinitely (an approach that calls into question the terms primary and secondary, but
which is actually quite a reasonable approach in many situations).

Extending to more nodes


This can result in multiple applications ending up on the node that stays up the longest.


Mutual takeover: Active/Active

[Figure: nodes USA and UK each run their own resource group (A on USA, B on UK). If either node fails, the surviving node runs both A and B; when the failed node returns, its resource group falls back. This configuration is very common, and no node/LPAR is left idle.]


Figure 1-23. Mutual takeover: Active/Active


Notes:
Takeover
Takeover configurations imply that there is workload on all nodes which might or might
not be under the control of HACMP, but that a node can take over the work of another
node in the cluster.

Mutual takeover
An extension of the primary node with a secondary node configuration is to have two
resource groups, one failing from right to left and the other failing from left to right. This
is referred to as mutual takeover.
Mutual takeover configurations are very popular configurations for HACMP because
they support two highly available applications at a cost, which is not that much more
than would be required to run the two applications in separate stand-alone
configurations.
Copyright IBM Corp. 1998, 2008

Unit 1. Introduction to HACMP for AIX

Course materials may not be reproduced in whole or in part


without the prior written permission of IBM.

1-37

Student Notebook

Additional costs
Note that there are at least a few additional costs:
- Each cluster node probably needs to be somewhat larger than the stand-alone nodes, because each must be capable of running both applications, possibly in a slightly degraded mode, should one of the nodes fail.
- Additional software licenses might be required for the applications when they run on their respective backup nodes (a potentially significant cost item, which is often forgotten in the early cluster planning stages).
- HACMP for AIX license fees.
- This is not intended to be an all-inclusive list of additional costs.


Concurrent: Multiple active nodes


USA, Germany, and UK are all running Application A, each using a separate IP address.
- If nodes fail, the application remains continuously available as long as there are surviving nodes to run on. Repaired nodes resume running their copy of the application.
- The application must be designed to run simultaneously on multiple nodes.
- This has the potential for essentially zero downtime.

Figure 1-24. Concurrent: multiple active nodes


Notes:
Concurrent mode
HACMP also supports resource groups in which the application is active on multiple
nodes simultaneously. In such a resource group, all nodes run a copy of the application
and share simultaneous access to the disk. This style of cluster is often referred to as a
concurrent access cluster or concurrent access environment.

Service labels
Since the application is active on multiple nodes, each node has its own service IP label. The client systems must be configured to select, randomly or otherwise, which service IP address to communicate with, and be prepared to switch to another service IP address should the one that they're dealing with stop functioning (presumably because the node with that service IP address has failed). It is also possible to configure an IP multiplexer between the clients and the cluster which redistributes the client

sessions to the cluster nodes, although care must be taken to ensure that the IP
multiplexer does not itself become a single point of failure.

How to choose
Whether this mode of operation can be used for your application is a function of the
application, not of HACMP.


Points to ponder
Resource groups:
- Must be serviced by at least two nodes
- Can have different policies
- Can be migrated (manually or automatically) to rebalance loads

Clusters:
- Must have at least one IP network and one non-IP network
- Need not have any shared storage
- Can have any combination of supported nodes *
- Can be split across two sites, which might or might not require replicating data (HACMP/XD)

Applications:
- Can be restarted via monitoring
- Must be manageable via scripts (start/restart and stop)

* Application performance requirements and other operational issues almost certainly impose practical constraints on the size and complexity of a given cluster.

Figure 1-25. Points to ponder


Notes:
Importance of planning
Planning, designing, configuring, testing, and operating a successful HACMP cluster requires considerable attention to detail. In fact, a careful, methodical approach to all the phases of the cluster's life cycle is probably the most important factor in determining the ultimate success of the cluster.

Methodical approach
A careful, methodical approach considers the relevant points above, along with many other issues that are discussed this week or in the HACMP documentation.


Other considerations for HACMP


- Design, planning, testing
- Focus on service and availability
- Apply appropriate risk analysis
- Disciplined system administration practices
- Documented operational procedures

[Figure: high availability rests on systems management, people, data, networking, hardware, environment, and software, building toward continuous availability and continuous operation]

Figure 1-26. Other considerations for HACMP


Notes:
Design, planning, testing
Design, planning, and testing are all critical steps that cannot be skipped when implementing a high-availability solution. As you'll learn this week, there should be no shortage of time spent designing, planning, and documenting your proposed cluster solution. Time well spent in these areas of the project reduces the amount of unneeded administration time required to manage your cluster solution.
Unfortunately, it is too often the case that there isn't enough time to do it right the first time, but there is always time enough to do it over when things go wrong.
Remember: the reason we worry about node failures, disk failures, and such is not that we are particularly concerned with the failures themselves, but rather with the impact that those failures might have.


Focus on service and availability


Focus on making the service highly available, and view the hardware and software as
the tools that you use in accomplishing this goal. Users are not interested in highly
available hardware or software; they are interested in the availability of services, so
hardware and software should be used to make the services highly available. Cluster
design decisions should be based on whether they contribute to availability (that is,
eliminate a SPOF) or detract from availability (gratuitous complexity).

Apply appropriate risk analysis


Because it is probably not possible to fix all SPOFs, the risk analysis process can be
used for deciding if a defensive measure is warranted. The process can be applied to
identify those that must be dealt with as well as those that can be tolerated.
Risk analysis involves the following steps:
a. Identify relevant policies. What existing risk tolerance policies are available?
b. Study current environment. An example would be that the server room is on a
properly sized UPS but there is no disk mirroring today.
c. Perform requirements analysis. How much availability is required and what is the
acceptable likelihood of a long outage?
d. Hypothesize vulnerabilities. What could go wrong?
e. Identify and quantify risks. Estimate the cost of a failure versus the probability that it
occurs.
f. Evaluate countermeasures. What does it take to reduce the risk or consequence to
an acceptable level?
g. Make decisions, create a budget, and plan the cluster.
h. Do not be fooled by the apparent determinism (that is, the formula that always
seems to come up with an answer) of risk analysis:
- It simply is not possible to predict all the possible or even likely vulnerabilities.
- Estimating the likelihood of a vulnerability occurring can be extremely difficult.
- Some vulnerabilities do not lend themselves to any sort of quantifiable analysis.
For example, if there is a genuine risk that someone could die, then the cost of
this sort of failure would be irrelevant in any meaningful sense.
Finally, do not get trapped into a mode of thinking in which all conceivable risk of
outages must be eliminated. Such a goal is, in general, simply impossible to attain with
any technology.
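
To make step e concrete, consider a small worked example (the figures are invented purely for illustration): if an outage would cost roughly $500,000 and its estimated probability is 2% per year, the expected annual loss is 0.02 x $500,000 = $10,000. A countermeasure costing $4,000 per year that halves that probability reduces the expected loss by $5,000 per year and is therefore worth funding; a $20,000-per-year countermeasure with the same effect is not, unless unquantifiable factors such as safety or reputation dominate, as noted above.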

Disciplined system administration practices


In a cluster environment, it is very easy to run commands that interfere with the
availability software, to forget to propagate changes to all nodes, or to hand operations
over to a person who does not understand the cluster environment. So, discipline and
documentation are required.


Things HACMP does not do

- Back-up and restoration
- Time synchronization
- Application-specific configuration
- System administration tasks unique to each node

Figure 1-27. Things HACMP Does Not Do

Notes:
Things HACMP does not do
HACMP does not automate your backups, nor does it keep time in sync between the
cluster nodes. These tasks require further configuration and software; for example,
Tivoli Storage Manager for backup and a time protocol daemon such as xntpd for time
synchronization. A sketch of configuring time synchronization follows.
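
A minimal sketch of setting up time synchronization on each cluster node; the server name is a placeholder for your site's actual time source:

Add the time server to /etc/ntp.conf on each node:
    server ntp.example.com
Then start the daemon and enable it across reboots:
# startsrc -s xntpd
# chrctcp -S -a xntpd

Synchronized clocks do not change fallover behavior, but they make correlating HACMP and application log entries across nodes far easier.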


When is HACMP not the correct solution?


Zero downtime required
- Maybe a fault tolerant system is the correct choice.
- Availability 7x24x365; HACMP occasionally needs to be shut down for maintenance.
- Life-critical environments.
Security issues
- Too little security: many people can change the environment.
- Too much security: C2 and B1 environments might not allow HACMP to function as
designed.
Unstable environments
- HACMP cannot make an unstable and poorly managed environment stable.
- HACMP tends to reduce the availability of poorly managed systems.

Figure 1-28. When is HACMP not the correct solution?

Notes:
Zero downtime
An example of a zero-downtime requirement is an intensive care unit. Also, HACMP is
not designed to handle many failures at once.

Security issues
One security issue that is now addressed is the need to eliminate .rhosts files. Also,
better encryption is now possible for inter-node communications, but this might not be
enough for some security environments.

Unstable environments
The prime cause of problems with HACMP is poor design, planning, implementation,
and administration. If you have an unstable environment, with poorly trained
administrators, easy access to the root password, and a lack of change control,
HACMP is not the solution for you.
With HACMP, the only thing more expensive than employing a professional to plan,
design, install, configure, customize, and administer the cluster is employing an
amateur.
Other characteristics of poorly managed systems are:
- Lack of change control
- Failure to treat the cluster as a single entity
- Too many cooks
- Lack of documented operational procedures


What do we plan to achieve this week?


Your mission this week is to build a two-node mutual takeover highly
available cluster using two previously separate AIX systems, each of
which has an application which needs to be made highly available.

Figure 1-29. What do we plan to achieve this week?

Notes:
Goals
During this week you will design, plan, configure, customize, and administer a two-node
high-availability cluster running HACMP 5.4.1 on an AIX system.
You will learn how to build a standby environment for one application as well as a
mutual takeover environment for two applications. In the mutual takeover environment,
each system will eventually be running its own highly available application, and
providing fallover back-up for the other system.
Some classroom environments will involve creating the cluster on a single pSeries
system between two LPARs. Although this is not a recommended configuration for
production, it provides the necessary components for a fruitful HACMP configuration
experience.


Overview of the implementation process


Plan and configure AIX
- Eliminate single points of failure:
  - Storage (adapters, LVM volume group, filesystem)
  - Networks (IP interfaces, /etc/hosts, non-IP networks, and devices)
  - Application start and stop scripts
Install the HACMP filesets (Note: 5.3 and earlier require a reboot!)
Configure the HACMP environment
- Topology: cluster, node names, HACMP IP and non-IP networks
- Resources and resource groups: identify name, nodes, policies
  - Resources: application server, service label, VG, filesystem
Synchronize, then start HACMP
Note: If using two nodes and one application, 'Configure the HACMP environment' can
be done in one step.
Figure 1-30. Overview of the implementation process

Notes:
Implementation process
The process should include at least the following:
- Work as a team. It cannot be stressed enough that it will be necessary to work with
others when you build your HACMP cluster in your own environment. Practice here
will be useful.
- Look at the AIX environment:
  - For storage, plan the adapters and LVM components required for the application.
  - For networks, plan the communication interfaces and devices, name resolution
via /etc/hosts, and the service address for the application.
  - For the application, build start and stop scripts and test them outside the control
of HACMP (a sketch follows this list).
- Install the HACMP for AIX software and reboot.
- Configure the topology and resource groups (and resources).

- Synchronize, start, and test.
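
As mentioned above, the application start and stop scripts are yours to write. A minimal sketch of a start script follows; the user name, paths, and log file are assumptions, and the stop script would mirror it with the application's stop command:

#!/bin/ksh
# start_app: run by HACMP with no arguments when the resource
# group comes online; HACMP examines only the exit status.
su - appuser -c "/opt/app/bin/start_server" >> /tmp/start_app.log 2>&1
exit 0

Test both scripts by hand on every node that can host the resource group before defining them to HACMP as an application server.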


Hints to get started


(Figure: an example two-node HACMP cluster design for the ABC company,
summarized below.)

Hints:
- Draw a diagram.
- Use (online) planning sheets.
- Focus on eliminating SPOFs.
- Always factor in a non-IP network.
- Ensure that you have multipath access to shared storage devices.
- Document a test plan.
- Test the cluster carefully.
- Be methodical.

Node A IP labels (public network, all netmasks 255.255.255.0):
  Service   database     192.168.9.3
  Boot      nodeaboot    192.168.9.4
  Standby   nodeastand   192.168.254.3

Node B IP labels (public network, all netmasks 255.255.255.0):
  Service   webserv      192.168.9.5
  Boot      nodebboot    192.168.9.6
  Standby   nodebstand   192.168.254.3

Node A definitions:
  Node name      = nodea
  Resource group = dbrg (application: database; cascading; A-B priority 1,2; CWOF = yes)
  tmssa device   = a_tmssa (/dev/tmssa1)
  Serial device  = a_tty (/dev/tty1)

Node B definitions:
  Node name      = nodeb
  Resource group = httprg (application: http; cascading; B-A priority 2,1; CWOF = yes)
  tmssa device   = b_tmssa (/dev/tmssa2)
  Serial device  = a_tty (/dev/tty1)

A tmssa network and a serial (rs232) network join the nodes. Each node has its own
rootvg (raid1, 9.1 GB); the shared volume groups are dbvg (Raid5, 100 GB) and
httpvg (Raid1, 9 GB).

Resource group databaserg contains:
  Volume group    = dbvg (hdisk3, hdisk4, hdisk5, hdisk6, hdisk7)
  Major #         = 51
  JFS log         = dblvlog
  Logical volumes = dblv1, dblv2
  FS mount points = /db, /dbdata

Resource group httprg contains:
  Volume group    = httpvg (hdisk2, hdisk8)
  Major #         = 50
  JFS log         = httplvlog
  Logical volume  = httplv
  FS mount point  = /http

Figure 1-31. Hints to get started

Notes:
Hint
Create a cluster diagram; a picture is worth ten thousand words (because of inflation, a
thousand is not enough!).
Use the Online Planning Worksheets. They can be used without installing HACMP and
can be used to generate and save HACMP configurations.
Try to reduce SPOFs.
Always include a non-IP network.
Access storage over multiple paths, or mirror across power supplies and buses.
Document a test plan. HACMP also provides automated test scripts (auto test).
Be methodical.
Execute the test plan before placing the cluster into production! A sketch of matching
/etc/hosts entries follows.
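
A minimal sketch of /etc/hosts entries for the example cluster above, using the addresses from the diagram; every cluster address should resolve identically on all nodes:

# public network service and boot (non-service) labels
192.168.9.3    database     # node A service IP label
192.168.9.4    nodeaboot    # node A boot label
192.168.9.5    webserv      # node B service IP label
192.168.9.6    nodebboot    # node B boot label
# standby subnet
192.168.254.3  nodeastand

Keeping these entries identical on every node, rather than relying on DNS alone, removes name resolution as a failure point during cluster events.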


Sources of HACMP information


HACMP manuals come with the product:
- cluster.doc.en_US.es.html
- cluster.doc.en_US.es.pdf
HACMP documentation is also available online:
- http://www.ibm.com/servers/eserver/pseries/library/hacmp_docs.html
Release Notes contain important information about the version release:
- /usr/es/sbin/cluster/release_notes
Sales manual:
- http://www.ibm.com/common/ssi
IBM courses:
- HACMP Admin. I: Planning and Implementation (AU540/AU54)
- HACMP Admin. II: Admin. and Problem Determination (AU610/AU61)
- HACMP Administration III: Virtualization and Disaster Recovery (AU620/AU62)
- HACMP V5 Internals (AU60)
IBM Web site:
- http://www-03.ibm.com/systems/p/ha/
Non-IBM sources (not endorsed by IBM but probably worth a look):
- http://lpar.co.uk
- http://portal.explico.de/
- http://www.matilda.com/hacmp/
- http://groups.yahoo.com/group/hacmp/
Figure 1-32. Sources of HACMP information

Notes:
Manuals on CD
The HACMP 5.4.1 manuals are:
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide

Additional Web sites for storage:
- http://www.storage.ibm.com
- http://www-1.ibm.com/servers/storage/support/software/sdd.html
- ftp://ftp.software.ibm.com/storage/fastt/fastt500/HACMP_config_info.pdf

Checkpoint
1. True or False?
Resource Groups can be moved from node to node.
2. True or False?
HACMP/XD is a complete solution for building
geographically distributed clusters.
3. Which of the following capabilities does HACMP not
provide? (Select all that apply.)
a. Time synchronization
b. Automatic recovery from node and network adapter failure
c. System Administration tasks unique to each node; back-up and
restoration
d. Fallover of just a single resource group

4. True or False?
All nodes in a resource group must have equivalent
performance characteristics.
Figure 1-33. Checkpoint

Notes:


Unit summary
Having completed this unit, you should be able to:
Define high availability and explain why it is needed
Outline the various options for implementing high availability
List the key considerations when designing and implementing
a high availability cluster
Outline the features and benefits of HACMP for AIX
Describe the components of an HACMP for AIX cluster
Explain how HACMP for AIX operates in typical cases

Figure 1-34. Unit summary

Notes:


Unit 2. Networking considerations for high availability
What this unit is about
This unit describes the HACMP functions related to networks. You
learn which networks are supported in an HACMP cluster and what
you have to take into consideration for planning it.

What you should be able to do


After completing this unit, you should be able to:
- Discuss how HACMP uses networks
- Describe the HACMP networking terminology
- Explain and configure IP Address Takeover (IPAT)
- Configure an IP network for HACMP
- Configure a non-IP network
- Explain how client systems are likely to be affected by HACMP
- Minimize the impact of failure recovery on client systems

How you will check your progress
Accountability:
- Checkpoint
- Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)


Unit objectives
After completing this unit, you should be able to:
Discuss how HACMP uses networks
Describe the HACMP networking terminology
Explain and set up IP Address Takeover (IPAT)
Configure an IP network for HACMP
Configure a non-IP network
Explain how client systems are likely to be affected by failure
recovery
Minimize the impact of failure recovery on client systems

Figure 2-1. Unit objectives

Notes:
Unit objectives
This unit discusses networking in the context of HACMP.


2.1 How HACMP uses networks


How HACMP uses networks


After completing this topic, you should be able to:
- Explain how HACMP uses networks to:
  - Provide client access to the cluster
  - Detect failures
  - Diagnose failures
  - Communicate with other nodes in the cluster
- Explain why a non-IP network is an essential part of any HACMP cluster

Figure 2-2. How HACMP uses networks

Notes:
Topic 1 objectives
This topic explores how HACMP uses networks. The HACMP concept of IP Address
Takeover (IPAT), where application addresses are relocated when failures occur, is
examined in more detail in a later section.


How does HACMP use networks?


HACMP uses networks to:
1. Provide clients with highly available access to the cluster's applications
2. Detect and diagnose node, network, and NIC failures
3. Communicate with other HACMP daemons on other nodes in the cluster
(Diagram: two nodes, each with en0 and en1 interfaces; RSCT performs the detection
and diagnosis in item 2, and clcomd carries the inter-daemon communication in item 3.)

Figure 2-3. How does HACMP use networks?

Notes:
Network design for availability
To design a network that supports high availability using HACMP, we must understand
how HACMP uses networks.

Client access to applications


From the user's perspective, the only reason that the cluster is on a network is that the
network provides them with access to the cluster's highly available applications. As we
will see, satisfying this requirement for client access to the cluster involves a bit more
than just plugging in a network cable.

Detection and diagnosis of failures


In contrast, the fact that HACMP uses the networks to detect and diagnose various
failures is likely to be of considerably more interest to the cluster designers and
administrators. Just being able to detect node, network, and NIC failures imposes
several requirements on how the networks are designed. Being able to distinguish
between certain failures (for example the failure of a network and the failure of a node),
imposes yet more requirements on the network design.
Reliable Scalable Cluster Technology (RSCT) provides facilities for monitoring node
membership; network interface and communication interface health; and event
notification, synchronization, and coordination via reliable messaging.

HACMP internode communications


The final way in which HACMP uses networks to communicate with HACMP daemons
running on other nodes in the cluster is rather mundane. Assuming that the
requirements imposed by the first two uses are properly satisfied, this last use does not
impose any additional requirements on the network design.
All communication between nodes is sent through the Cluster Communications
daemon, clcomd, that runs on each node. The clcomd daemon manages the
connection authentication between nodes and any message authentication or
encryption configured.
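
A quick sketch of confirming that the Cluster Communications daemon is active on a node; in HACMP 5.x it is registered with the AIX System Resource Controller under the subsystem name clcomdES:

# lssrc -s clcomdES
Subsystem         Group            PID     Status
 clcomdES         clcomdES         ...     active

The PID varies; what matters is that the status is active on every node.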


Providing HA client access to the cluster


Providing clients with highly available access to the cluster's applications requires:
- Multiple physical NICs per network per node
  (Virtual Ethernet is supported with a single interface)
- (Possibly) multiple networks per node
- Careful network design and implementation all the way out to the client's systems

Figure 2-4. Providing HA client access to the cluster

Notes:
Network interface card and single point of failure
When using physical networking and not EtherChannel (more on these topics in a few
visuals), the goal is to avoid the NIC being a single point of failure. To achieve that, each
cluster node requires at least two NICs per network. The alternative is that the loss of a
single NIC would cause a significant outage while the application (that is, the resource
group) is moved to another node.
For EtherChannel or virtual Ethernet configurations, the norm is to have only a single
interface in the network. In that case, a special file that provides additional addresses
for diagnosis processing is necessary: netmon.cf. A sketch of its format follows.
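
A minimal sketch of the netmon.cf file; the targets shown are placeholders and should be replaced by addresses outside the node (a default gateway, for example) that answer pings reliably:

# /usr/es/sbin/cluster/netmon.cf
# one IP address or resolvable hostname per line
192.168.1.254
gateway2.example.com

On single-adapter networks, these targets are pinged to decide whether an interface that hears no other traffic is actually up.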


Network as SPOF
The network itself is, of course, a single point of failure, because the failure of the
network will disrupt the users' ability to communicate with the cluster. The probability of
this SPOF being an issue can be reduced by careful network design, an approach that
is often considered sufficient.

Eliminating the network as a SPOF


If the network as a SPOF must be eliminated, then the cluster requires at least two
networks. Unfortunately, this only eliminates the network directly connected to the
cluster as a SPOF. It is not unusual for the users to be located some number of hops
away from the cluster. Each of these hops involves routers, switches, and cabling, each
of which typically represents yet another SPOF. Truly eliminating the network as a
SPOF can become a massive undertaking. Most organizations that are concerned
about the network as a SPOF usually compromise by designing the network to ensure
that no single failure deprives all key users of their access to the cluster.

Importance of careful network design


In the end, there is simply no replacement for careful network design and
implementation all the way out to the users. Failure to perform this design and
implementation activity properly could easily become a crippling issue when the cluster
is put into production.


What HACMP detects and diagnoses


Remember, HACMP only handles the following failures directly:
- NIC failure
- Node failure
- Network failure
(Diagram: nodes usa and uk, each with en0 and en1 on an IP network, plus a non-IP
network between them.)

Figure 2-5. What HACMP detects and diagnoses

Notes:
Failures that HACMP handles directly
HACMP uses RSCT to detect failures. Actually, the only thing that RSCT can detect is
the loss of heartbeat packets. RSCT sends heartbeats over IP and non-IP networks. By
gathering heartbeat information from multiple NICs and non-IP devices on multiple
nodes, HACMP makes a determination of what type of failure this is and takes
appropriate action. Using the information from RSCT, HACMP handles only three
different types of failures:
- NIC failures
- Node failures
- Network failures


Other failures
HACMP uses AIX features to respond to other failures (for example, the loss of a
volume group can trigger a fallover), but HACMP is not directly involved in detecting
these other types of failures.


Heartbeat packets
- HACMP sends heartbeat packets across networks.
- Heartbeat packets are sent and received by every NIC.
- This is sufficient to detect all NIC, node, and network failures.
- Heartbeat packets are not acknowledged.
(Diagram: nodes usa and uk exchanging heartbeats between their en0 interfaces and
between their en1 interfaces; the shared application data disk sits between the nodes.)

Figure 2-6. Heartbeat packets

Notes:
Heartbeat packets
HACMP's primary monitoring mechanism is the heartbeat packet. The cluster sends
heartbeat packets from every NIC, to every NIC, and to and from non-IP devices.

Heartbeating pattern
In a typical two-node cluster with two NICs on the network, the heartbeat packets are
sent in the pair-wise fashion shown above. The pattern gets more complicated when the
cluster gets larger as HACMP uses a pattern that is intended to satisfy three
requirements:
- That each NIC be used to send heartbeat packets (to verify that the NIC is capable
of sending packets)

- That heartbeat packets be sent to each NIC (to verify that the NIC is capable of
receiving heartbeat packets)
- That no more heartbeat packets are sent than are necessary to achieve the first two
requirements (to minimize the load on the network)
The details of how HACMP satisfies the third requirement are discussed in a later unit.

Detecting failures
Heartbeat packets are not acknowledged. Instead, each node knows what the
heartbeat pattern is and simply expects to receive appropriate heartbeat packets on
appropriate network interfaces. Noticing that the expected heartbeat packets have
stopped arriving is sufficient to detect failures.
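
To watch this machinery at work, a sketch of the standard RSCT Topology Services query (the output layout varies by RSCT level, and cluster services must be running):

# lssrc -ls topsvcs

The long-status output lists each heartbeat ring with its members and missed-heartbeat counters, which is often the quickest way to confirm that every NIC and non-IP device is being monitored.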


Failure detection versus failure diagnosis


- Failure detection is realizing that something is wrong.
  For example, realizing that packets have stopped flowing between usa's en1 and
uk's en1.
- Failure diagnosis is figuring out what is wrong.
  For example, figuring out that usa's en1 NIC has failed.
- HACMP uses RSCT to do both detection and diagnosis.
(Diagram: nodes usa and uk, each with en0 and en1, sharing the application data disk.)

Figure 2-7. Failure detection versus failure diagnosis

Notes:
Diagnosis
The heartbeat patterns just discussed are sufficient to detect a failure in the sense of
realizing that something is wrong. They are not sufficient to diagnose a failure in the
sense of figuring out exactly what is broken.
For example, if the en1 interface on the usa node fails, as in the visual above, usa stops
receiving heartbeat packets via its en1 interface, and uk stops receiving heartbeat
packets via its en1 interface. Both usa and uk realize that something has failed, but
neither of them has enough information to determine what has failed.


Failure diagnosis
When a failure is detected, HACMP (RSCT topology services) uses specially crafted
packet transmission patterns to determine (that is, diagnose) the actual failure by ruling
out other alternatives.
Example:
1. RSCT on usa notices that heartbeat packets are no longer arriving via en1 and
notifies uk (which has also noticed that heartbeat packets are no longer arriving via
its en1).
2. RSCT on both nodes sends diagnostic packets between various combinations of
NICs (including out via one NIC and back in via another NIC on the same node).
3. The nodes soon realize that all packets involving usa's en1 are vanishing but
packets involving uk's en1 are being received.
4. Diagnosis: usa's en1 has failed.

Figure 2-8. Failure diagnosis

Notes:
Diagnostic heartbeat patterns
When one or more cluster nodes detect a failure, they share information and plan a
diagnostic packet pattern or series of patterns, which will diagnose the failure.
These diagnostic packet patterns can be considerably more network-intensive than the
normal heartbeat traffic; although, they usually only take a few seconds to complete the
diagnosis of the problem.


What if all heartbeat packets stop?


- A node might notice that heartbeat packets are no longer arriving on any NIC.
- In the following configuration, it is impossible for either node to distinguish between
failure of the network and failure of the other node.
- Each node concludes that the other node is down!
(Diagram: nodes usa and uk, each with en0 and en1 on a single IP network, sharing the
application data disk. The result is a partitioned cluster and likely data divergence.)

Figure 2-9. What if all heartbeat packets stop?

Notes:
Total loss of heartbeat traffic
If a node in a two-node cluster realizes that it is no longer receiving any heartbeat
packets from the other node, then it starts to suspect that the other node has gone
down. When it determines that it is totally unable to communicate with the other node, it
concludes that the other node has failed.

Both nodes try to take control


In the above configuration, if the network fails, then each node soon concludes that the
other node has failed. Each node then proceeds to take over any resource groups
configured to be able to run on both nodes, but currently resident on the other node.


Partitioned cluster
Because each node is, in fact, still very alive, the result is that the applications are now
running simultaneously on both nodes. If the shared disks are also online to both nodes,
then the result could be a massive data corruption problem. This situation is called a
partitioned cluster. It is, clearly, a situation that must be avoided.
Note that essentially equivalent situations can occur in larger clusters. For example, a
five-node cluster might become split into a group of two nodes and a group of three
nodes. Each group concludes that the other group has failed entirely and takes what it
believes to be appropriate action. The result is almost certainly very unpleasant.

2-16 HACMP Implementation

Copyright IBM Corp. 1998, 2008


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.

V4.0
Student Notebook

Uempty

CRITICAL: All clusters require a non-IP network


- There must be more than one network to distinguish between:
  - Failure of the other node
  - Failure of a network
- There must be a non-IP network to distinguish between:
  - Failure of the other node's IP subsystem
  - Total failure of the other node
- Therefore, ALL CLUSTERS SHOULD HAVE A NON-IP NETWORK!
(Diagram: nodes usa and uk, each with en0 and en1 on the IP network, joined by a
non-IP network and sharing the application data disk.)
Figure 2-10. CRITICAL: All clusters require a non-IP network

Notes:
Required?
To be completely accurate, you do not have to configure a non-IP network. But for the
reasons outlined below, you will want to implement at least one non-IP network, and
possibly more. So it is not technically accurate that a non-IP network is required, but it is
definitely practically accurate that one is required. That is why the title says 'require'
while the content of the visual says 'should'.

Why we need more than one network


Distinguishing between the failure of the network and the failure of the other node
requires that there be a path between the two nodes that does not involve the network
in question. Consequently, if a partitioned cluster is to be avoided, then every cluster
must be configured with at least two ways for nodes to communicate with each
other.


Why we need a non-IP network


Although rather unlikely, it is also possible for the entire IP subsystem to fail on a node
without the node crashing. To distinguish between the failure of the IP subsystem on a
node and the failure of the node itself, every cluster must be configured with a way
to communicate between nodes, that does not require IP to be operational.

Both IP and non-IP networks are needed


These pathways that do not require IP are called non-IP networks. Every cluster must
be configured with enough non-IP networks to ensure that any node can communicate
with every other node (possibly by asking an intermediate node to pass along
messages) without requiring any nodes IP subsystem to be operational.

Both IP and non-IP networks are used


Many untrained people seem to assume that the non-IP network is only for heartbeating,
with the implication being either that the IP networks are not used for heartbeating or
that the non-IP networks carry nothing but heartbeats. Neither implication is true. HACMP
sends heartbeat packets across all configured networks, IP and non-IP, and HACMP
uses any available network to communicate with other cluster nodes.

Terminology: serial networks versus non-IP networks


Older HACMP documentation generally refers to these non-IP networks as serial
networks. This is because the predominant non-IP network was the RS-232 type. With
the advent and ease of configuration of heartbeat on disk, the term serial network must
only be used to refer to the RS-232 type. Therefore, use the term non-IP network to
refer to the concept of a network between nodes that does not involve IP (today that
generally means heartbeat on disk or RS-232), and use the specific network type
(RS-232 or heartbeat on disk), when referring to the type of network to be implemented.
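
A minimal sketch of preparing a heartbeat-on-disk network; the volume group and disk names are assumptions, and the same shared disk must be seen by both nodes (enhanced concurrent mode also requires the bos.clvm.enh fileset):

# mkvg -C -y hbvg hdisk2      (-C makes the volume group
#                              enhanced concurrent capable)

Each node's end of the diskhb network is then defined to HACMP as a communication device, for example /dev/hdisk2. No filesystem is needed, because the heartbeats use a non-data area of the disk.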


The two subnet rule


- HACMP must ensure that heartbeats are sent out via all NICs and must know which
NIC is used.
- If a node has multiple NICs on the same logical subnet, then AIX can rotate which
NIC is used to send packets to the network.
- Therefore, each NIC on each physical IP network on any given node must have an
IP address on a different logical subnet.
(Diagram: node usa with en0 192.168.1.1 and en1 192.168.2.1; node uk with en0
192.168.1.2 and en1 192.168.2.2; a non-IP network joins the nodes.)
Note: This does not apply to single-adapter networks, such as EtherChannel or virtual
Ethernet.

Figure 2-11. The two subnet rule

Notes:
Requirements for HACMP to monitor every NIC
If a node has two NICs on the same logical IP subnet and a network packet is sent to an
IP address on the same logical subnet, then the AIX kernel is allowed to use either NIC
on the sending node to send the packet.
Because this is incompatible with HACMP's requirement that it be able to dictate which
NIC is used to send heartbeat packets, HACMP requires that each NIC on each node
be on a different logical IP subnet. A quick check of a node's addressing appears below.
We will give some examples of valid and invalid configurations later in this unit, after we
have covered the other subnetting rules.
Note: There is an exception to the requirement that each NIC be on a different logical IP
subnet. We will discuss that shortly.
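
A sketch of checking the rule on one node; the interface names and addresses are assumptions matching the visual:

# netstat -in | grep en
en0 ... 192.168.1.1 ...     (subnet 192.168.1.0/24)
en1 ... 192.168.2.1 ...     (subnet 192.168.2.0/24)

With a 255.255.255.0 mask, the two base addresses fall in different logical subnets, so RSCT can force heartbeats out of each NIC individually.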


Failure recovery and reintegration


- HACMP continues to monitor failed components to detect their recovery.
- Recovered components are reintegrated back into the cluster.
- Reintegration might trigger significant actions.
  For example, recovery of the primary node will optionally trigger fallback of the
resource group to the primary node.
(Diagram: before and after views of nodes usa and uk, each with en0 and en1, showing
a failed component recovering and rejoining the cluster.)

Figure 2-12. Failure recovery and reintegration

Notes:
NIC and network recovery
NICs and networks are automatically reintegrated into the cluster when they recover.

Node recovery
In contrast, a node is not considered to have recovered until the Cluster Services has
been started on the node. This allows the node to be rebooted and otherwise exercised
as part of the repair process without HACMP declaring failures or performing
reintegration or both, while the repair action is occurring.
The reintegration of a component might trigger quite significant actions. For example, if
a node is reintegrated, which has a high priority within a resource group, then,
depending on how the resource group is configured, the resource group might fall back.


Let's review: Topic 1
1. How does HACMP use networks? (Select all that apply.)
   a. Provide client systems with highly available access to the cluster's applications
   b. Detect failures
   c. Diagnose failures
   d. Communicate between cluster nodes
   e. Monitor network performance
2. Using information from RSCT, HACMP directly handles only three types of failures:
   ______________, ______________, and ______________.
3. True or False?
   Heartbeat packets must be acknowledged or a failure is assumed to have occurred.
4. True or False?
   Clusters should include a non-IP network.
5. True or False?
   Each NIC on each physical IP network on each node is required to have an IP
   address on a different logical subnet.

Figure 2-13. Let's review topic 1

Notes:


2.2 HACMP concepts and configuration rules


HACMP concepts and configuration rules


After completing this topic, you should be able to:
- List networks that HACMP supports
- Describe the different HACMP network types
- Describe the purpose of public and private HACMP networks
- Describe the topology components and their naming rules
- Define key networking-related HACMP terms
- Describe the basic HACMP network configuration rules
- Describe what a persistent node IP label is and its typical uses

Figure 2-14. HACMP concepts and configuration rules

Notes:
Topic 2 objectives
This section will explore HACMP networking concepts, terms and configuration rules in
more detail.


HACMP networking support


Supported IP networking technologies:
- Ethernet (all speeds; includes EtherChannel; not the IEEE 802.3 frame type, which
uses et0, et1, ...)
- FDDI
- Token-Ring
- ATM and ATM LAN Emulation
- SP Switch 1 and SP Switch 2
Supported non-IP network technologies:
- Heartbeat over disks (diskhb); requires an enhanced concurrent volume group
- Multinode disk heartbeat (mndhb); Online on All Nodes only
- RS232/RS422 (rs232)
- Target Mode SSA (tmssa)
- Target Mode SCSI (tmscsi)
Figure 2-15. HACMP networking support

Notes:
Supported IP networks
HACMP supports all of the popular IP networking technologies (and a few that are
possibly not quite as popular). Note that the IEEE 802.3 Ethernet frame type is not
supported.

Supported non-IP networks


HACMP supports multiple non-IP networking technologies. Heartbeat on disk and
RS-232 are the most prevalent. The advantage of heartbeat on disk is that there is no
need for additional hardware, assuming that you have shared storage and are willing to
create an enhanced concurrent mode volume group. No data area is used for the
heartbeat on disk processing.
Multinode disk heartbeat is new with HACMP 5.4.1. It provides a method by which
multiple nodes access multiple shared logical volumes to ensure that the loss of access

from a node or nodes to the rest of the cluster nodes via all routes, IP and non-IP will be
treated as a loss of quorum. This in turn will cause the node or nodes to stop accessing
the data. This is to prevent (or minimize) data corruption in the event of a domain merge
(split brain). This is implemented only with Resource Groups that have a startup policy
of Online on All Nodes (OOAN, also known as concurrent resource groups).


Network types
HACMP categorizes all networks:
- IP networks:
  - Network type: ether, token, fddi, atm, hps (SP Switch or High Performance Switch)
  - Network attribute: public or private
- Non-IP networks:
  - Network type: rs232, tmssa, tmscsi, diskhb, mndhb


Figure 2-16. Network types

Notes:
IP networks
As mentioned before, IP networks are used by HACMP for:
- HACMP heartbeat (failure detection and diagnosis)
- Communications between HACMP daemons on different nodes
- Client network traffic

IP network attribute
The default for this attribute is public. Oracle uses the private network attribute
setting to select networks for Oracle inter-node communications. This attribute is not
used by HACMP itself. See the HACMP for AIX: Planning Guide for more information.


HACMP and virtual Ethernet


HACMP 5.3 and later supports virtual Ethernet in POWER5-based systems; however,
there are some considerations. We summarize some of them as follows; for complete
details on using virtual I/O with HACMP, see:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390
- IP Address Takeover (IPAT) via Aliasing must be used. IPAT via Replacement and
Hardware Address Takeover (HWAT) are not supported. In general, IPAT via
Aliasing is recommended for all HACMP networks that can support it.
Note: We will discuss IPAT and HWAT in detail in the next topic in this unit.
- HACMPs PCI Hot Plug facility cannot be used. PCI Hot Plug operations are
available through the VIO Server. Note that when an HACMP node is using Virtual
I/O, HACMPs PCI Hot Plug facility is not meaningful because the I/O adapters are
virtual rather than physical.
- All Virtual Ethernet interfaces defined to HACMP should be treated as
single-adapter networks as described in the Planning Guide. In particular, the
netmon.cf file must be used to monitor and detect failure of the network interfaces.
netmon.cf should include a list of clients to ping. Because of the nature of Virtual
Ethernet, other mechanisms to detect the failure of network interfaces are not
effective.
- If the VIO Server has multiple physical interfaces on the same network, or if two or
more HACMP nodes are using VIO Servers in the same frame, HACMP will not be
informed of (and hence will not react to) single physical interface failures. This does
not limit the availability of the entire cluster because VIOS itself routes traffic around
the failure. The VIOS support is analogous to EtherChannel in this regard. Other
methods (not based the VIO Server) must be used for providing notification of
individual adapter failures.
If the VIO Server has only a single physical interface on a network, then a failure of
that physical interface will be detected by HACMP. However, that failure will isolate
the node from the network.

Non-IP networks
HACMP uses non-IP networks for:
- Alternative non-IP path for HACMP heartbeat and messaging
- Differentiates between node/network failure
- Eliminates IP as a single point of failure


HACMP topology components


HACMP uses some unique terminology to describe the type and
function of topology (as in, network) components under its control.
(Diagram: nodes usa and uk joined by a TCP/IP network named Internalnet. Each
node's network interface cards are defined to HACMP as communication interfaces,
each pairing an IP label with an IP address, for example vancouver-service,
192.168.5.2. Non-IP networks of type rs232, diskhb, and mndhb join the nodes through
communication devices such as serial ports.)
Figure 2-17. HACMP topology components

Notes:
Terminology
HACMP has quite a few special terms that are used repeatedly throughout the
documentation and the HACMP smit screens. Over the next few visuals we will discuss
some of the network related terminology in detail.
- node
An IBM System p server operating within an HACMP cluster
- node name
The name of a node from HACMP's perspective
- IP label
For TCP/IP networks, the name specified in the /etc/hosts file or by the Domain
Name Service for a specific IP address


In many configurations, HACMP nodes will have multiple NICs, and thus multiple IP
labels, but only one hostname. We will look at the relationship between hostname,
node name, and IP labels in the next visual.
In HACMP, IP labels are either service IP labels or non-service IP labels. We will
discuss this distinction in the next few visuals.
- IP network
A network that uses the TCP/IP family of protocols
- non-IP network or serial network
A point-to-point network, which does not rely on the TCP/IP family of protocols
- communication interface
A network connection onto an IP network (slightly better definition coming shortly)
- communication device
A port or device connecting a node to a non-IP network (slightly better definition
coming shortly)


Naming nodes
A node can have several names, including the AIX hostname, the
HACMP node name, and one of the IP labels. These concepts
should not be confused.
AIX hostname
# hostname
gastown
# uname -n
gastown

HACMP node name
# /usr/es/sbin/cluster/utilities/get_local_nodename
vancouver

IP labels
# netstat -i
Name Mtu   Network   Address          Ipkts Ierrs  Opkts Oerrs Coll
lo0  16896 link#1                      5338     0   5345     0    0
lo0  16896 127       localhost         5338     0   5345     0    0
lo0  16896 ::1                         5338     0   5345     0    0
tr0  1500  link#2    0.4.ac.49.35.58  76884     0  61951     0    0
tr0  1500  192.168.1 vancouverboot1   76884     0  61951     0    0
tr1  1492  link#3    0.4.ac.48.22.f4    476     0    451    13    0
tr1  1492  192.168.2 vancouverboot2     476     0    451    13    0
tr2  1492  link#4    0.4.ac.4d.37.4e   5667     0   4500     0    0
tr2  1492  195.16.20 db-app-svc        5667     0   4500     0    0
Figure 2-18. Naming nodes

Notes:
Hostname
Each node within an HACMP cluster has a hostname associated with it that was
assigned when the machine was first installed onto the network. For example, a
hypothetical machine might have been given the name gastown.

HACMP node name


Each node within an HACMP cluster also has a node name. The node name for a
machine is almost always the same as the hostname, because the alternative would
result in unnecessary confusion. Remember that node names are not required to be the
same as hostnames. For example, our hypothetical machine with a hostname of
gastown might have a node name of vancouver.
Note: The Canadian city of Vancouver was once called Gastown.


IP labels
Each IP address used by an HACMP cluster almost certainly has an IP label associated
with it. In non-HACMP systems, it is not unusual for the system's only IP label to be the
same as the system's hostname. This is rarely a good naming convention within an
HACMP cluster, because there are just so many IP labels to deal with, and having to
pick which one gets a name that is the same as a node's hostname is a pointless
exercise.

IP label naming conventions: non-service IP labels


Preferably, assign IP labels to IP addresses that describe, in some sense, the purpose
of the IP address.
For IP addresses that are not associated with an application (non-service), it is usually
useful to include which node the IP address is associated with. In the example in the
visual, two NICs have a vancouver prefix on their IP labels because these particular
IP labels will never be associated with any other node.

IP label naming conventions: service IP labels


Service IP labels/addresses, which are used in IPAT, can move from node to node.
Service IP labels should not contain the name of any node since they are not always
associated with any particular node. Experience shows that including a node name or a
hostname as any part of an IPAT service IP label is almost always the source of
significant confusion (significant in the sense that it leads to a cluster outage or other
painful experience).
In the example in the visual, there is one service IP label: db-app-svc.


HACMP network component terms (1 of 2)


Communication interface: refers to IP-based networks and NICs. An HACMP
communication interface is a combination of:
- A network interface (for example, en0)
- An IP label / address (for example, db-app-svc, 195.16.20.10)
Communication device: refers to one end of a point-to-point non-IP network
connection, such as /dev/tty1, /dev/hdisk1, or /dev/tmssa1.
Communication adapter: an X.25 adapter used to support a Highly Available
Communication Link.

Figure 2-19. HACMP network component terms (1 of 2)

Notes:
HACMP network terminology
When using HACMP SMIT, it is important to understand the difference between
communication interfaces, devices and adapters:
- Communication interfaces:
Interfaces for IP-based networks
Note: The term communication interface in HACMP refers to more than just the
physical NIC. From HACMPs point of view, a communication interface is an object
defined to HACMP, which includes:
The logical interface (the name for the physical NIC), such as en0
The IP label / address
- Communication devices:
Devices for non-IP networks
- Communication adapters:
X.25 adapters

HACMP network component terms (2 of 2)


- Service IP label / address: An address configured by HACMP to support client
traffic. It is kept highly available by HACMP.
  - Not stored in the AIX ODM
  - Configured by the Cluster Manager on an interface, either by replacement or by alias
- Non-service IP label / address: An IP label / address defined to HACMP for
communication interfaces and not used by HACMP for client traffic. Two types:
  - Referred to as a boot or base address (stored in the AIX ODM)
  - Persistent (see the following text)
- Service interface: A communication interface configured with a service IP label /
address (either by alias or replacement).
- Non-service interface: A communication interface not configured with a service IP
label / address. Used as a potential location for a service IP label / address.
- Persistent IP label / address: An IP label / address, defined as an alias to an
interface, which stays on a single node and is kept available on that node by HACMP.
Figure 2-20. HACMP network component terms (2 of 2)

Notes:
More HACMP terminology
Another set of important terms are service, non-service, and persistent:

Service IP label / address


An IP label or address intended to be used by client systems to access services running
within the cluster. Used with IP Address Takeover (IPAT).

Non-service IP label / address


An IP address that is configured onto a NIC using AIX's TCP/IP SMIT screens and stored
in the AIX ODM. In other words, it is the IP address that a NIC has immediately after
AIX finishes booting. HACMP might replace a non-service IP address with a service IP
address depending on factors that are explained shortly.


Note: In earlier versions of HACMP, the terms 'boot IP label' and 'boot IP address' were
used to refer to what is now being called the non-service IP label / address. The older
terms still appear in a few places in the HACMP 5.x documentation.

Applications should use the service IP label / address


Non-service IP labels and non-service IP addresses should not be used by client
systems to contact the clusters applications. This is particularly important if IPAT is
configured, because a client system that gets into the habit of connecting to its
application using a non-service IP label / address cannot find its application after a
fallover to a different node.

Service interface
A communications interface configured with a service IP label / address (either by alias
or by replacement).

Non-service interface
A communications interface not configured with a service IP label / address. Used as a
backup for a service IP label / address.

Persistent IP label / address
An IP address monitored by HACMP that stays on the node on which it is configured. It
is implemented as an alias, and HACMP will attempt to keep this IP label / address
highly available on the same node. Persistent IP labels / addresses are discussed later
in this unit.


IP network configuration rules

General
- Each node must have at least one direct connection with every other node.
- Do not place network equipment that filters packets between nodes.

Non-service IP address rules
- Heartbeating over IP interfaces (the default):
  - Each IP address on a node must be on a different logical subnet.
    (With multiple NICs on the same subnet, HACMP cannot reliably monitor each NIC.)
  - Each logical subnet must use the same subnet mask.
  - There must be at least one subnet in common with all nodes.
- Heartbeating over IP aliases:
  - No subnet restrictions on service and non-service IP addresses.
  - You specify a base address for the heartbeat paths.
    (This is called an offset in the publications.)
  - HACMP configures a set of IP addresses and subnets for heartbeating.
    Heartbeating addresses are applied to NICs as aliases, allowing all NICs
    to be monitored.
Figure 2-21. IP network configuration rules
Notes:
Network configuration rules for heartbeating
The visual shows some of the rules for configuring HACMP IP-based networks. These
are not quite the complete set of rules as we have not had a close enough look at IPAT
yet, and there are a few other issues still to be discussed. In particular, we will discuss
the rules for the service IP addresses later in the unit, when we discuss IPAT.

General rules
The primary purpose of these rules is to ensure that cluster heartbeating can reliably
monitor NICs, networks and nodes.
The two basic approaches are:
- Heartbeating over IP interfaces (the default)
- Heartbeating over IP aliases


In either case:
- HACMP requires that each node in the cluster have at least one direct, non-routed
network connection with every other node.
- Between cluster nodes, do not place intelligent switches, routers, or other network
equipment that does not transparently pass UDP broadcasts and other
packets through to all cluster nodes. Bridges, hubs, and other passive devices that do
not modify the packet flow may be safely placed between cluster nodes.

Heartbeating over IP interfaces


In this case, the configured service addresses and non-service addresses are used for
heartbeating. Because of this, there are requirements on how the addresses are
configured to ensure that heartbeating can occur reliably:
- Each interface on a node must be on a different logical subnet.
If there are multiple interfaces on the same logical subnet, AIX can use any one of
them for outgoing messages. In this case, HACMP cannot select which interface will
be used for heartbeating; so it cannot reliably monitor all interfaces.
- Each logical subnet should use the same subnet mask.
- There must be at least one subnet in common with all nodes.

IP configuration rules too restrictive?


If it is difficult to conform to the IP address configuration rules for heartbeating over IP
interfaces, there are two choices:
- Heartbeating over IP aliases
- Using netmon.

Heartbeating over IP aliases


With this heartbeating method, the service and non-service addresses are not used for
heartbeating. Instead, you specify an IP address offset to be used for heartbeating.
HACMP then configures a set of IP addresses and subnets for heartbeating, which are
totally separate from those used as service and non-service addresses. The
heartbeating addresses are added to the NICs using IP aliases.
Because HACMP automatically generates the proper addresses required for
heartbeating, all other addresses are free of any constraints. Of course, you must
reserve a unique address and subnet range that is used specifically for heartbeating.
These addresses are not to be routed in your network. They are used solely for cluster
node-to-cluster node communications.
For more details, see the HACMP Planning Guide.
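To make the idea concrete, here is a purely illustrative sketch (the actual addresses
HACMP generates depend on the offset you choose and on your topology; treat every
value below as hypothetical). Suppose you reserve the base address 172.16.100.1 with
a 255.255.255.0 netmask for heartbeating in a two-node cluster with two NICs per
node. HACMP might then place aliases similar to these:

   node1 en0: 172.16.100.1    node2 en0: 172.16.100.2    (heartbeat subnet 1)
   node1 en1: 172.16.101.1    node2 en1: 172.16.101.2    (heartbeat subnet 2)

The 172.16.100/24 and 172.16.101/24 ranges would be dedicated to cluster
heartbeating and never routed.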


Subnet considerations for heartbeating over IP alias


Heartbeating over IP Aliases provides the greatest flexibility for configuring non-service
and service IP addresses.
HACMP installations typically require many subnets. If you only have a limited number
of subnets available, you may consider using heartbeating over IP alias and putting
multiple service IP addresses on the same subnet or putting a service address on the
same subnet as non-service addresses. While this is perfectly acceptable in terms of
HACMP heartbeating, it needs to be well thought out because of the way AIX handles
multiple routes to the same destination.
AIX supports multiple routes to the same destination and, by default, will round robin
between the available routes. This could create a problem for your application.
Consider the following scenario:
The non-service addresses on en1 and en2 on node1 are in the same subnet as an
applications service address. The service address starts on en1. en1 fails and
HACMP moves the service address to en2. AIX does not know that en1 has failed;
therefore, it will continue to round robin packets between en1 and en2 (because
they have the same subnet destination). Packets sent to en1 will be lost because of
the failure.
AIX's active Dead Gateway Detection provides a way for AIX to detect routes that are
down and adjust the routing table; however, this does involve some additional network
traffic. For more information about AIX's support for multipath routing and active Dead
Gateway Detection, see the man page for the no command and the AIX Version 5.3
System Management Guide: Communications and Networks.
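As an example of what this involves, passive Dead Gateway Detection can be turned on
with the no command (a sketch; confirm that the tunable applies at your AIX level
before relying on it):

   # Enable passive Dead Gateway Detection; -p makes the change persistent
   no -p -o passive_dgd=1
   # Display the current setting
   no -o passive_dgd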

netmon
netmon, the network monitor portion of RSCT Topology Services, enables you to create
a configuration file that specifies additional network addresses to which ICMP ECHO
requests can be sent as an additional way to monitor interfaces. netmon is outside the
scope of this class. See the HACMP for AIX: Planning Guide for information on using
netmon.
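To sketch the idea (the conventional file location is shown here; check the Planning
Guide for the exact path and format for your release), netmon.cf is simply a list of
extra targets, one per line, that can be sent ICMP ECHO requests when deciding
whether an interface is alive:

   # /usr/es/sbin/cluster/netmon.cf  (addresses and label are hypothetical)
   192.168.5.254
   9.47.87.1
   gateway-router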

Unmonitorable NICs
One final point: If no other mechanism has been configured into the cluster, HACMP
attempts to monitor an otherwise unmonitorable NIC by checking to see if packets are
arriving and being sent via the interface. This approach is not sufficiently robust to be
relied upon; use heartbeating over IP aliases or netmon to get the job done right.


Non-service IP address examples

Assume a subnet mask of 255.255.255.0.

IP addresses on node1   IP addresses on node2   Valid boot addresses?
192.168.5.1             192.168.5.2             Yes
192.168.6.1             192.168.6.2

192.168.5.1             192.168.5.2             Maybe, but the NICs are a single point of
                                                failure. Requires a netmon.cf file or
                                                heartbeat over IP aliases.

192.168.5.1             192.168.6.2             No. A direct non-routed network connection
                                                does not exist between the two nodes.

192.168.5.1             192.168.5.2             Maybe, but both node2 interfaces are on the
192.168.6.1             192.168.5.3             same subnet; they cannot be monitored.
                                                Same comment as above.

192.168.5.1             192.168.5.2             Yes, but the third and fourth interfaces on
192.168.6.1             192.168.6.2             node1 do not have a common subnet with
192.168.7.1                                     another node; they cannot be monitored.
192.168.8.1                                     Same comment as above.
Figure 2-22. Non-service IP address examples
Notes:
Examples
The visual shows some non-service IP address examples. We'll see the service IP
address examples later, when we discuss IPAT.


Non-IP network configuration rules: Point-to-point


Non-IP networks are strongly recommended to provide an
alternate communication path between cluster nodes in the event
of an IP network failure or IP subsystem failure.
With more than two nodes, you can configure the non-IP network
topology using one of the following layouts:
Mesh: Each node is connected to all other nodes.
This is the most robust, but requires the most hardware.
Ring (or Loop): Each node is connected to its two adjacent neighbors. Each node has
two non-IP connections for heartbeating.
Star: One node is connected to all other nodes.
This is the least robust; the center node becomes a single point of failure for all the
associated networks.
[Diagram: four nodes in a ring of point-to-point RS232 networks; node1 -net_rs232_01-
node2 -net_rs232_02- node3 -net_rs232_03- node4 -net_rs232_04- node1]

Figure 2-23. Non-IP network configuration rules: Point-to-point
Notes:
Non-IP networks
Non-IP networks are point-to-point; that is, each connection between two nodes is
considered a network, and a separate non-IP network is created in HACMP for each
connection.
For example, the visual shows four RS232 networks, in a ring configuration, connecting
four nodes to provide full cluster non-IP connectivity.

Types of non-IP networks


You can configure heartbeat paths over the following types of networks:
- Serial (RS232)
- Disk heartbeat (over an enhanced concurrent mode disk)
- Multi-node Disk Heartbeat (for use with resource groups with Startup Policy of
Online on All Available Nodes)
- Target Mode SSA
- Target Mode SCSI

Rules
The rules for non-IP networks are considerably simpler than the rules for IP networks
although they are just as important.
The basic rule is that you must configure enough non-IP networks to provide a non-IP
communication path, possibly via intermediate nodes, between every pair of nodes in
the cluster. In other words, every node must have a non-IP network connection to at
least one other node. Additional communication paths, such as the ring or mesh
topologies discussed in the visual, provide more robustness.
In addition, there are some considerations based on the type of non-IP network you are
using.

Planning disk heartbeat networks
Any shared disk in an enhanced concurrent mode volume group can support a
point-to-point heartbeat connection. Each disk can support one connection between two
nodes. The connection uses the shared disk hardware as the communication path.
A disk heartbeat network in a cluster contains:
- Two nodes
  A node can be a member of any number of disk heartbeat networks. A cluster
  can include up to 256 communications devices.
- An enhanced concurrent mode disk that participates in only one heartbeat network
Keep in mind the following points when selecting a disk to use for disk heartbeating:
- A disk used for disk heartbeating must be a member of an enhanced concurrent
mode volume group. However, the volume groups associated with the disks used for
disk heartbeating do not have to be defined as resources within an HACMP
resource group. In other words, an enhanced concurrent volume group associated
with the disk that enables heartbeating does not have to belong to any resource
group in HACMP.
You can convert an existing volume group to enhanced concurrent mode. For
information about converting a volume group, see Chapter 11: Managing Shared
LVM Components in a Concurrent Access Environment in the Administration Guide.
- The disk should have fewer than 60 seeks per second at peak load. (Disk
heartbeats rely on being written and read within certain intervals.)
Use the AIX filemon command to determine the seek activity, as well as the I/O
load for a physical disk.
Typically, most disk drives that do not have write caches can perform about 100
seeks per second. Disk heartbeating uses 24 seeks.
Disks that are RAID arrays, or subsets of RAID arrays, might have lower limits.
Check with the disk or disk subsystem manufacturer to determine the number of
seeks per second that a disk or disk subsystem can support. However, if you choose
to use a disk that has significant I/O load, increase the value for the timeout
parameter for the disk heartbeat network.
- When SDD is installed and the enhanced concurrent volume group is associated
with an active vpath device, ensure that the disk heartbeating communication device
is defined to use the /dev/vpath device (rather than the associated /dev/hdisk
device).
- If a shared volume group is mirrored, at least one disk in each mirror should be used
for disk heartbeating.
This is particularly important if you plan to set the forced varyon option for a resource
group.
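As the text above notes, the AIX filemon command can be used to measure the seek
activity on a candidate disk. A minimal sketch of its use (the output file name is
arbitrary, and report sections vary somewhat by AIX level):

   # Trace physical volume activity for about a minute, then stop the trace
   filemon -o /tmp/fmon.out -O pv
   sleep 60; trcstop
   # Review the per-physical-volume statistics, including seeks, in the report
   more /tmp/fmon.out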

Planning serial point-to-point networks


When planning a serial (RS232) network, remember the following points:
- If there are no native serial ports available, and your planned HACMP configuration
for that node uses an RS232 network, the configuration requires an RS232 serial
adapter.
- All RS232 networks defined to HACMP are brought up by RSCT with a default of
38400 bps. The tty ports should be defined to AIX as running at 38400 bps. RSCT
supports baud rates of 38400, 19200, 9600.
Any serial port that meets the following requirements can be used for heartbeats:
- The hardware supports use of that serial port for modem attachment.
- The serial port is free for HACMP exclusive use.
Certain System p systems do not support the use of native serial ports. Check the
hardware documentation for the system being used as well as the HACMP Release
Notes for specifics on which systems allow use of the native serial ports.
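For example, a tty port can be set to the expected speed with chdev (a sketch; tty0
is a hypothetical device name):

   # Set the port to 38400 bps and verify the attribute
   chdev -l tty0 -a speed=38400
   lsattr -El tty0 -a speed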

Planning target mode networks


Target mode SCSI and target mode SSA are also supported for point-to-point heartbeat
communications. Each of these types of networks includes two nodes, a shared disk,
and SCSI or SSA communications (as appropriate to the disk type).

Non-IP network configuration rules: Multi-node

- Multi-node disk heartbeat (MNDHB) interconnects multiple nodes, so actual
  disk space is required.
  - Traditional disk heartbeat uses the non-data area of the disk as the communications
    medium to create a point-to-point connection between two nodes.
  - For resource groups with a Startup Policy of Online on All Available Nodes only
- A single logical volume per volume group must be allocated for MNDHB.
  - Consider three per volume group.
- The logical volume for MNDHB must:
  - Be at least 32 MB
  - Not use LVM mirroring, MWC, or "write verify" (we do not want LVM to interfere with DHB)
  - Reside on a single physical disk that is accessible from all cluster nodes
[Diagram: enhanced concurrent volume group ecmvg (for example, Oracle RAC voting
disks) with logical volumes lv1, lv2, and lv3 providing multi-node disk heartbeat
networks MNDHB 1, MNDHB 2, and MNDHB 3 among nodes n1, n2, and n3]

Figure 2-24. Non-IP network configuration rules: Multi-node
Notes:
Fencing
When a cluster partition occurs, HACMP determines the losing side and fences those
nodes away from the shared storage.
Fencing uses the same function that LVM uses when quorum is lost on a mirrored volume
group with quorum on: access to the disks is blocked and any further I/O attempts fail.
Note that the volume group does not have to be defined with quorum and mirroring.
The losing side is determined by a simple quorum calculation: a node must have access
to at least one more than half of the disks. For example, with three MNDHB disks, a node
must be able to reach at least two of them.

Quorum checking and disk fencing


The quorum check is performed only on ECM volume groups used in a concurrent (OOAN)
resource group.


It does not apply to ECM used for fast disk takeover.
The quorum check happens automatically if there are ECM volume groups in an OOAN
resource group with one or more disk heartbeat disks.
There is no way to disable it.
The quorum check is performed at two times:
1. When a node comes online
2. When a node gets an indication that another node has failed (with which it shared one
   or more OOAN resource groups)
The quorum check is on disks used for disk heartbeating, not on the total number of disks
in the volume group.
The anticipated use is with Oracle RAC, with the voting files contained in a three-disk
ECM volume group.
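As a sketch of how such a logical volume might be created (the volume group, logical
volume, and disk names below are hypothetical; see the Planning Guide for the exact
procedure on your release), note that mirroring, mirror write consistency, and write
verify are all left off so that LVM does not interfere with the heartbeats:

   # One logical partition of at least 32 MB on a single shared disk in the ECM VG
   # -w n turns off mirror write consistency; -v n turns off write verify
   mklv -y mndhb_lv1 -w n -v n ecmvg 1 hdisk4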


Persistent node IP labels

- An IP label associated with a particular node
- Useful for administrative purposes:
  - Provides a highly available IP address associated with a particular node
  - Allows external monitoring tools (for example, Tivoli) and administrative scripts
    to reach a particular node
- Assigned, via IP aliasing, after node synchronization, to a
  communications interface on the node
- HACMP will strive to keep the persistent node IP label available
  on that node; it is never moved to another node.
- Maximum of one persistent node IP label per network per node
- Persistent node IP labels must adhere to subnet rules:
  - Persistent node IP labels must not be in any non-service interface subnet

Figure 2-25. Persistent node IP labels
Notes:
Rationale
In earlier releases of HACMP, the only way to guarantee that a known IP address would
always be available on each node for administrative purposes was to configure a
separate network, which was never used for IPAT. Such a configuration limits the
usefulness of the administrative network because the loss of that network adapter
would result in an inability to reach the node for administrative purposes.
Additionally, there are applications and functions that require a reliable address that is
used to reach a specific node, one that does not move from one node to another. GLVM
is one AIX function that requires an address to be bound to a node but kept highly
available amongst the adapters on that network. Applications such as Tivoli Management
Region (TMR) require that a static IP address be assigned to each node they manage.
This is accomplished through a persistent address.


Persistent IP labels
As an optional network component, users can configure persistent node IP labels.
These are IP aliases that are configured on a node and kept available as long as at
least one communication interface remains active on the associated network.
Persistent IP labels can be used with IPAT. Persistent IP labels do not move as part of
IPAT from node to node, but will move to another interface on the same node in the
event an adapter failure occurs.

If Cluster Services is not up
If Cluster Services is not up on the node, then the persistent node IP label is still aliased
to a communication interface; although the failure of the underlying communication
interface will, of course, cause the persistent node IP label to become unavailable. This
is done via rc.init in the /etc/inittab.

More on persistent IP labels


A persistent node IP label is an IP alias that has been assigned to a specific node in the
cluster, and always stays on the same node. The persistent node IP label coexists on
an interface with the non-service or service label that is already there. Persistent node
IP labels do not require installation of additional physical adapters, and they are not
included in any resource groups (the clients of a concurrent access resource group
might be configured to use the persistent node IP label). A persistent node IP label is
intended primarily to provide administrative access to the node, but also plays a role in
HATivoli and HACMP/XD for GLVM clusters.
Persistent node IP labels are supported on the following types of IP-based networks
only:
- Ethernet
- Token Ring
- FDDI
- ATM LANE


Let's review: Topic 2

1. True or False?
   Clusters must always be configured with a private IP network for
   HACMP communication.
2. Which of the following options are true statements about
   communication interfaces? (Select all that apply.)
   a. Has an IP address assigned to it using the AIX TCP/IP SMIT screens
   b. Might have more than one IP address associated with it
   c. Sometimes but not always used to communicate with clients
   d. Always used to communicate with clients
3. True or False?
   Persistent node IP labels are not supported for IPAT via IP
   replacement.
4. True or False?
   There are no exceptions to the rule that, on each node, each NIC on
   the same LAN must have an IP address in a different subnet.

Figure 2-26. Let's review: Topic 2
Notes:


2.3 Implementing IP address takeover (IPAT)


Implementing IP Address Takeover

After completing this topic, you should be able to:
- Describe IPAT via IP aliasing and IPAT via IP replacement:
  - How to configure a network to support them
  - What happens when:
    - There are no failed components
    - A communication interface fails
    - A communication interface recovers
    - A node fails
    - A node recovers
- Select the style of IPAT that is appropriate in a given context
- Describe how the AIX boot sequence changes when IPAT is
  configured in a cluster
- Describe the importance of consistent IP addressing and
  labeling conventions

Figure 2-27. Implementing IP Address Takeover
Notes:
Topic 3 objectives
This section explains how to configure both variants of IP Address Takeover.


IP Address Takeover

- Each highly available application is likely to require its own IP
  address (called a service IP address).
- This service IP address is placed in the application's resource
  group.
- HACMP is responsible for ensuring that the service IP address is available on the node
  currently responsible for the resource group.

[Diagram: a resource group containing a volume group, file systems, NFS exports and
mounts, a service IP label, and an application server]

Figure 2-28. IP Address Takeover
Notes:
Service IP address
Most highly available applications work best, from the user's perspective, if the
application's IP address never changes. This capability is provided by HACMP using a
feature called IP Address Takeover. An IP address is selected that is associated with
the application. This IP address is called a service IP address because it is used to
deliver a service to the user. It is placed in the application's resource group. HACMP
then ensures that the service IP address is kept available on whichever node the
resource group is currently on. The process of moving an IP address to another NIC or
to a NIC on another node is called IP address takeover (IPAT).

Applications that do not need IPAT


Although very common, IPAT is an optional behavior that must be configured into the
cluster. An example of an application that might not require IPAT is a database server
for which the client software can be configured to check multiple IP addresses when it is
looking for the server.
Also, IPAT is not supported for resource groups configured with a Startup Policy of
Online on All Available Nodes (concurrent access) because the application in such a
resource group is active on all the nodes that are currently up. Consequently, clients of a
concurrent access resource group must be capable of finding their server by checking
multiple IP addresses.


Two ways to implement IPAT

- IPAT via IP aliasing:
  HACMP adds the service IP address to an (AIX) interface IP address using AIX's IP
  aliasing feature:
      ifconfig en0 alias 192.168.1.2
- IPAT via IP replacement:
  HACMP replaces an (AIX) interface IP address with the service IP address:
      ifconfig en0 192.168.1.2

Figure 2-29. Two ways to implement IPAT
Notes:
IPAT via IP aliasing
IPAT via IP aliasing takes advantage of AIX's ability to have multiple IP addresses
associated with a single NIC. This ability, called IP aliasing, allows HACMP to move
service IP addresses between NICs (or between nodes) without having to either change
existing IP addresses on NICs or worry about whether or not there is already a service
IP label on the NIC.

IPAT via IP replacement
IPAT via IP replacement involves replacing the IP address currently on a NIC with a
service IP address. This approach supports a facility called hardware address takeover,
which we will discuss shortly. It has the limitation of supporting only one service IP label
per adapter, which restricts the number of resource groups that can use IPAT and, in
practical terms, the number of service IP labels in a resource group.
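The underlying AIX operations are easy to picture (a sketch with hypothetical
addresses; HACMP performs the equivalent steps itself, so you would not normally type
these):

   # IPAT via IP aliasing: the base address stays and the service address is added
   ifconfig en0 alias 9.47.87.22 netmask 255.255.255.0

   # IPAT via IP replacement: the base address is replaced by the service address
   ifconfig en0 9.47.87.22 netmask 255.255.255.0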


Which is better?
We will examine the advantages and disadvantages of each method in the next few
pages. Remember that the question is not which is better but rather which is better
suited to a particular context.


IPAT via IP aliasing configuration

- Define an IP address for each network interface in the AIX ODM.
  - Each interface IP address must be in a different logical IP subnet.*
  - Define these addresses in the /etc/hosts file and configure them in HACMP
    topology as communication interfaces.
- Define service addresses in /etc/hosts and in HACMP resources.
  - They must not be in the same logical IP subnet as any of the interface IP
    addresses.
  - HACMP will configure them to AIX when needed.

Before starting the application resource group:

[Diagram: node1 with interfaces 192.168.10.1 (ODM) and 192.168.11.1 (ODM);
node2 with interfaces 192.168.10.2 (ODM) and 192.168.11.2 (ODM)]

* Refer to the earlier discussion of heartbeating and failure diagnosis for an
explanation of why.
Figure 2-30. IPAT via IP aliasing configuration
Notes:
Requirements
Before configuring an HACMP network to use IPAT via IP aliasing, ensure that:
- The network is a type that supports IPAT via IP aliasing:
  - Ethernet
  - Token-Ring
  - FDDI
  - SP switch
- No service IP labels on the network require hardware address takeover (HWAT)
- The non-service IP addresses on each node are all on separate IP subnets
- The service IP addresses are on separate IP subnets from all non-service IP
  addresses


IPAT via aliasing subnet rules example
The interfaces must all be on different subnets, and the service IP labels cannot be in
any of the non-service subnets.
For example, in a cluster with one network using IPAT via aliasing, where each node
has two communication interfaces and there are two service IP labels, the network can
require up to four subnets: one for each set of non-service IP labels and one for each
service label (three would be required if the service addresses were on the same
subnet):

Node name         NIC    IP label     IP address
node1             en0    n1boot1      192.168.10.1
node1             en1    n1boot2      192.168.11.1
node2             en0    n2boot1      192.168.10.2
node2             en1    n2boot2      192.168.11.2
Service address   --     appA-svc     9.47.87.22
Service address   --     appB-svc     9.47.88.22

Subnet            IP labels
192.168.10/24     n1boot1, n2boot1
192.168.11/24     n1boot2, n2boot2
9.47.87/24        appA-svc
9.47.88/24        appB-svc

Hardware address takeover
HWAT is not supported on networks that use IPAT via IP aliasing (HWAT is discussed in
detail in Appendix C along with IPAT via replacement). The reason is that the service
IP label is configured as an alias on top of the existing interface. Because the
underlying interface's IP address is not changed, its hardware address is also expected
to remain the same. Use HWAT when you have a networking component that does not
use gratuitous ARP, which is the mechanism that IPAT via aliasing relies upon.

Planning considerations
A node on a network that uses IPAT via aliasing can be the primary node for multiple
resource groups on the same network, regardless of the number of actual boot
interfaces on the node. Still, users should plan their networks carefully to balance the
RG load across the cluster.

Additional background information
HACMP tries to keep the number of service IP labels on each NIC roughly equal,
although it has no way to predict which service IP labels will be most popular.
Consequently, any load balancing is the responsibility of the cluster administrator (and
will require customization, which is beyond the scope of this course).


IPAT via IP aliasing at startup of resource group

When the resource group comes up on a node, HACMP
aliases the service IP label onto one of the node's available
(that is, currently functional) interfaces.

After starting the application resource group:

[Diagram: the service label 9.47.87.22 is aliased onto node1's interface with
192.168.10.1 (ODM); 192.168.11.1 (ODM) is unchanged, and node2 still has
192.168.10.2 (ODM) and 192.168.11.2 (ODM)]

Figure 2-31. IPAT via IP aliasing at startup of resource group
Notes:
Operation
HACMP uses AIX's IP aliasing capability to alias service IP labels included in resource
groups onto interfaces (NICs) on the node that runs the resource group. With aliasing,
the non-service IP address (stored in the ODM) is still present.
Note that one incidental advantage of IPAT via IP aliasing is that the non-service IP
addresses do not need to be routable from the client/user systems.
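One quick way to see the result on a running node is to list the interface's addresses
(a sketch with hypothetical addresses; the exact output format varies by AIX level):

   # ifconfig en0 shows both the base (ODM) address and the service alias, e.g.:
   #   inet 192.168.10.1 netmask 0xffffff00 broadcast 192.168.10.255
   #   inet 9.47.87.22 netmask 0xffffff00 broadcast 9.47.87.255
   ifconfig en0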

Applications should use the service address


Strongly discourage users from using anything other than approved service IP
addresses when contacting the cluster because the NICs associated with these
non-service IP addresses might fail or the application might move to a different node
while the non-service IP labels remain behind on the original node.


IPAT via IP aliasing after an interface fails

If the communication interface being used for the service IP label
fails, HACMP aliases the service IP label onto one of the node's
remaining available (currently functional) non-service (ODM)
interfaces.
The eventual recovery of the failed boot adapter makes it
available again for future use.

[Diagram: on node1, the interface with 192.168.10.1 (ODM) has failed; the service
label 9.47.87.22 is now aliased onto the interface with 192.168.11.1 (ODM). node2
still has 192.168.10.2 (ODM) and 192.168.11.2 (ODM)]

Figure 2-32. IPAT via IP aliasing after an interface fails
Notes:
Interface failure
If a communication interface fails, HACMP moves the service IP addresses to another
communication interface, which is still available, on the same network. If no remaining
available NICs are on the node for the network, then HACMP initiates a fallover for that
resource group.

Interface failure with IPAT


The failure of an interface is generally handled locally on the node that experienced the
failure by moving the IP address to a still available interface. The outage in this case is
considerably shorter than the one that occurs when a node fails.


User's perspective
Because existing TCP/IP sessions generally recover cleanly from this sort of
failure/move-IP-address operation, users might not even notice the outage if they are
not interacting with the application at the time of the failure.

Failure of all interfaces on a node
If the last remaining interface on a node fails, then HACMP triggers a fallover for any
resource groups with service IP addresses on the failed interface's network.


IPAT via IP aliasing after a node fails

If the resource group's node fails, HACMP moves the resource
group to a new node and aliases the service IP label onto one of
the new node's available (currently functional) non-service (ODM)
communication interfaces.

[Diagram: node1 has failed; on node2, the service label 9.47.87.22 is now aliased
onto the interface with 192.168.10.2 (ODM); 192.168.11.2 (ODM) is unchanged]

Figure 2-33. IPAT via IP aliasing after a node fails
Notes:
Node failure with IPAT
When a node that is running an IPAT-enabled resource group fails, HACMP moves the
resource group to an alternative node. Because the service IP address is in the
resource group, it moves with the rest of the resources to the new node. The service IP
address is aliased onto an available (currently functional) communication interface on
the takeover node.

Node failure from the user's perspective
The users experience a short outage and then, from their perspective, the same server
is back up and running.
You probably shouldn't correct the users when they mention that the server was down for
a few minutes earlier when you happen to know that it is still down and undergoing
repair! Strictly speaking, the service was down for a few minutes and is now up again.

IPAT via IP aliasing: Distribution preference for service IP label aliases

- A network-level attribute that controls the placement of service IP
  labels onto communication interfaces
- Useful for:
  - Load balancing
  - Isolating traffic for VPN requirements
- If there are insufficient interfaces available to satisfy the preference,
  HACMP allocates service IP label aliases and persistent IP labels to
  an existing active network interface card
- Four choices:
  - Anti-Collocation
  - Collocation
  - Collocation with Persistent Label
  - Anti-Collocation with Persistent Label

Figure 2-34. IPAT via IP aliasing: Distribution preference for service IP label aliases
Notes:
Distribution preference for service IP label aliases
You can configure a distribution preference for the placement of service IP labels that
are configured in HACMP. Starting with HACMP 5.1, HACMP lets you specify the
distribution preference for the service IP label aliases.
A distribution preference for service IP label aliases is a network-wide attribute used to
control the placement of the service IP label aliases on the communication interfaces on
the nodes in the cluster. Configuring a distribution preference for service IP label aliases
provides:
- Load balancing:
Enables you to customize the load balancing for service IP labels in the cluster,
taking into account the persistent IP labels previously assigned on the nodes.


- VPN requirements:
Enables you to configure the type of the distribution preference suitable for the VPN
firewall external connectivity requirements.
HACMP will try to meet preferences, but will always keep service labels active:
The distribution preference is exercised as long as there are acceptable network
interfaces available. However, HACMP always keeps service IP labels active, even if
the preference cannot be satisfied.

Four possible values for this attribute


You can specify in SMIT the following distribution preferences for the placement of
service IP label aliases:
- Anti-Collocation:
This is the default. HACMP distributes all service IP label aliases across all
non-service IP labels using a least loaded selection process.
- Collocation:
HACMP allocates all service IP label aliases on the same communication interface
(adapter).
- Collocation with Persistent Label:
All service IP label aliases are allocated on the same communication interface that
is hosting the persistent IP label. This option may be useful in VPN firewall
configurations where only one interface is granted external connectivity and all IP
labels (persistent and service) must be allocated on the same interface card.
- Anti-Collocation with Persistent Label:
HACMP distributes all service IP label aliases across all active communication
interfaces that are not hosting the persistent IP label. HACMP will place the service
IP label alias on the interface that is hosting the persistent label only if no other
network interface is available.

How to configure
You use the extended path to configure distribution preferences. Follow this path:
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP Extended Resources Configuration -> Configure Resource Distribution
Preferences -> Configure Service IP Labels/Address Distribution Preference -> pick
your network -> toggle through the Distribution Preference menu options.


IPAT via IP aliasing summary

- Configure each node's communication interfaces with non-service IP
  addresses (each on a different subnet).
- Assign service IP labels to resource groups as appropriate.
  - Must be on a separate subnet from non-service IP addresses.
  - There is a total limit of 256 IP addresses known to HACMP and 64 resource
    groups. Within those overall limits:
    - There is no limit on the number of service IP addresses in a resource group.
    - There is no limit on the number of resource groups with service IP labels.
- HACMP assigns service IP labels to communication interfaces
  using IP aliases based on resource group rules and available
  hardware.
- IPAT via IP aliasing requires that hardware address takeover
  is not configured.
- IPAT via IP aliasing requires gratuitous ARP support.

Figure 2-35. IPAT via IP aliasing summary
Notes:
Summary
The visual summarizes IPAT via IP aliasing. Some additional considerations are
discussed as follows.

Advantages
Probably the most significant advantage to IPAT via IP aliasing is that it supports
multiple service IP labels per network per resource group on the same communication
interface and allows a node to easily support quite a few resource groups. In other
words, IPAT enables you to share several service labels on one interface. Thus, it can
require fewer adapters and interfaces than IPAT via replacement.


Disadvantages
Probably the most significant disadvantage is that IPAT via IP aliasing does not support
hardware address takeover. You will rely on gratuitous ARP as the means of updating
the ARP entries when IPAT occurs.
In addition, because you must have a subnet for each interface and a subnet for each
service IP label, IPAT via IP aliasing can require a lot of subnets.


IPAT via IP replacement overview

- AIX boots with a non-service (ODM) IP address on each interface.
- When Cluster Services are started, the non-service IP labels are
  replaced with service IP labels for the resource groups that are
  being brought online.
  - Only one service IP label can be on an interface at a time.
- If the interface hosting a service IP label fails, HACMP will attempt
  to replace the non-service IP label of another interface with the
  service IP label, in order to maintain the service IP label.
- Configuration rules:
  - Each service IP label must be in the same subnet as a non-service label subnet.
  - There must be at least as many interfaces on each node as there are service IP labels.
  - All service IP labels must be in the same subnet.
- Advantages:
  - Supports hardware address takeover
  - Requires fewer subnets
- Disadvantages:
  - Requires more interfaces to support multiple service IP labels
  - Is less flexible
Figure 2-36. IPAT via IP replacement overview
Notes:
History
In the beginning, IPAT via IP replacement was the only form of IPAT available. IPAT via
IP aliasing became available when AIX could support multiple IP addresses associated
with a single NIC via IP aliasing. Because IPAT via IP aliasing is more flexible and
usually requires fewer network interface cards, IPAT via IP replacement is no longer the
recommended method. Many existing cluster implementations still use IPAT via
replacement. When upgrading to versions of HACMP that support IPAT via aliasing,
consider converting IPAT via replacement configurations to aliasing only if there is
another compelling reason to do so. Otherwise, leave the IPAT via replacement
configuration as it is. Any new implementations should strongly consider using IPAT via
aliasing.
This visual gives a brief overview of IPAT via IP replacement. A detailed discussion can
be found in Appendix C.


Configuration rules
The visual summarizes the configuration rules. Notice that they are almost the opposite
of the rules for IPAT via IP aliasing.

Advantages
Probably the most significant advantage of IPAT via IP replacement is that it supports
hardware address takeover (HWAT). HWAT may be needed if your local clients or
routers do not support gratuitous ARP. This will be discussed in a few pages.
Another advantage is that it requires fewer subnets. If you are limited in the number of
subnets available for your cluster, this may be important.
Note: If reducing the number of subnets needed is important, another alternative may
be to use heartbeating via aliasing, see Heartbeating over IP aliases on page 2-37.

Disadvantages
Probably the most significant disadvantages are that IPAT via IP replacement limits the
number of service IP labels per subnet per resource group on one communications
interface to one and makes it rather expensive (and complex) to support lots of
resource groups in a small cluster. In other words, you need more network adapters to
support more applications.


Service IP address examples

Assume a subnet mask of 255.255.255.0.

IP addresses on   IP addresses on   Valid service IP addresses   Valid service IP addresses
first node        second node       for IPAT via IP aliasing     for IPAT via IP replacement

192.168.5.1       192.168.5.2       192.168.7.1                  192.168.5.3 and 192.168.5.97
192.168.6.1       192.168.6.2       192.168.183.57                 OR
                                    198.161.22.1                 192.168.6.3 and 192.168.6.97

192.168.5.1       192.168.5.2       192.168.4.1                  192.168.5.3 and 192.168.5.97
192.168.6.1       192.168.6.2       192.168.10.1                   OR
192.168.7.1       192.168.7.2       192.168.183.57               192.168.6.3 and 192.168.6.97
192.168.8.1                         198.161.22.1                   OR
192.168.9.1                                                      192.168.7.3 and 192.168.7.97

Figure 2-37. Service IP address examples
Notes:
Service IP address rules and examples
The rules for service IP addresses are straightforward. It comes down to which subnet
the service IP address can be in.
For IPAT via replacement, the service IP addresses must be in a subnet that is the
same as one of the non-service IP address subnets.
For IPAT via aliasing, the service IP addresses must be in a subnet that is different
from the non-service IP address subnets.
The table above provides some examples. Notice that for a given set of IP addresses
on the interfaces (AIX ODM), service IP labels that are acceptable for IPAT via IP
aliasing are not acceptable for IPAT via replacement, and vice versa. Also notice that the
IPAT via replacement column contains only subnets that are the same as the subnets
in the first two columns, while the IPAT via aliasing column contains only subnets that
are different from the subnets in the first two columns.

Adopt labeling/naming conventions

- HACMP clusters tend to have quite a few IP labels and
  other names associated with them.
- Adopt appropriate labeling and naming conventions. For example:
  - Node-resident labels should include the node's name:
    usaboot1, usaboot2, ukbase1, ukbase2, node1boot1, node1boot2
  - Service IP labels that move between nodes should describe the
    application rather than the node:
    web1-svc, infodb-svc, app1, prod1
  - Persistent IP labels should include the node name (because they will not
    be moved to another node) and should identify that they are persistent:
    usa-per, uk-per, node1adm, usaadmin
- Why?
  - Conventions prevent mistakes.
  - Preventing mistakes improves availability!

Figure 2-38. Adopt labeling/naming conventions
Notes:
Using IP labeling and naming conventions
Again, the purpose of HACMP is to create a highly available environment for your
applications. A naming convention can make it easier for humans to understand the
configuration. This can reduce mistakes, leading to better availability.
Never underestimate the value of a consistent labeling or naming convention. It can
prevent mistakes which can, in turn, prevent outages.


Hostname resolution

All of the cluster's IP labels must be defined in every cluster node's /etc/hosts
file (the visual shows the same file on both nodes):

   127.0.0.1       loopback localhost
   # cluster explorers
   # netmask 255.255.255.0

   # usa boot addresses
   192.168.15.29   usaboot1
   192.168.16.29   usaboot2

   # uk boot addresses
   192.168.15.31   ukboot1
   192.168.16.31   ukboot2

   # persistent IP labels
   192.168.5.29    usa-per
   192.168.5.31    uk-per

   # Service IP labels
   192.168.5.92    xweb-svc
   192.168.5.70    yweb-svc

   # test client node
   192.168.5.11    test
Figure 2-39. Hostname resolution
Notes:
/etc/hosts
Make sure that the /etc/hosts file on each cluster node contains all of the IP labels used
by the cluster (you do not want HACMP to be in a position where it must rely on an
external DNS server to do IP label to address mappings).

But I'm using DNS / NIS
If NIS or DNS is in operation, IP label lookup defaults to a nameserver system for name
and address resolution. However, if the nameserver is accessed through an interface
that has failed, the request does not complete and eventually times out. This might
significantly slow down HACMP event processing.
To ensure that the cluster event completes successfully and quickly, HACMP disables
NIS or DNS hostname resolution by setting the following AIX environment variable
during service IP label swapping:

NSORDER = local
As a result, the /etc/hosts file of each cluster node must contain all HACMP-defined IP
labels for all cluster nodes.

Maintaining /etc/hosts
The easiest way to ensure that all of the /etc/hosts files contain all of the required
addresses is to get one /etc/hosts file set up correctly and then copy it to all of the other
nodes, or use the file collections facility of HACMP 5.x.


Other configurations: Etherchannel (1 of 2)

[Diagram: node1 (n1boot1 192.168.1.1 on en4, n1boot2 192.168.2.1 on en5) and
node2 (n2boot1 192.168.1.2 on en4, n2boot2 192.168.2.2 on en5). On each node,
en4 and en5 are Etherchannels built from physical interfaces en0-en3, cabled
through two switches (sw1 and sw2). Shared storage provides heartbeat on disk.
Applications appA (192.168.3.10) and appB (192.168.3.20) are reached through
the channels.]

Figure 2-40. Other configurations - Etherchannel (1 of 2)
Notes:
Etherchannel details
Etherchannel is a trunking technology that allows grouping several Ethernet links.
Traffic is distributed across the links, providing higher performance and redundant
parallel paths. When a link fails, traffic is redirected to the remaining links within the
channel without user intervention and with minimal packet loss.
EtherChannel was invented by Kalpana in the early 1990s; Kalpana was bought by Cisco
in 1994. Other popular trunking technologies exist, such as Adaptec's Duralink trunking
and Nortel's MultiLink Trunking (MLT).
Interoperability between technologies is a problem.
A standard, IEEE 802.3ad, was finalized in 2000.
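On AIX, an EtherChannel pseudo-adapter can be created with mkdev or through
smitty etherchannel. A sketch (the adapter names are hypothetical; verify the
attributes for your AIX level):

   # Aggregate ent0 and ent1 into one channel, with ent2 as a backup adapter
   mkdev -c adapter -s pseudo -t ibm_ech \
         -a adapter_names=ent0,ent1 -a backup_adapter=ent2
   # The resulting pseudo-adapter (for example, ent3) is then given an IP
   # address like any other Ethernet interface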


Other configurations: Etherchannel (2 of 2)

- If configured correctly, NIC failures go unnoticed by HACMP;
  that is, they are handled by the Etherchannel technology.
  - Minimum downtime is experienced; a NIC failure should not result in
    clients needing to reconnect.
- Gives you the performance improvement of link aggregation
  while allowing the hardware to deal with NIC failures.
- Hardware address takeover is not supported when
  implemented with IPAT via replacement.

Figure 2-41. Other configurations - Etherchannel (2 of 2)
Notes:
Very useful information can be found in the following documents (although dated, the
information is still very relevant).
A Techdoc regarding experiences and configuration:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD101785
The Flash announcing support for Etherchannel with HACMP:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10284


Other configurations: Base virtual Ethernet

[Diagram: two frames, each with an AIX client LPAR served by two Virtual I/O Servers
(VIOS1 and VIOS2). In each VIOS, physical adapters ent0 and ent1 (phy) are combined
into a link aggregation (ent3, LA), which backs a Shared Ethernet Adapter (ent4, SEA)
bridging a virtual adapter (ent2, virt); ent5 (virt) serves as the SEA control channel.
Each client LPAR's en0 rides on a virtual adapter (ent0, virt) through the hypervisor,
and the VIOS SEAs connect to redundant external Ethernet switches.]

Figure 2-42. Other configurations: Base virtual Ethernet
Notes:
Where to get more information
To get more information on this configuration, consult the following resources:
Redbooks:
- Implementing HACMP Cookbook
  http://publib-b.boulder.ibm.com/abstracts/sg246769.html?Open
- Advanced POWER Virtualization on IBM System p5: Introduction and Configuration
  http://www.redbooks.ibm.com/abstracts/sg247940.html?Open
Other education:
- AU620, HACMP System Administration III: Virtualization and Disaster Recovery


HACMP view of virtual Ethernet

[Diagram: two HACMP nodes (client LPARs on Frame 1 and Frame 2), each seeing a
single interface en0 on network net_ether_0. Node 1 has base address 192.168.100.1
with persistent IP 9.19.51.10 and service IP 9.19.51.20; Node 2 has base address
192.168.100.2 with persistent IP 9.19.51.11 and service IP 9.19.51.21. Topology
Services heartbeats across net_ether_0, and a non-IP network (serial_net1) connects
the nodes. The underlying VIOS infrastructure (SEAs, link aggregations, control
channels) is invisible to HACMP.]

Figure 2-43. HACMP view of virtual Ethernet
Notes:
Additional information
Single adapter Ethernet networks in HACMP require the use of a netmon.cf file.
Note that there does not have to be link aggregation at the VIO Server level.
You could configure a single NIC and rely on the other VIO Server for redundancy.


Other configurations: Single IP adapter nodes


Single IP adapter nodes might seem attractive because they appear to reduce the cost of the cluster.
The cost reduction is an illusion:
1. A node with only a single adapter on a network is a node with a single point of failure: the single adapter.
2. Clusters with unnecessary single points of failure tend to suffer more outages.
3. Unnecessary outages cost (potentially quite serious) money.
One of the fundamental cluster design goals is to reduce unnecessary outages by avoiding single points of failure.
HACMP requires at least two NICs per IP network for failure diagnosis.
Clusters with fewer than two NICs per IP network, though supported, are not recommended*.
* Virtual Ethernet and certain Cluster 1600 SP Switch-based clusters are supported with only one adapter per network.
Figure 2-44. Other configurations: Single IP adapter nodes

Notes:
Single IP adapter nodes
It is not unusual for a customer to try to implement an HACMP cluster in which one or more of the cluster nodes has only a single network adapter (the motivation is usually the cost of the adapter, but the additional cost of a backup system with enough PCI slots for the second adapter can also be the issue).
The situation is actually quite simple: with the exception of virtual Ethernet implementations and certain Cluster 1600 clusters that use the SP Switch facility, any cluster with only one NIC on a node for a given network has a single point of failure, the solitary NIC, and is not recommended.
Nodes with only a single NIC on an IP network are, at best, a false economy. At worst, they are a fiasco waiting to happen, as the lack of a second NIC on one or more of the nodes could lead to extended cluster outages and generally strange behavior (including HACMP failing to detect failures that would have been detected had all nodes had at least two NICs per IP network).

Talk to your network administrator


Explain how HACMP uses networks.
Ask for what you need:
IPAT via IP Aliasing:
  Service IP labels/addresses in the production network for client connections to the cluster applications
  Additional subnets for non-service interface (ODM) labels
    One per network interface on the node with the most network adapters.
    These do not need to be routable.
IPAT via IP Replacement:
  Service IP labels/addresses
  Interface IP label for each network adapter (one must be in the same subnet as the service label)
  A different subnet for each interface
    One per adapter on the node with the most adapters.
    Only the subnet containing the service label need be routable.
Persistent node IP label for each node on at least one network (very useful but optional)
Ask early (getting subnets assigned might take some time).


Figure 2-45. Talk to your network administrator

Notes:
Getting IP addresses and subnets
Unless you happen to be the network administrator (in which case, you can feel free to
spend time talking to yourself), you need to get the network administrator to provide you
with IP addresses for your cluster. The requirements imposed by HACMP on IP
addresses are rather unusual and might surprise your network administrator; so be
prepared to explain both what you want and why you want it. Also, ask for what you
want well in advance of the date that you need it because it might take some time for
the network administrator to find addresses and subnets for you that meet your needs.
Do not accept IP addresses that do not meet the HACMP configuration rules. Even if
you can get them to appear to work, they almost certainly will not work at a point in time
when you can least afford a problem.
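As a sketch of what such a request might look like for a two-node cluster with two NICs per node using IPAT via IP aliasing (all labels and subnets below are illustrative, loosely following the addresses used in this unit's visuals):

# Non-service (ODM) addresses: one subnet per NIC; need not be routable
192.168.100.1   node1-if1    # node1, en0
192.168.101.1   node1-if2    # node1, en1
192.168.100.2   node2-if1    # node2, en0
192.168.101.2   node2-if2    # node2, en1
# Service address: routable production subnet, distinct from the above
9.19.51.20      xweb         # moves between nodes with the resource group
# Persistent addresses: one per node, typically on the routable subnet
9.19.51.10      node1-per
9.19.51.11      node2-per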


Changes to AIX start sequence


The startup sequence of AIX networking is changed when IPAT is enabled.

Standard AIX start sequence:
/etc/inittab
  /sbin/rc.boot
    cfgmgr
    /etc/rc.net
      cfgif
  /etc/rc
    mount all
  /etc/rc.tcpip
    daemons start
  /etc/rc.nfs
    daemons start
    exportfs

IPAT changes the init sequence:
/etc/inittab
  /sbin/rc.boot
    cfgmgr
    /etc/rc.net (modified for IPAT)
      exit 0
  /etc/rc
    mount all
  /usr/sbin/cluster/etc/harc.net
    /etc/rc.net -boot
      cfgif
  < Cluster Services startup > clstrmgr
    event node_up
      node_up_local
        get_disk_vg_fs
        acquire_service_addr
          telinit -a
            /etc/rc.tcpip
              daemons start
            /etc/rc.nfs
              daemons start
              exportfs

Figure 2-46. Changes to AIX start sequence

Notes:
/etc/inittab changes
A node with a network configured for IPAT must not start inetd until HACMP has had a chance to assign the appropriate IP addresses to the node's interfaces. Consequently, the AIX start sequence is modified slightly if a node has a resource group that uses either form of IPAT.


Changes to /etc/inittab
init:2:initdefault:
brc::sysinit:/sbin/rc.boot 3 >/dev/console 2>&1 # Phase 3 of system boot
. . .
srcmstr:23456789:respawn:/usr/sbin/srcmstr # System Resource Controller
harc:2:wait:/usr/es/sbin/cluster/etc/harc.net # HACMP for AIX network startup
rctcpip:a:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
rcnfs:a:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
. . .
qdaemon:a:wait:/usr/bin/startsrc -sqdaemon
writesrv:a:wait:/usr/bin/startsrc -swritesrv
. . .
ctrmc:2:once:/usr/bin/startsrc -s ctrmc > /dev/console 2>&1
ha_star:h2:once:/etc/rc.ha_star >/dev/console 2>&1
dt:2:wait:/etc/rc.dt
cons:0123456789:respawn:/usr/sbin/getty /dev/console
xfs:0123456789:once:/usr/lpp/X11/bin/xfs
hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit # HACMP for AIX These must be the last
entries of run level a in inittab!
pst_clinit:a:wait:/bin/echo Created /usr/es/sbin/cluster/.telinit > /dev/console # HACMP for
AIX These must be the last entries of run level a in inittab!

Figure 2-47. Changes to /etc/inittab

Notes:
HACMP 5.x changes to /etc/inittab
The visual shows excerpts from /etc/inittab on a system running AIX 6.1 and HACMP 5.4.1.
HACMP 5.1 added the harc entry to the /etc/inittab file, which runs harc.net to configure the network interfaces. Also, starting in HACMP 5.1, some of the other inittab entries were changed to run in run level a. These are invoked by HACMP when it is ready for the TCP/IP daemons to run. The final two lines use the touch command to create a marker file when all of the run level a items have been run. HACMP waits for this marker file to exist so that it knows when the run level a items have been completed.
HACMP 5.3 made some additional changes to the inittab file. In HACMP 5.3 and later, the HACMP daemons are running all the time, even before you start the cluster. These daemons are started by the ha_star and hacmp entries in the inittab file.


Common TCP/IP configuration problems


Subnet masks are not consistent for all HA network adapters.
Interface IP labels on one node are placed on the same subnet.
Service and interface IP labels are placed in the same subnet in IPAT via IP aliasing networks.
Service and interface IP labels are placed in different subnets in IPAT via IP replacement networks.
Ethernet frame type is set to 802.3 (this includes EtherChannel).
Ethernet speed is not set uniformly or is set to autodetect.
The contents of /etc/hosts differ between the cluster nodes.
A different version of perl than the one used by the HACMP verification tools is installed (resulting in what appears to be a network communications problem).
Figure 2-48. Common TCP/IP configuration problems

Notes:
Configuration problems
The visual shows some common IP configuration errors to watch out for.
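A few standard AIX commands help catch several of the problems above (the adapter name is an assumption; run these on every node and compare the output):

# Speed/duplex should be set explicitly and identically on every NIC
lsattr -El ent0 -a media_speed
# Subnet masks and address placement, per node
netstat -in
# Quick consistency check of /etc/hosts across nodes (compare checksums)
sum /etc/hosts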


Let's review: Topic 3


1. True or False?
   A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.

2. True or False?
   All networking technologies supported by HACMP support IPAT via IP aliasing.

3. True or False?
   All networking technologies supported by HACMP support IPAT via IP replacement.

4. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1, and the right node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2, which of the following options are valid service IP addresses if IPAT via IP aliasing is being used? (Select all that apply.)
   a. (192.168.20.3 and 192.168.20.4) or (192.168.21.3 and 192.168.21.4)
   b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
   c. 192.168.22.3 and 192.168.22.4
   d. 192.168.23.3 and 192.168.24.3

5. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1, and the right node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2, which of the following options are valid service IP addresses if IPAT via IP replacement is being used? (Select all that apply.)
   a. (192.168.20.3 and 192.168.20.4) or (192.168.21.3 and 192.168.21.4)
   b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
   c. 192.168.22.3 and 192.168.22.4
   d. 192.168.23.3 and 192.168.24.3
Figure 2-49. Let's review: Topic 3

Notes:


2.4 The impact of IPAT on clients


The impact of IPAT on clients


After completing this topic, you should be able to:
Explain how user systems are affected by IPAT-related operations
Describe the ARP cache issue
Explain how gratuitous ARP usually deals with the ARP cache issue
Explain three ways to deal with the ARP cache issue if gratuitous ARP does not provide a satisfactory resolution:
  Configure clinfo on the client systems
  Configure clinfo within the cluster
  Configure Hardware Address Takeover within the cluster

Figure 2-50. The impact of IPAT on clients

Notes:
Topic 4 objectives
This section looks at the impact of IPAT on client systems.


How are users affected?


IP address moves and swaps within a node result in a short outage.
  Long-term connection-oriented sessions typically recover seamlessly (the TCP layer deals with packet retransmission).
Resource group fallovers to a new node result in a longer outage and sever connection-oriented services (long-term connections must be reestablished, short-term connections retried).
In either case:
  Short-lived TCP-based services, such as HTTP and SQL queries, experience a short server-down outage.
  UDP-based services must deal with lost packets.

Figure 2-51. How are users affected?

Notes:
What users see
Users who are actively using the cluster's services at the time of a failure will notice an outage while HACMP detects, diagnoses, and recovers from the failure.

How long does failure recovery take?
Three components contribute to the duration of the outage:
i. How long it takes HACMP to decide that something has failed
ii. How long it takes HACMP to diagnose the failure (determine what failed)
iii. How long it takes HACMP to recover from the failure
The first two of these generally take between about five and about thirty seconds, depending on the exact failure involved. The third component can take another dozen or so seconds when moving an IP address within a node, or it can take a few minutes or more in the case of a fallover.

Recovery without fallover
If the problem can be resolved without a fallover, then users generally notice a short outage and are then able to continue with what they were doing. Their TCP/IP-based sessions come back to life and everything appears to be fine again. Unless they are actively using the cluster's applications at the time, they might not even notice the outage.

Recovery with fallover
If the problem requires a fallover, then existing TCP/IP sessions eventually fail (usually as soon as the service IP address comes up on the takeover node, and AIX on that node resets sessions for which it receives packets it knows nothing about). Users are also more likely to notice the outage because it typically takes a couple of minutes to complete a fallover (much of this time is spent taking over volume groups, checking file systems, and recovering applications).
Each of these issues tends to be visible to the humans using the application in some fashion or other. They might see a short period of total silence followed by a clean recovery, or they might have to reconnect to the application. What they actually experience generally depends far more on how the client side of the application is designed and implemented than on anything within the control of the cluster's administrator.


What about the user's computers?


An IPAT operation renders ARP cache entries on client systems obsolete.
Client systems must (somehow) update their ARP caches.
[Diagram: a client's ARP cache maps xweb (192.168.5.1) to MAC 00:04:ac:62:72:49. After IPAT moves the service alias between the NICs with ODM addresses 192.168.10.1 (00:04:ac:62:72:49) and 192.168.11.1 (00:04:ac:48:22:f4), the correct mapping becomes 00:04:ac:48:22:f4, but the client's cache still holds the old MAC.]
Figure 2-52. What about the user's computers?

Notes:
ARP cache issues
Client systems that are located on the same physical network as the cluster might find that their ARP cache entries are obsolete after an IP address moves to another NIC (on the same node or on a different node).
The ARP cache is a table of IP addresses and the network hardware (MAC) addresses of the physical network cards to which the IP addresses are assigned. When an IP address moves to a different physical network card, the client's ARP cache might still have the old MAC address. It could take the client system a few minutes to realize that its ARP cache is out-of-date and ask for an updated MAC address for the server's IP address.
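On an AIX client, the symptom and the manual cure look roughly like this (the label xweb is from the visual):

# Display the ARP cache; a stale entry still shows the old MAC address
arp -a
# Delete the stale entry; the next packet triggers a fresh ARP request
arp -d xweb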


Local or remote client?


If the client is remotely connected through a router, it is the router's ARP cache that must be corrected.
[Diagram: a remote client (192.168.8.3, MAC 00:04:ac:27:18:09) reaches the cluster through a router (192.168.8.1 on the client side, 192.168.5.99 on the cluster side, MACs 00:04:ac:42:9c:e2 and 00:04:ac:29:31:37). The client's ARP cache holds only the router's MAC; it is the router's ARP entry for xweb (192.168.5.1) that goes stale when the service alias moves between the NICs 00:04:ac:62:72:49 and 00:04:ac:48:22:f4.]

Figure 2-53. Local or remote client?

Notes:
ARP cache entries are always local
ARP cache entries are maintained by a system only for the physical network cards with which it communicates directly. If there is a router between the client system and the cluster, then the client system's ARP cache has an entry for the IP address and MAC address of the router's network interface located on the client's side of the router. No amount of IP address moves or node fallovers has any (positive or negative) impact on what needs to be in the client's ARP cache.
Rather, it is the ARP cache entries of the router on the cluster's network that must be up-to-date.
Most clusters have either a small handful of client systems on the same physical network as the cluster or none at all. Consequently, whatever ARP cache issues might exist in a particular configuration, they do not usually affect very many systems. It is the ARP cache entries of the routers that must be considered.


Gratuitous ARP
AIX supports a feature called gratuitous ARP.
  AIX sends out a gratuitous (that is, unrequested) ARP update whenever an IP address is set or changed on a NIC.
Other systems on the local physical network are expected to update their ARP caches when they receive the gratuitous ARP packet.
Remember: Only systems on the cluster's local physical network must respect the gratuitous ARP packet.
So ARP update problems have been minimized.
Gratuitous ARP is required if using IPAT via aliasing.

Figure 2-54. Gratuitous ARP

Notes:
Gratuitous ARP
AIX supports a feature called gratuitous ARP. Whenever an IP address associated with a NIC changes, AIX broadcasts a gratuitous (in other words, unsolicited) ARP update. This gratuitous ARP packet is generally received and used by all systems on the cluster's local physical network to update their ARP cache entries.
The result is that all relevant ARP caches are updated almost immediately after the IP address is assigned to the NIC.
The problem is that not all systems respond to, or even necessarily receive, these gratuitous ARP cache update packets. If a local system either does not receive or ignores the gratuitous ARP packet, then its ARP cache remains out-of-date.
Note that unless the network is very overloaded, local systems generally either always or never act upon the gratuitous ARP update packet.


Gratuitous ARP support issues


Gratuitous ARP is supported by AIX on the following network technologies:
  Ethernet (all types and speeds)
  Token-Ring
  FDDI
  SP Switch
Gratuitous ARP is not supported on ATM.
Operating systems are not required to support gratuitous ARP packets.
  Practically every operating system does support gratuitous ARP.
  Some systems (for example, certain routers) can be configured to respect or ignore gratuitous ARP packets.

Figure 2-55. Gratuitous ARP support issues

Notes:
Gratuitous ARP issues
Not all network technologies provide the appropriate capabilities to implement gratuitous ARP. In addition, operating systems that implement TCP/IP are not required to respect gratuitous ARP packets (although practically all modern operating systems do).
Finally, support issues aside, an extremely overloaded network, or a network that is suffering intermittent failures, might result in gratuitous ARP packets being lost. (A network that is sufficiently overloaded or failure-prone to be losing gratuitous ARP packets is likely to be causing the cluster and the cluster administrator far more serious problems than the ARP cache issue involves.)


What if gratuitous ARP is not supported?


If the local network technology doesn't support gratuitous ARP, or there is a client system or router on the local physical network that must communicate with the cluster and that does not respect gratuitous ARP packets:
  clinfo can be used on the client to receive updates of changes.
  clinfo can be used on the servers to ping a list of clients, forcing an update to their ARP caches.
  HACMP can be configured to perform Hardware Address Takeover (HWAT).
Suggestion:
Do not get involved with using either clinfo or HWAT to deal with ARP cache issues until you have verified that there actually are ARP issues that need to be dealt with.

Figure 2-56. What if gratuitous ARP is not supported?

Notes:
If gratuitous ARP is not supported
HACMP supports three alternatives to gratuitous ARP. We will discuss these in the next few pages.

Do not add unnecessary complexity
Cluster configurators should probably not simply assume that gratuitous ARP won't provide a satisfactory solution, because each of the alternatives introduces additional, possibly unnecessary, complexity into the cluster.
If the cluster administrator or configurator decides that the probability of a gratuitous ARP update packet being lost is high enough to be relevant, then they should proceed as though their context does not support gratuitous ARP.


Option 1: clinfo on the client


The cluster information daemon (clinfo) provides a facility to automatically flush the ARP cache on a client system.
In this option, clinfo must execute on the client platform.
  clinfo executables are supplied for AIX.
  clinfo source code is provided with HACMP to facilitate porting clinfo to other platforms.
  clinfo uses SNMP for communications with HACMP nodes.
  /usr/es/sbin/cluster/etc/clhosts on the client system must contain a list of persistent node IP labels (one for each cluster node).
  clinfo.rc is invoked to flush the local ARP cache.
[Diagram: the client runs clinfo and clinfo.rc; the cluster nodes run snmpd and clstrmgr. The service alias xweb (192.168.5.1) moves between the NICs with boot addresses 192.168.10.1 (00:04:ac:62:72:49) and 192.168.11.1 (00:04:ac:48:22:f4).]

Figure 2-57. Option 1: clinfo on the client

Notes:
clinfo on the client
The cluster information service can be run on any client system. clinfo can execute a script that flushes the local ARP cache and pings the servers following a failure. clinfo can detect a failure either by polling or by receiving SNMP traps from within the cluster.
The clinfo source code is provided with HACMP so that it can, at least in theory, be ported to non-AIX client operating systems.
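A sketch of the client-side setup, using the persistent addresses from the earlier visuals (the addresses are assumptions; the subsystem name clinfoES is the usual one for HACMP 5.x but may vary by release):

# /usr/es/sbin/cluster/etc/clhosts on the client:
# one persistent IP label or address per cluster node
9.19.51.10
9.19.51.11
# Then start the clinfo subsystem on the client
startsrc -s clinfoES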


Option 2: clinfo from within the cluster


clinfo can also be used on the cluster's nodes to force an ARP cache update.
In this option, clinfo runs on every cluster node.
  If clinfo is run on only one cluster node, then that node becomes a single point of failure!
clinfo flushes the local ARP cache (on the cluster node) and then pings a defined list of clients listed in /usr/es/sbin/cluster/etc/clinfo.rc.
Clients pick up the new IP address to hardware address relationship as a result of the ping request.
[Diagram: each cluster node runs snmpd, clinfo, clstrmgr, and clinfo.rc; the surviving node pings the clients after the service alias xweb (192.168.5.1) moves between the boot addresses 192.168.10.1 (00:04:ac:62:72:49) and 192.168.11.1 (00:04:ac:48:22:f4).]

Figure 2-58. Option 2: clinfo from within the cluster

Notes:
clinfo on the cluster nodes
clinfo is already compiled and ready to run on the cluster's servers. Once again, clinfo can execute a script on the servers that flushes the local ARP cache and pings the local clients. These inbound ping packets contain the new IP address-to-MAC address relationship and are used by the client operating system to update its ARP cache. Unfortunately, this is not a mandatory feature of TCP/IP, so it is possible (although rather unusual) that a client operating system might fail to update its ARP cache when the ping packet arrives.


clinfo.rc script (extract)


This script is located under /usr/es/sbin/cluster/etc and is present on an
AIX system if the cluster.client fileset has been installed.
A separate file /etc/cluster/ping_client_list can also contain a list of client
machines to ping.
# Example:
#
# PING_CLIENT_LIST="host_a host_b 1.1.1.3"
#
PING_CLIENT_LIST=""
TOTAL_CLIENT_LIST="${PING_CLIENT_LIST}"
if [[ -s /etc/cluster/ping_client_list ]] ; then
#
# The file "/etc/ping_client_list" should contain only a line
# setting the variable "PING_CLIENT_LIST" in the form given
# in the example above. This allows the client list to be
# kept in a file that is not altered when maintenance is
# applied to clinfo.rc.
#
. /etc/cluster/ping_client_list
TOTAL_CLIENT_LIST="${TOTAL_CLIENT_LIST} ${PING_CLIENT_LIST}"
fi
#
# WARNING!!! For this shell script to work properly, ALL entries in
# the TOTAL_CLIENT_LIST must resolve properly to IP addresses or hostnames
# (must be found in /etc/hosts, DNS, or NIS). This is crucial.
. . .

Figure 2-59. clinfo.rc script (extract)

Notes:
clinfo.rc
The clinfo.rc script must be edited manually on the cluster nodes that run clinfo. There is no reason why clinfo cannot also be run on the client systems; however, these changes are required only on the cluster nodes that are running clinfo.rc.
Remember: All the cluster nodes should be running clinfo if clinfo is being used within the cluster to deal with ARP cache issues (because you never know which cluster nodes will survive whatever has gone wrong).
Edit the /usr/es/sbin/cluster/etc/clinfo.rc file on each server node. Add the IP label or IP address of each system that accesses service IP addresses managed by HACMP to the PING_CLIENT_LIST list. Then start the clinfo daemon (clinfo can be started as part of starting Cluster Services on the cluster nodes).


/etc/cluster/ping_client_list
You can also provide the list of clients to be pinged in the file /etc/cluster/ping_client_list. This is probably the best method, as it ensures that the list of clients to ping is not overwritten when maintenance is applied to clinfo.rc.
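A minimal sketch of the file, following the format shown in the clinfo.rc extract (the host names and the router address are assumptions; every entry must resolve on the cluster nodes):

# /etc/cluster/ping_client_list
# Include local clients and any routers on the cluster's network
PING_CLIENT_LIST="client_a client_b 192.168.5.99"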

More details
This script is invoked by HACMP as follows:
clinfo.rc {join,fail,swap} interface_name
The next set of details likely will not make sense until we are further into the course.
When clinfo is notified that the cluster is stable after undergoing a failure recovery of some sort, or when clinfo first connects to clsmuxpd (the SNMP part of HACMP), it receives a new map (a description of the cluster's state). It checks for changed states of interfaces:
- If a new state is UP, clinfo calls clinfo.rc join interface_name.
- If a new state is DOWN, clinfo calls clinfo.rc fail interface_name.
- If clinfo receives a node_down_complete event, it calls clinfo.rc with the fail parameter for each interface currently UP.
- If clinfo receives a fail_network_complete event, it calls clinfo.rc with the fail parameter for all associated interfaces.
- If clinfo receives a swap_complete event, it calls clinfo.rc swap interface_name.


Option 3: Hardware Address Takeover


HACMP can be configured to swap a service IP label's hardware
address between network adapters.
HWAT is incompatible with IPAT via IP aliasing because each
service IP address must have its own hardware address, and a
NIC can support only one hardware address at any given time.
The cluster implementer designates a Locally Administered Address (LAA), which HACMP assigns to the NIC that has the service IP label.

Figure 2-60. Option 3: Hardware address takeover

Notes:
Hardware address takeover
Hardware Address Takeover is the most robust method of dealing with the ARP cache issue, as it ensures that the hardware address associated with the service IP address does not change (which avoids the whole issue of whether the client system's ARP cache is out-of-date).
The essence of HWAT is that the cluster configurator designates a hardware address that is to be associated with a particular service IP address. HACMP then ensures that whichever NIC the service IP address is on also has the designated hardware address.
HWAT is discussed in detail in Appendix C.
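One common convention for choosing the LAA, sketched with the addresses from the earlier visuals (the specific value is an assumption; it only has to be unique on the physical network):

# Find the burned-in MAC address of the NIC
entstat -d ent0 | grep -i "hardware address"
# Example: burned-in address 00:04:ac:62:72:49
# Changing the first byte to 40 marks the address as locally
# administered:  LAA = 40:04:ac:62:72:49
# This LAA is what you enter when configuring the service IP label.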


HWAT considerations
Remember the following points when contemplating HWAT:
- The hardware address that is associated with the service IP address must be unique within the physical network for which the service IP address is configured.
- HWAT is not supported with IPAT via IP aliasing, because each NIC can have more than one IP address but only one hardware address.
- HWAT is supported only for Ethernet, Token-Ring, and FDDI networks (MCA FDDI network cards do not support HWAT). ATM networks do not support HWAT.
- HWAT increases the takeover time (usually by just a few seconds).
- HWAT is an optional capability that must be configured into the HACMP cluster. (We will see how to do that in detail in a later unit.)
- Cluster nodes using HWAT on Token-Ring networks must be configured to reboot after a system crash, as the Token-Ring card will continue to intercept packets for its hardware address until the node starts to reboot.


Checkpoint
1. True or False?
Clients are required to exit and restart their application after a
fallover.

2. True or False?
All client systems are potentially directly affected by the ARP cache
issue.

3. True or False?
clinfo must not be run both on the cluster nodes and on the
client systems.

4. If clinfo is run by cluster nodes to address ARP cache issues, you must add the list of clients to ping to either the __________________________ or the __________________________ file.

Figure 2-61. Checkpoint

Notes:


Unit summary (1 of 2)
Key points from this unit:
HACMP uses networks to:
  Provide highly available client access to applications in the cluster
  Detect and diagnose NIC, node, and network failures using RSCT heartbeats
  Communicate with HACMP daemons on other nodes
All HACMP clusters require a non-IP network to:
  Differentiate between node, IP subsystem, and network failures
  Prevent cluster partitioning
HACMP networking terminology:
  Service IP label/address: HA address used by clients to access an application
  Non-service IP label/address: Applied to a NIC at boot time; stored in the AIX ODM
  Persistent IP label/address: Node-bound HA address for administrative access to a node
  Communication interface: Association between a NIC and an IP label/address
  Communication device: Device used in a non-IP network
  Communication adapter: X.25 adapter used in an HA communication link
  IP Address Takeover (IPAT): Moves a service IP address to a working NIC after a failure
  IPAT via aliasing: Adds the service address to a NIC using IP aliasing
  IPAT via replacement: Replaces the non-service address with the service address
Figure 2-62. Unit summary (1 of 2)

Notes:


Unit summary (2 of 2)
Key points from this unit (continued):
HACMP has very specific requirements for subnets.
  IPAT via aliasing:
    NICs on a node must be on different subnets, which must use the same subnet mask.
    There must be at least one subnet in common with all nodes.
    Service addresses must be on a different subnet than any non-service address.
    A service address can be on the same subnet as another service address.
  IPAT via replacement:
    NICs on a node must be on different subnets, which must use the same subnet mask.
    Each service address must be in the same subnet as one of the non-service addresses on the highest priority node.
    Multiple service addresses must be in the same subnet.
  Heartbeating over IP aliases (any form of IPAT):
    Service and non-service addresses can coexist on the same subnet, or be on separate subnets.
    One subnet is required for heartbeating; it does not need to be routed.
HACMP can update local clients' ARP caches after IPAT:
  Gratuitous ARP (default)
  clinfo on clients
  clinfo on server nodes
  Hardware address takeover (HWAT)

Figure 2-63. Unit summary (2 of 2)

AU548.0

Notes:


Unit 3. Shared storage considerations for high availability
What this unit is about
This unit discusses the issue of shared storage in a high-availability environment with a particular emphasis, of course, on shared storage in an HACMP context.

What you should be able to do
After completing this unit, you should be able to:
  Discuss the shared storage concepts that apply within an HACMP cluster
  Describe the capabilities of various disk technologies as they relate to HACMP clusters
  Describe the shared storage related facilities of AIX and how to use them in an HACMP cluster

How you will check your progress
  Checkpoint questions
  Pencil and paper planning exercises
  Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)
http://www-03.ibm.com/servers/storage
http://www.redbooks.ibm.com


Unit objectives
After completing this unit, you should be able to:
Discuss the shared storage concepts that apply within an HACMP cluster
Describe the capabilities of various disk technologies as they relate to HACMP clusters
Describe the shared storage related facilities of AIX and how to use them in an HACMP cluster

Figure 3-1. Unit objectives

Notes:


3.1 Fundamental shared storage concepts


Fundamental shared storage concepts


After completing this topic, you should be able to:
Explain the distinction between shared storage and private storage
Describe how shared storage is used within an HACMP cluster
Discuss the importance of controlled access to an HACMP cluster's shared storage
Describe how access to shared storage is controlled in an HACMP cluster

Figure 3-2. Fundamental shared storage concepts

Notes:


What is shared storage?


[Diagram: Node 1 and Node 2, each with a private rootvg, both physically connected to the same external storage: shared SCSI disks, SAN storage, and virtual SCSI disks served through a VIO Server.]

Figure 3-3. What is shared storage?

Notes:
Application storage requirements
A computer application always requires at least a certain amount of disk storage space. For example, even the most minimal application requires disk space to store the application's binaries. Most applications also require storage space for configuration files and whatever application data the application is responsible for.
When such an application is placed into a high-availability cluster, any of the application's data that changes must be stored in a location that is accessible to whichever node the application is currently running on. Storage that is accessible to multiple nodes is called shared storage.
Also keep in mind that HACMP does not provide data redundancy. Data must be striped or mirrored across multiple physical drives (generally presented to AIX as a LUN), and access to those LUNs from each node should be over multiple paths (generally referred to as multi-pathing). This most likely results in the use of a shared storage device that provides the striping or mirroring, plus multi-pathing software. These components must be checked for compatibility with HACMP at the levels, both AIX and HACMP, that you intend to implement.

Non-concurrent access
In a non-concurrent access environment, the disks are owned by only one node at a time. If the owning node fails, the cluster node with the next highest priority in the resource group node list acquires ownership of the shared disks as part of fallover processing. This ensures that the data stored on the disks remains accessible to client applications.
In a non-concurrent access environment, a highly available application potentially runs on only one node for extended periods of time. Only one disk connection is active at a time, and the shared storage is not shared in any real-time sense. Rather, it is storage that can be associated automatically (without human intervention) with the node where the application is currently running. Non-concurrent access mode is sometimes called serial access mode, because only one node has access to the shared storage at a time. We will focus on non-concurrent shared storage in this unit.

Concurrent access
In concurrent access environments, the shared disks are activated on more than one node simultaneously. Therefore, when a node fails, disk takeover is not required. In this case, access to the shared storage must be controlled by some locking mechanism in the application.

Shared storage physical connection
To associate the storage with whichever node is running the application, the storage technology must support, and the actual configuration must physically provide, connections from the storage to the relevant nodes. This capability is supported by a variety of storage technologies, including SCSI and Fibre Channel, as we'll see shortly.

Shared resource example
Note: The graphic in the lower right-hand corner is a shared telephone.


What is private storage?


[Diagram: the same two nodes and storage as in the previous figure; here each node's internal rootvg disks are the private storage, accessible from only that node.]

Figure 3-4. What is private storage?

Notes:
Private storage
Private storage is, of course, accessible to only a single cluster node. It might be physically located within each system's box, externally in a rack, or even in an external storage subsystem. The key point is that private storage is not physically accessible from more than one cluster node.

Private resource example
Note: The graphic in the lower right-hand corner is a private telephone.


Access to shared data must be controlled


Consider:
Data is placed in shared storage to facilitate access to the data from whichever node the application is running on.
The application is typically running on only one node at a time.
Updating the shared data from another node (that is, not the node that the application is running on) could result in data corruption.
Viewing the shared data from another node could yield an inconsistent view of the data.
Therefore, only the node actually running the application should be able to access the data.
Figure 3-5. Access to shared data must be controlled

Notes:
Why?
The shared storage is physically connected to each node on which the application might run. In a non-concurrent access environment, the application actually runs on only one node at a time, and modification of, or even access to, the data from any other node during this time could be catastrophic (the data could be corrupted in ways that take days or even weeks to notice).

Issues for concurrent access
Some clusters have instances of the application active on more than one node at a time (for example, parallel databases). Such clusters require simultaneous access to the shared disks and must be designed to carefully control or coordinate their access to the shared data. This mechanism must be provided by the application.


Who owns the storage?


[Diagram: Node 1 and Node 2, each with its own ODM, both attached to the same shared volume group.]
The varyonvg/varyoffvg commands are used to control ownership.
varyonvg/varyoffvg uses either:
  Reserve/release-based shared storage protection
    Used with standard volume groups
  RSCT-based shared storage protection
    Used with enhanced concurrent volume groups
Figure 3-6. Who owns the storage?

Notes:
Introduction
There are two mechanisms to control ownership of shared storage. Although these two mechanisms do not seem to have formal names, in this unit we refer to them as:
- The reserve/release-based shared storage protection mechanism
- The RSCT-based shared storage protection mechanism
We use the term protection rather than access control both because it is a bit shorter and because it reminds us that the purpose of the mechanism is to protect the shared storage.

Reserve/release-based shared storage protection
Prior to HACMP 5.1, the AIX Logical Volume Manager invoked disk-based reserve/release as the shared storage protection mechanism, which was appropriate for shared storage that was assigned to a single node for extended periods of time.

RSCT-based shared storage protection
AIX V5.1 introduced a new mechanism to be used with enhanced concurrent volume groups. This mechanism uses an AIX component called Reliable Scalable Cluster Technology (RSCT). We will be discussing RSCT in greater detail later in the week. HACMP 5.x uses this mechanism when enhanced concurrent volume groups are in use (more on enhanced concurrent volume groups later in this unit).

Standard, concurrent, and enhanced concurrent volume groups
History
Concurrent mode volume groups were created to allow multiple nodes to access the same logical volumes concurrently. The original concurrent mode volume groups were supported only on Serial DASD and SSA disks in conjunction with the 32-bit kernel. Beginning with AIX Version 5.1, the enhanced concurrent mode volume group was introduced to extend concurrent mode support to all other disk types and to the 64-bit kernel. Enhanced concurrent volume groups can also be used in non-concurrent access environments to provide RSCT-based shared storage protection.
Concurrent access environment
If you need concurrent access to the data in shared storage, you must use concurrent volume groups. This implies the use of enhanced concurrent mode volume groups.
Non-concurrent access environment
There are two volume group types that can be used in non-concurrent access environments. The first is a standard volume group. This volume group type uses reserve/release-based shared storage protection. However, as we shall see, there are a number of advantages to using RSCT-based shared storage protection, which requires the use of enhanced concurrent volume groups.
Support for the classical concurrent volume groups is being removed:
- AIX V5.1 introduced enhanced concurrent volume groups, but still allowed you to create and use the classical concurrent volume groups. When concurrent volume groups are created on AIX V5.1 and up, they are created as enhanced concurrent mode volume groups by default.
- AIX V5.2 does not allow you to create classical concurrent volume groups, but you can still use them.
- AIX V5.3 removes support for classical concurrent volume groups entirely; only enhanced concurrent volume groups are supported.
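As a hedged sketch (the volume group and disk names are assumptions; in practice this is usually done through C-SPOC, covered later), an enhanced concurrent-capable volume group can be created and checked like this:

# -C requests enhanced concurrent capability; -n prevents automatic
# varyon at boot (HACMP decides when and where to activate the VG)
mkvg -C -n -y sharedvg hdisk2 hdisk3
# lsvg should report the VG as "Concurrent: Enhanced-Capable"
lsvg sharedvg | grep -i concurrent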


Reserve/release-based protection
[Diagram: Node 1 has volume group A varied on; Node 2 has volume group B varied on. The disks of each volume group are reserved to the node that varied it on; both nodes' ODMs describe both volume groups.]
Reserve/release-based shared storage protection relies on hardware support for disk reservation (SCSI commands):
  Disks are physically reserved to a node when varied on.
  Disks are released when varied off.
  LVM is unable to vary on a volume group whose disks are reserved to another node.
Not all shared storage systems support disk reservation.


Figure 3-7. Reserve/release-based protection

Notes:
Disk reservation
Reserve/release-based shared storage protection relies on the disk technology supporting a mechanism called disk reservation. Disks that support this mechanism can, in effect, be told to refuse to accept almost all commands from any node other than the one that issued the reservation. AIX's LVM automatically issues a reservation request for each disk in a volume group when the volume group is varied online by the varyonvg command. The varyonvg command fails for any disks that are currently reserved by other nodes. If it fails for enough disks (and it almost certainly does, since if one disk is reserved by another node, the others presumably are as well), then the varyon of the volume group fails.

LVM change management: Keeping the ODM and VGDA in sync
When multiple nodes share a volume group using reserve/release-based storage protection, the volume group is imported, but not varied on, on the inactive nodes. There must be some mechanism to ensure that any meta-data (VGDA) changes made to the volume group on the active node are updated in the ODM on the inactive nodes in the cluster. For example, if you change the size of a logical volume on the active node, the other nodes' ODMs still list the logical volume at the original size. If an inactive node were made active and the volume group were varied on without updating the ODM, the information in the ODM on the node and the VGDA on the disks would disagree. This causes problems.
When using reserve/release-based shared storage protection, HACMP provides a last-chance mechanism called lazy update to update the ODM on the takeover node at the time of fallover. This is meant to be a final attempt at synchronizing the VGDA content with a takeover node's ODM at fallover time. For obvious reasons (like the fact that it cannot overcome some VGDA/ODM mismatches), relying on lazy update should be avoided.

Lazy update
Lazy update works by using the volume group timestamp in the ODM. When HACMP needs to vary on a volume group, it compares the ODM timestamp to the timestamp in the VGDA. If the timestamps disagree, lazy update does an exportvg/importvg to recreate the ODM on the node. If the timestamps agree, no extra steps are required.
It is, of course, possible to update the ODM on inactive nodes when the change to the VGDA meta-data is made. In this way, extra time at fallover is avoided. The ODM can be updated manually, or you can use Cluster Single Point of Control (C-SPOC), which can automate this task. Lazy update and the various options for updating ODM information on inactive nodes are discussed in detail in a later unit in this course.
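For reference, the manual refresh on an inactive node can be sketched as follows (the names are assumptions; C-SPOC automates this and is the preferred method):

# On the inactive node, with the VG already imported but varied off,
# re-read the VGDA from disk and refresh the ODM (a "learning" import):
importvg -L sharedvg hdisk2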


Reserve/release disk takeover: Manual move


[Diagram, manual move in stages: initially Node 2 has httpvg varied on and Node 1 has dbvg varied on. The administrator runs varyoffvg on Node 2 and then varyonvg on Node 1, after which Node 1 holds both httpvg and dbvg.]
Node2: varyoffvg httpvg
Node1: varyonvg httpvg

Figure 3-8. Reserve/release disk takeover: Manual move

Notes:
Manual takeover
With reserve/release-based shared storage protection, HACMP passes volume groups between nodes by issuing a varyoffvg command on one node and a varyonvg command on the other node. The coordination of these commands (ensuring that the varyoffvg is performed before the varyonvg) is the responsibility of HACMP.


Reserve/release disk takeover: Failure

[Diagram: Node 2 has failed while holding the shared volume group varied on with its disks reserved. Node 1 breaks the reservation and issues varyonvg, reserving the disks to itself.]

Figure 3-9. Reserve/release disk takeover: Failure

Notes:
Disk takeover due to a failure
The right node has failed with the shared disks still reserved to it. When HACMP encounters a reserved disk in this context, it uses a special utility program to break the disk reservation. It then varies on the volume group, which causes the disks to be reserved to the takeover node.

Implications
Note that if the right node had not really failed, then it would lose its reserves on the shared disks (rather abruptly) when the left node varied them on. This will be seen in the left node's error log and should be acted on immediately, because it indicates a situation where both nodes can access and update the data on the disks (each believing that it is the only node doing so). Such a false takeover is not possible unless all paths used by HACMP to communicate between the two nodes have been severed.

How do we know the other node has failed?
Disk takeover due to failure occurs only when a node believes that the active node has failed. HACMP uses communication between the nodes to determine whether each node is still active. In other words, ensure that there is sufficient redundancy in these communication paths so that loss of all communication with another node really does imply that the other node has failed.


Reserve/release ghost disks

[Diagram: Node 2 has the shared volume group varied on with its disks reserved. When Node 1 boots, cfgmgr cannot read the reserved disks, so it creates duplicate hdisk entries (for example, hdisk5 through hdisk9 alongside hdisk0 through hdisk4): ghost disks.]
Not seen with IBM disks.
Ghost disks add time to volume group activation.
No need to deal with ghost disks manually; leave that to the Cluster Manager.

Figure 3-10. Reserve/release ghost disks

Notes:
What is a ghost disk?
During the AIX boot sequence, the configuration manager (cfgmgr) accesses all the shared disks (and all other disks and devices). Each time it accesses a physical volume at a particular hardware address, it tries to determine whether the physical volume is the same actual physical volume that was last seen at that hardware address. It does this by attempting to read the physical volume's ID (PVID) from the disk. This operation fails if the disk is currently reserved to another node. Consequently, the configuration manager cannot be sure whether the physical volume is the one it expects or a different physical volume. To be safe, it assumes that it is a different physical volume and assigns it a temporary hdisk name. This temporary hdisk name is called a ghost disk.
When the volume group is eventually brought online by Cluster Services, the question of whether each physical volume is the expected physical volume is resolved. If it is, then the ghost disk is deleted. If it is not, then the ghost disk remains. Whether or not the online of the volume group ultimately succeeds depends on whether or not the LVM is able to find enough of the volume group's physical volumes (and on other factors, such as whether quorum checking is enabled on the volume group).

Ghost disk issues


Time
Dealing with ghost disks takes time with the result that a volume group with ghost disks
takes longer to varyon than one without. For example, in one customer cluster where
ghost disks were found, they added about twenty seconds per ghost disk to the time
required to varyon the volume group. In volume groups that contain a large number of
physical volumes (LUNs), this can result in a significant delay during fallovers.
Don't delete ghost disks
If ghost disks occur, they must be left in the AIX device configuration because their
presence is necessary for the correct operation of the LVM when the volume group is
ultimately brought online by Cluster Services.

Disk technology differences


Note that not all disk technologies result in ghost disks. Most LUNs presented from IBM
disk technology can be uniquely identified regardless of whether the disk is reserved to
another node.


RSCT-based shared storage protection

(Figure: Node 1 and Node 2, each with its own ODM; the node that owns the volume
group holds it in active varyon while the other node holds it in passive varyon)

- Requires an enhanced concurrent volume group
- Is only used by HACMP
- Uses gsclvmd
- Independent of disk type

Figure 3-11. RSCT-based shared storage protection

Notes:
Introduction
HACMP 5.x supports the new style of shared storage protection, which relies on AIX's
RSCT component to coordinate the ownership of shared storage when using enhanced
concurrent volume groups in non-concurrent mode.

How HACMP controls RSCT-based Volume Groups


HACMP takes advantage of new parameters on the varyonvg and varyoffvg
commands related to a pair of new concepts called the active and passive volume group
varyon states. A volume group being managed by RSCT-based shared storage
protection is varied online in the passive state on all cluster nodes that might need
access to the volume group's data. The volume group is varied online in the active state
by the particular cluster node that needs access to the volume group's data now (in
other words, the node that is running the application has the volume group varied on in
the active state). The LVM on each node prohibits updates to the volume group's data
unless the node has the volume group varied on in the active state.
It is the responsibility of the RSCT component to ensure that each volume group is
varied online in the active state on no more than one node. Because this mechanism
does not rely on any disk reservation mechanism, it is compatible with all disk
technologies supported by HACMP.

Disk reservation not used


Even disks that support a disk reservation mechanism are not reserved when
RSCT-based shared storage protection is in effect.

Fast disk takeover


Taking over a volume group using RSCT-based shared storage protection is
considerably faster than using reserve/release-based shared storage protection.
Consequently, this style of disk takeover is called fast disk takeover.
During fast disk takeover, HACMP skips the extra processing needed to break the disk
reserves, or update and synchronize the LVM information by running lazy update. As a
result, the disk takeover mechanism used for enhanced concurrent volume groups is
faster than disk takeover used for standard volume groups.

LVM change management: keeping the ODM and VGDA in sync


Beginning in HACMP 5.2, when using enhanced concurrent volume groups
(RSCT-based shared storage protection), the ODMs on the passive nodes are updated
immediately with any VG changes and the new timestamp. At fallover time, because the
timestamp in the ODM on the takeover node agrees with the timestamp in the VGDA,
lazy update does not run. This further improves the speed of fast disk takeover.
Updates to the LVM components for an enhanced concurrent mode volume group
should only be done through C-SPOC, which further ensures that the VGDA and ODM
are synchronized across all nodes participating in the volume group.
Note: All nodes in the cluster must be available before making any LVM changes. This
ensures that all nodes have an accurate view of the state of the volume group. This is
an issue if you are using the forced varyon feature, which will be discussed later in this
unit.
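For example (a sketch; exact menu paths vary slightly by HACMP release), LVM
changes to a shared enhanced concurrent volume group would be made through the
C-SPOC SMIT menus, entered with all cluster nodes active via the SMIT fastpath,
rather than with the bare LVM commands:
# smit cl_admin
Making the change there lets HACMP update the VGDA, the timestamp, and every
node's ODM together.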


Enhanced concurrent volume groups

- Introduced in AIX 5L V5.1
- Supported for all HACMP-supported disk technologies
- Allows for fast disk takeover
- Supports JFS and JFS2 filesystems
  - File systems may only be mounted by one node at a time
- Enhanced concurrent VGs are required to use:
  - Heartbeat over disk for a non-IP network (covered in the network unit)
  - Fast disk takeover
  - Some virtualized configurations (through a VIO server)
- Replaced old-style classic concurrent volume groups
  - C-SPOC can be used to convert a standard VG to an enhanced concurrent VG
  - C-SPOC can be used to convert classic concurrent VGs to enhanced concurrent VGs
  - C-SPOC (Cluster Single Point of Control) is discussed in a later unit

Figure 3-12. Enhanced concurrent volume groups

Notes:
Introduction
Defining an enhanced concurrent volume group allows the LVM to use RSCT to
manage varyonvg and varyoffvg processing.

Concurrent access
In a concurrent access environment, all the nodes will varyon the volume group.

Fast disk takeover (enhanced concurrent VGs in a non-concurrent


resource group environment)
As was described earlier, using enhanced concurrent volume groups can result in
significantly shorter fallover and fallback times (depending on the number of physical
volumes and volume groups involved). In this case, one node will varyon the volume
group in active mode, while all the other nodes will varyon the VG in passive mode.

Heartbeat over disk


Using enhanced concurrent volume groups also provides the capability to do heartbeats
over disk to create a non-IP heartbeat network for HACMP (discussed in the next unit).


ECMVG varyon: Active versus passive

- Active varyon (listed with lsvg -o)
  - Behaves like a normal varyon
  - Allows all of the usual operations
  - RSCT is responsible for ensuring that only one node has the VG actively varied on
- Passive varyon (lsvg <vg_name>)
  - Volume group is available in a very limited read-only mode
  - Only certain operations are allowed; most operations are prohibited
- HACMP uses the appropriate varyonvg commands with enhanced concurrent
  volume groups
- Protecting VG integrity when using fast disk takeover:
  - Use multiple IP networks and disk heartbeating (discussed in the next unit)
  - Do not make structural changes to the VG unless all nodes are online

Figure 3-13. ECMVG varyon - active versus passive

Notes:
Active varyon
If using enhanced concurrent volume groups in a non-concurrent access environment,
only one node will varyon the VG in active mode, allowing full access.

Passive varyon
Other nodes will varyon the VG in passive mode. In passive mode, only very limited
operations are allowed on the volume group:
- Reading volume group configuration information (for example, lsvg)
- Reading logical volume configuration information (for example, lslv)
Most operations are prohibited, including:
- Any operations on filesystems and logical volumes (for example, mount, open,
create, modify, delete, and so forth)

- Modifying or synchronizing the volume group's configuration
- Any operation that changes the contents or hardware state of the disks

Fast disk takeover


Switching a volume group from the active to the passive state (or the reverse) is a very
fast operation: it only updates the LVM's internal state of the volume group in an AIX
kernel data structure and does not require any actual disk access operations. This is
what makes fast disk takeover faster than traditional disk-reservation-based volume
group takeover.

Protecting volume group integrity using fast disk takeover


When fast disk takeover is used, the SCSI disk reservation function is not used. If the
cluster becomes partitioned, nodes in each partition could accidentally varyon the
volume group in active state. Because active state varyon of the volume group allows
mounting of filesystems and changing physical volumes, this situation can result in
different copies of the same volume group.
To avoid this situation:
- Make sure that there are multiple heartbeat paths to prevent a loss of network
communication from triggering a fallover when the active node is still running. This
protects against a partitioned cluster.
- Avoid making structural changes to the VG (such as adding or removing a logical
volume, changing the size of a logical volume, and so forth) unless all nodes are
online. This ensures that all nodes will have a common view of the volume group
structure.


ECMVG state: Active versus passive

On the active node:
halifax # lsvg ecmvg
VOLUME GROUP:  ecmvg         VG IDENTIFIER: 0009314700004c00000000fe2eaa2d6d
VG STATE:      active        PP SIZE:       8 MB
VG PERMISSION: read/write    TOTAL PPs:     537 (4296 MB)
...
Concurrent:    Enhanced-Capable    Auto-Concurrent: Disabled
...

On the passive node:
toronto # lsvg ecmvg
VOLUME GROUP:  ecmvg         VG IDENTIFIER: 0009314700004c00000000fe2eaa2d6d
VG STATE:      active        PP SIZE:       8 MB
VG PERMISSION: passive-only  TOTAL PPs:     537 (4296 MB)
...
Concurrent:    Enhanced-Capable    Auto-Concurrent: Disabled
...

Figure 3-14. ECMVG state: Active versus passive

Notes:
Introduction
The VG PERMISSION field in the output of lsvg shows if a volume group is varied on in
active or passive mode.
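For example, assuming the volume group from the visual (ecmvg), a quick way to
check the mode on each node is:
# lsvg ecmvg | grep "VG PERMISSION"
A value of read/write indicates an active varyon; passive-only indicates a passive
varyon.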


How ECMVGs work

- Enhanced concurrent mode volume groups rely on Group Services.
- Protocols (node-to-node communications) are run to update the VGDA/VGSA.
- gsclvmd is the Group Services component involved.
- Each node belongs to two Group Services groups for each enhanced concurrent
  mode volume group:
  - One for VGDA updates (name starts with a d)
  - One for VGSA updates (name starts with an s)
- All active nodes vote/agree on changes and then update the ODM, too.
- WARNING: Filesystem changes are not propagated.

Figure 3-15. How ECMVGs work

Notes:
Although the details of the processing of enhanced concurrent mode volume groups are
largely beyond the scope of this class, it is very useful to understand the basics. The Group
Services component of RSCT is used to control the ownership of the volume group, thus
the name we're using, RSCT-based.

Group Services
Group Services is a component that allows nodes to participate in groups to control
resources of common interest, where each node has a vote in how the resource is
controlled. HACMP belongs to two Group Services groups for the control of the cluster-related
resources amongst all the nodes in the cluster (but that's another story for another
class, AU600 - HACMP Internals). When a node would like to effect a change on a
resource, it proposes that change to the group via a protocol (communications with the
other Group Services daemons). The Group Services daemon for HACMP is grpsvcs.

gsclvmd


The daemon that controls this group membership is gsclvmd. It is important to understand
that this daemon depends on Group Services being active, and that Group Services is
activated when Cluster Services is started. That should reinforce the point that ECMVGs
are to be used with HACMP only!

Voting on LVM changes and changing the ODM


All members (loosely speaking) have a vote. If all approve, the change is made, including
everything that the code written for that change involves. In the case of ECMVGs, that is a
change to the VGDA/VGSA that results in changes made to each participating node's ODM.
It should be rather obvious, then, that a missing member will miss the changes that occur
during its absence. For this reason, great care must be taken to ensure that either all
changes are made with all members present, or any changes made with members missing
are propagated to those members very soon after their reactivation.
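As an illustration (a sketch; sharedvg and hdisk2 are hypothetical names), a node that
was down during a change can be brought back into line with a learning import once it
is reactivated, which re-reads the VGDA from disk and refreshes that node's ODM
definition of the volume group:
# importvg -L sharedvg hdisk2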

Warning
Filesystem changes are not handled by this process. This is where C-SPOC is necessary.


Determining ECMVG and Group Services status

How do you know Group Services is controlling your VG?

From the output of ps:
rt1s1vlp2 # ps -ef | grep $(lsvg appAvg | grep IDENTIFIER | cut -d":" -f3)
    root 294954 405668  0 14:03:15  -  0:00 /usr/sbin/gsclvmd -r 30 -i 300
        -t 50 -c 00c0288e00004c0000000116b0b5cf7a -v 0

From the long status of gsclvmd (match the VGID to its gsclvmd pid):
rt1s1vlp2 # lssrc -ls gsclvmd
Subsystem    Group      PID      Status
gsclvmd      gsclvmd    405668   active
Active VGs # 1
vgid                               pid
00c0288e00004c0000000116b0b5cf7a   294954

And always check the long status of grpsvcs:
rt1s1vlp2 # lssrc -ls grpsvcs
Subsystem    Group      PID      Status
grpsvcs      grpsvcs    491746   active
3 locally-connected clients. Their PIDs:
540702(haemd) 639086(clstrmgr) 294954(gsclvmd)
HA Group Services domain information:
Domain established by node 3
Number of groups known locally: 5
                              Number of   Number of local
Group name                    providers   providers/subscribers
s00O0K8S0009G0000012QOBBJRQ   2           1          <- VGSA group
ha_em_peers                   2           1 0
CLRESMGRD_1196797869          2           1 0
CLSTRMGR_1196797869           2           1 0
d00O0K8S0009G0000012QOBBJRQ   2           1          <- VGDA group

Figure 3-16. Determining ECMVG or Group Services status

Notes:
Now that you have an idea how ECMVGs work, you need to know how to see what's going
on.

Start simple; look at processes


Looking for the VGID of each volume group that is suspected to be an ECMVG in the
process table is a good start. Verify that there is a running gsclvmd daemon for each VGID.
If not, you made a mistake somewhere.

Look at the gsclvmd daemon


Using lssrc -ls gsclvmd, you can also see the VGID and its associated gsclvmd. This would
be much more interesting in the case where many ECMVGs were defined.

The Group Services daemon provides some information as well


As shown in the visual, the lssrc -ls grpsvcs command gives details on this node's
groups, including, but certainly not limited to, the ECMVG groups. Note the two groups, one
for VGSA changes and one for VGDA changes.

RSCT-based fast disk takeover: Manual move

(Figure: Node 1 and Node 2, each with its own ODM, share httpvg and dbvg; the
active and passive varyon states change as httpvg moves from Node 2 to Node 1)

1. A decision is made to move httpvg from the right node to the left.
2. The right node releases its active varyon of httpvg (varyoffvg).
3. The left node obtains an active varyon of httpvg (varyonvg).

Figure 3-17. RSCT-based fast disk takeover: Manual move

Notes:
Manual movement of RSCT-based volume groups
The fast disk takeover mechanism handles a manual VG takeover by first releasing the
active varyon state of the volume group on the node that is giving up the volume group.
It then sets the active varyon state on the node that is taking over the volume group.
The coordination of these operations is managed by HACMP 5.x and AIX RSCT.


RSCT-based fast disk takeover: Failure

(Figure: Node 1 holds httpvg in passive varyon and dbvg in active varyon; Node 2
holds httpvg in active varyon. Active and passive varyon states are concepts that do
not apply to failed nodes.)

1. The right node fails.
2. The left node realizes that the right node has failed.
3. The left node obtains an active mode varyon of httpvg.

Figure 3-18. RSCT-based fast disk takeover: Failure

Notes:
Fast disk takeover in a failure scenario
A node has failed. When the remaining node (or nodes) realize that the node has failed,
the takeover node sets the volume group's varyon state to active.
There is no need to break disk reservations, as no disk reservations are in place. The
only action required is that the takeover node ask its local LVM to mark the volume
group's varyon state as active.
If Topology Services fails (that is, there is no communication between the nodes), then
Group Services fails and it is not possible to activate the volume group. This makes the
mechanism very safe to use. It is recommended, however, to use enhanced concurrent
volume groups only on systems running HACMP 5.x.


Fast disk takeover details

- Fast disk takeover is enabled automatically for a volume group if all of the following
  conditions are true:
  - The cluster is running AIX 5L on all nodes.
  - HACMP 5.x is installed on all nodes.
  - The volume group is an enhanced concurrent mode volume group.
  This is RSCT-based storage protection. An existing volume group can be converted
  to enhanced concurrent mode via C-SPOC; the VG must be taken offline for the
  change to take effect.
- Fast disk takeover is faster than reserve/release-based disk takeover.
- Ghost disks do not occur when fast disk takeover is enabled.
- Fast disk takeover is independent of the disk type:
  - Based on RSCT; the disk must only be supported by HACMP.
  - The gsclvmd subsystem, which uses Group Services, provides the protection.
  - The distinction between active varyon and passive varyon is private to each node
    (that is, it is not recorded anywhere on the shared disks).

Figure 3-19. Fast disk takeover details

Notes:
Considerations
As with any technology, the implications of using fast disk takeover must be properly
understood if the full benefits are to be experienced.
Note: If RSCT is not running, it is possible (although it takes some work) to manually
varyon an enhanced concurrent volume group to active mode, while it is varied on in active
mode on another node. Although this is possible, it is an unlikely occurrence. This small
risk can easily be avoided by never varying on your shared volume groups manually.

Requirements
Fast disk takeover is used only if all of the requirements listed previously have been
met.
Because RSCT is independent of disk technology, all disks supported by HACMP can
be used in an enhanced concurrent mode volume group.

Let's review: Topic 1

1. Which of the following statements is true (select all that apply)?
   a. Static application data should always reside on private storage.
   b. Dynamic application data should always reside on shared storage.
   c. Shared storage must always be simultaneously accessible in read-write mode
      to all cluster nodes.
   d. Application binaries should only be placed on shared storage.

2. True or False?
   Using RSCT-based shared disk protection results in slower fallovers.

3. True or False?
   Ghost disks must be checked for and eliminated immediately after every cluster
   fallover or fallback.

Figure 3-20. Let's review topic 1

Notes:



3.2 Shared disk technology


Shared disk technology

After completing this topic, you should be able to:
- Discuss the capabilities of various disk technologies in an HACMP environment
- Discuss the installation considerations of a selected disk technology when
  combined with HACMP
- Explain the issue of PVID consistency within an HACMP cluster

Figure 3-21. Shared disk technology

Notes:


Shared disk and HACMP strategies

Two-pronged approach:
- Compatibility of chosen disk subsystem with HACMP
  - Device drivers
  - Multi-pathing software
  - Adapter and disk subsystem microcode
  - OS patches
  - Reference the HACMP Installation Guide, IBM Flashes, and the hardware vendor
    (IBM or non-IBM) for the specifics
- Elimination of storage single points of failure
  - Redundancy of data on the disks
    - RAID 1 or 10 (in AIX or in the disk subsystem)
    - RAID 5 (in the disk subsystem)

Figure 3-22. Shared disk and HACMP strategies

Notes:
Compatibility
Flashes can be found at
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/Flashes
Hints, Tips, and Technotes can be found at
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/Technotes
HACMP Release Notes
Shipped with the product

Redundancy
Your goal is to eliminate single points of failure. When considering this for storage, it
involves defining more than one disk drive for every piece of data on the storage
subsystem and multiple paths to get to the data from the server. This is referred to as
data redundancy. HACMP does not provide data redundancy. In all likelihood, you will
be choosing a storage subsystem to provide the data redundancy. You might choose a
JBOD (Just a Bunch Of Disks) storage device, in which case you will have to provide
the redundancy in AIX. Multiple paths to get to the data from the server is accomplished
through multi-pathing software. That software must be checked for compatibility with
HACMP.
HACMP is oblivious to the storage device and redundancy method chosen. Although
not in the scope of this class, the selected storage subsystem will be affected by the
factors listed as follows (among others). The selected storage subsystem will then
determine what you will look for in terms of compatibility with the chosen HACMP
version and features.
- Data access performance requirements
- Capacity
- Support for multi-pathing
- Price


Virtual storage (VIO) and HACMP

(Figure: two frames, each with two VIO servers and an HACMP node LPAR. Each
VIOS reaches the storage device's LUNs through MPIO over its HBAs, sets
no_reserve on the hdisks, and exports them through a vhost0 virtual SCSI server
adapter. Through the Hypervisor, each HACMP node uses MPIO over its vscsi0 and
vscsi1 client adapters to see a single hdisk belonging to sharedvg.)

- Enhanced concurrent mode volume groups required on HACMP nodes
- MPIO or other (supported) multi-pathing software on the VIO server
- MPIO on the HACMP nodes

Figure 3-23. Virtual storage (VIO) and HACMP

Notes:
Overview
This type of configuration is becoming prevalent with the adoption of the Virtualization
capabilities of the POWER5 and later architecture. A full discussion of the
implementation of this configuration is beyond the scope of the class. The intent is to
indicate that this is a supported configuration, some of the terms to learn, requirements,
and a configuration overview. Consult the IBM Sales Manual and IBM Support (and
anyone else you can find who will talk to you about this from an experienced standpoint)
for the latest requirements and considerations.

Legend
Stg Dev - Storage Subsystem providing access to disks, like a DS8300, DS4000, EMC,
HDS, SSA, and so on.


VIOS - Virtual I/O Server, the special LPAR on POWER5/6 systems that provides
virtualized storage (and networking) devices for use by client LPARs.
HBA - Host Bus Adapter, also known as Fibre Channel Adapter, this is the connection
to the SAN, giving the VIOS access to storage in the SAN (LUNs).
MPIO - Multipath I/O, built into AIX since V5.1, creates path devices for each instance of
a disk/LUN that is recognized by AIX, presenting only a single hdisk device from these
multiple paths.
vhost0 - Virtual SCSI (server) adapter on the Virtual IO Server that provides the client
LPARs with access to virtual SCSI disks.
vscsi0 - Virtual SCSI (client) adapter on the client LPAR that provides the client access
to the VIOSs Virtual SCSI (server) adapter and therefore access to the virtual SCSI
disks.
Hypervisor - The Power5/6 component that manages access between the vhost and
vscsi adapters.

Minimum requirements
As of the writing of this version of the course, the minimum requirements for HACMP
with Virtual SCSI (VSCSI) and Virtual LAN (VLAN) on POWER5/6 models were:
HACMP supports the IBM VIO Server V1.4
August 10, 2007
IBM* High Availability Cluster Multiprocessing (HACMP*) for AIX 5L*, Versions 5.2, 5.3,
and 5.4 extends support to include IBM Virtual I/O Server (VIO Server) Version 1.4
virtual SCSI and virtual Ethernet devices on all HACMP supported IBM POWER5* and
POWER6* servers along with IBM BladeCenter JS21. This includes HACMP nodes
running in LPARs on supported IBM System i5* processors.

Refer to the following table for support details.

Note: TL = Technology Level


___________________________________________________________________
                        IBM VIO Server Version 1.4** on AIX V5.3
___________________________________________________________________
HACMP for AIX 5L V5.2   HACMP #IY97326, AIX TL3
___________________________________________________________________
HACMP for AIX 5L V5.3   HACMP #IY94307, AIX TL3
___________________________________________________________________
HACMP for AIX 5L V5.4   HACMP #IY87247, AIX TL3
___________________________________________________________________

HACMP supports the IBM VIO Server Versions 1.2 and 1.3
March 8, 2007
IBM* High Availability Cluster Multiprocessing (HACMP*) for AIX 5L*, Version 5.2, V5.3,
and V5.4 extends support to include IBM Virtual I/O Server (VIO Server) Version 1.2
and 1.3 on all HACMP supported IBM POWER5* servers. This includes HACMP nodes
running in LPARs on supported IBM System* i5 processors.

Refer to the following table for support details.

Note: TL = Technology Level


___________________________________________________________________
                        IBM VIO Server          IBM VIO Server
                        Version 1.2**           Version 1.3**
                        on AIX V5.3             on AIX V5.3
___________________________________________________________________
HACMP for AIX 5L V5.2   HACMP #IY86296, AIX TL3 HACMP #IY86296, AIX TL3
___________________________________________________________________
HACMP for AIX 5L V5.3   HACMP #IY94307, AIX TL3 HACMP #IY94307, AIX TL3
___________________________________________________________________
HACMP for AIX 5L V5.4   HACMP #IY87247, AIX TL3 HACMP #IY87247, AIX TL3
___________________________________________________________________

**HACMP and Virtual SCSI (vSCSI)



The volume group must be defined as Enhanced Concurrent Mode. In general,


Enhanced Concurrent Mode is the recommended mode for sharing volume groups in
HACMP clusters because volumes are accessible by multiple HACMP nodes, resulting
in faster failover in the event of a node failure. If file systems are used on the standby
nodes, they are not mounted until the point of failover so accidental use of data on
standby nodes is impossible. If shared volumes are accessed directly (without file
systems) in Enhanced Concurrent Mode, these volumes are accessible from multiple
nodes so access must be controlled at a higher layer such as databases.
If any cluster node accesses shared volumes through vSCSI, all nodes must do so. This
means that disks cannot be shared between an LPAR using vSCSI and a node directly
accessing those disks.
From the point of view of the VIO Server, physical disks (hdisks) are shared, not logical
volumes or volume groups. All volume group construction and maintenance on these
shared disks is done from the HACMP nodes, not from the VIO Server.

**HACMP and Virtual Ethernet


IPAT via Aliasing must be used. IPAT via Replacement and MAC Address Takeover are
not supported. In general, IPAT via Aliasing is recommended for all HACMP networks
that can support it.
HACMP's PCI Hot Plug facility cannot be used. PCI Hot Plug operations are available
through the VIO Server.
- Note that when an HACMP node is using Virtual I/O, HACMP's PCI Hot Plug
facility is not meaningful because the I/O adapters are virtual rather than physical.
All Virtual Ethernet interfaces defined to HACMP should be treated as single-adapter
networks, as described in the HACMP Planning and Installation Guide. In particular, the
netmon.cf file, configured to include a list of clients to ping, must be used to monitor and
detect failure of the network interfaces. Because of the nature of Virtual Ethernet, other
mechanisms to detect the failure of network interfaces are not effective.

**Further configuration-dependent attributes of HACMP with Virtual Ethernet


If the VIO Server has multiple physical interfaces on the same network, or if there are
two or more HACMP nodes using VIO Servers in the same frame, HACMP will not be
informed of (and hence will not react to) single physical interface failures. This does not
limit the availability of the entire cluster, because the VIOS itself routes traffic around the
failure. The VIOS support is analogous to EtherChannel in this regard. Other methods
(not based on the VIO Server) must be used for providing notification of individual
adapter failures.


If the VIO Server has only a single physical interface on a network, then a failure of that
physical interface will be detected by HACMP. However, that failure will isolate the node
from the network.
Although some of these might be viewed as configuration restrictions, many are direct
consequences of I/O Virtualization.
Service can be obtained from the IBM Electronic Fix Distribution site at:
http://www-03.ibm.com/servers/eserver/support/unixservers/aixfixes.html
All the details on requirements and specifications are in this Flash:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10390

Configuration overview
Configuration is mostly performed on the VIOS and Hardware Management Console.
The use of MPIO at the AIX level is also essential to ensuring data availability if access
to a VIOS is lost. Ensure that you reactivate any path in MPIO that was lost after it is
recovered to avoid total loss of access to data on a subsequent path failure. The
HACMP consideration, in addition to the correct software levels outlined previously, is
that enhanced concurrent volume groups are used in this configuration. Otherwise, to
the Cluster Manager, this is just another volume group to be managed in a resource
group.
On the storage device:
- Map the LUNs to the two corresponding VIO servers.
On the Hardware Management Console:
- Define the mappings (vhost and vscsi).
On VIO Server 1:
- Set the no_reserve attribute:
$ chdev -dev <hdisk#> -attr reserve_policy=no_reserve
- Export the LUNs out to each client:
$ mkvdev -vdev hdisk# -vadapter vhost0
On VIO Server 2:
- Set the no_reserve attribute:
$ chdev -dev <hdisk#> -attr reserve_policy=no_reserve
- Export the LUNs out to each client:
$ mkvdev -vdev hdisk# -vadapter vhost0
On the clients:


- Configure the MPIO Default PCM to conduct health checks down all paths and
recover when a path is restored. This requires a reboot to take effect:
# chdev -l <hdisk#> -a hcheck_interval=20 -a hcheck_mode=nonactive -P
- Create the shared volume group as an enhanced concurrent VG on the first client
(bos.clvm.enh is required).
- Vary off the VG on Client 1.
- Import the VG onto Client 2.
- Define it to HACMP as a shared resource in a resource group.
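The volume group steps might look like the following minimal sketch on the client
LPARs (hdisk2 and sharedvg are illustrative names; the bos.clvm.enh fileset must be
installed for the -C option). On Client 1, create the VG as enhanced concurrent
capable, build logical volumes and filesystems as needed, and release it; on Client 2,
import it (the disk is matched by its PVID) and leave it offline for HACMP to manage:
On Client 1:
# mkvg -C -y sharedvg hdisk2
# varyoffvg sharedvg
On Client 2:
# importvg -y sharedvg hdisk2
# varyoffvg sharedvg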

References
Courses that address this configuration:
- AU620, HACMP System Administration III: Virtualization and Disaster Recovery
- AU730, System p LPAR and Virtualization I: Planning and Configuration
- AU780, System p LPAR and Virtualization II: Implementing Advanced
Configurations
Redbooks (www.redbooks.ibm.com):
- REDP-4027-00: HACMP 5.3, Dynamic LPAR and Virtualization
Provides details later in the document on HACMP and Virtualization along with
failure scenarios in the VIO infrastructure and performance considerations
- SG24-7940-02: Advanced POWER Virtualization on IBM System p5 Servers:
Introduction and Configuration - Chapter 4
- REDP-4194: IBM System p Advanced POWER Virtualization Best Practices


IBM SAN storage and HACMP

- IBM storage subsystems currently supported include:
  - DS8000 / DS6000 families
  - DS4000 family
  - SAN Volume Controller (SVC)
- IBM storage subsystem support with HACMP is announced via Flash.
  - Consult http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/Flashes
- Determine the HACMP compatibility levels for the following items:
  - HBA device driver
  - AIX patch levels
  - Multi-pathing software (SDD, RDAC, MPIO PCM, and so on)
  - Device microcode/firmware
- Contact IBM support.

Figure 3-24. IBM SAN storage and HACMP

Notes:
Overview
Use the pointers already provided to access IBM Flashes to determine if the IBM
hardware that youve chosen is supported with HACMP and the HACMP requirements.
Also read the Release Notes provided with the HACMP product for the latest
information on requirements.

SDD
With most IBM SAN Storage devices, the multi-pathing software will be the Subsystem
Device Driver (SDD). It is supported with HACMP (with appropriate PTFs).
To use C-SPOC with VPATH disks, SDD 1.3.1.3, or later, is required.
For levels and maintenance, check:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S4000065&loc=en_US&cs=utf-8&lang=en
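As a quick health check when SDD is the multi-pathing driver (a sketch; both
commands ship with SDD, not with base AIX), lsvpcfg lists each vpath device with its
underlying hdisks, and datapath shows the state of every path:
# lsvpcfg
# datapath query device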

Non-IBM SAN storage and HACMP

- As explained in the student notes below, IBM does not provide the requirements for
  HACMP compatibility with non-IBM storage.
  - Contact the support organization or online reference materials for the vendor of
    the non-IBM storage.
  - Contact IBM support.
- Be sure to look into the multi-pathing software version and maintenance:
  - PowerPath
  - HDLM
  - MPIO PCM
- For EMC planning, see their support matrix:
  - http://www.emc.com/interoperability/matrices/EMCSupportMatrix.pdf

Figure 3-25. Non-IBM SAN storage and HACMP

Notes:
IBMs statement on non-IBM storage requirements with HACMP
This FAQ states the HACMP position with respect to non-IBM storage devices.
Question: Does HACMP support EMC or Hitachi storage subsystems when connected
to pSeries servers?
Answer: The storage subsystems supported by HACMP are those documented in the
Sales Manual. New additions are announced via Flash. Current information can be
retrieved from the online Sales Manual. HACMP supports only those IBM devices that
have passed IBM qualification efforts, and for which IBM development and service are
prepared to provide support. There is a group, associated with development, that tests
non-IBM storage subsystems for attachment to AIX systems and HACMP. Also,
cooperative service agreements are in place with certain non-IBM storage vendors.
HACMP also provides a supported interface, documented in the HACMP Planning and
Installation Guide, which allows any storage subsystem to be described in terms of a
standard set of operations: This allows for the invocation of user-provided methods to
accommodate device specific behaviors and operations that might not be automatically
supported by HACMP. If a client has an HACMP cluster containing storage hardware
other than that supported by HACMP, and they report a problem, IBM Service will
address the problem as follows:
- If the problem is unrelated to that hardware, it will be addressed the same as any
other problem.
- If the problem is related to that hardware, and the hardware is covered by a
cooperative service agreement with the storage vendor, the problem will be forwarded
to the storage vendor.
- If the problem is related to hardware for which no cooperative service agreement is
in place, the client will be asked to refer the problem to the hardware manufacturer.

Determining compatibility
When contacting both IBM and non-IBM sources for information, indicate your intent to
configure the non-IBM storage device with HACMP and request driver, patch,
multi-pathing software and microcode requirements, and experiences with this
combination.
Also read the Release Notes provided with the HACMP product for the latest
information on requirements.

EMC
When using the EMC URL listed above to gather EMC information, here is the path to
take to find the HACMP compatibility information.
- Navigate to
http://www.emc.com/interoperability/matrices/EMCSupportMatrix.pdf
- Search for HACMP.
- You will get many hits; look in the sections that apply to your storage devices.
- Then look for the HACMP version that you are installing.
- Finally, look for the device driver, PowerPath, and AIX patch information for your
configuration.


SCSI technology and HACMP

HACMP-related issues with SCSI disk architecture:
- SCSI buses require termination at each end. In HACMP environments, the
  terminators have to be external, to ensure that the bus is still terminated properly
  after a failed system unit has been removed.
- SCSI buses are ID-based. All devices must have a unique ID number.
- Different SCSI bus types have different maximum cable lengths (the maximum is
  25 meters for differential SCSI).
- Four-node limit.
- SCSI cables are not hot pluggable (power must be turned off on all devices attached
  to the SCSI bus before a SCSI cable connection is made or severed).
- Clusters using shared SCSI disks often experience ghost disks.

(Figure: two host systems, each with its own SCSI controller, attached to a shared bus
of four SCSI disk modules; maximum bus length 25 m)

Figure 3-26. SCSI technology and HACMP

Notes:
SCSI termination
In HACMP environments, SCSI terminators must be external so that the bus is still
terminated after a failed system unit has been removed.

Avoid using SCSI ID 7


In an HACMP environment, it is a very good practice to avoid using SCSI ID 7, because
a node booted in service or diagnostic mode, by default, has its SCSI controllers set to
the default ID of 7. If, while troubleshooting, you boot the failed node into service or
diagnostic mode while the surviving node is using SCSI ID 7, there will be a SCSI ID
conflict. This could result in data corruption.

Devices on a shared bus


Do not connect other SCSI devices, such as CD-ROMs, to a shared SCSI bus.

Power supply redundancy


If you mirror your logical volumes across two or more physical disks, the disks should
not be connected to the same power supply; otherwise, loss of a single power supply
can prevent access to all copies. As a result, plan on using multiple disk subsystem
drawers or desk-side units to avoid dependence on a single power supply.

Cable length and number of drives


You can connect up to 16 devices to a SCSI bus. Each SCSI adapter, and each disk, is
considered a separate device with its own SCSI ID. The maximum bus length for most
SCSI devices provides enough length for most cluster configurations to accommodate
the full 16 device connections allowed by the SCSI standard.

Hot swapping SCSI devices


The hot swappability of SCSI devices is generally poorly understood. The rules are
actually quite simple:
- If the documentation that comes with the SCSI device does not explicitly state that a
device is hot swappable, then assume that it is not hot swappable.
- In general, the only hot swappable SCSI devices are certain SCSI disk drive
modules.
- SCSI cable connection points are never hot swappable. Disconnecting or
connecting a SCSI cable while any device on the bus is powered on is a dangerous
activity.

Disconnecting or connecting SCSI cables


Many people have disconnected or connected SCSI cables without causing any
problems. This is, at best, proof that the person has been lucky. The possible
consequences of disconnecting or connecting a SCSI cable when any device on the
bus is powered on can include:
- I/O errors that are seen by the operating system and potentially reflected back to the
application
- I/O errors that result in the operating system crashing or refusing to continue to use
the SCSI bus in question (typically, until the operating system is rebooted)
- Data transfer errors that are not seen by the operating system but that result in data
corruption on the disk drive
- Total failure of devices and controllers on the bus (this failure is usually temporary
and can be fixed by replacing a fuse but permanent damage is a real possibility).


There are devices that can be inserted into the middle of SCSI buses and that claim to
allow the bus to be severed at the point of insertion. Unless you can get IBM to
specifically state that they support such a device, you should not use it.

IBM SCSI storage devices


It is most likely you will be using an IBM 2104 Expandable Storage Plus device if you
are attaching via SCSI.


Physical volume IDs

Node 1:
# lspv
hdisk0   000206238a9e74d7   rootvg
hdisk1   00020624ef3fafcc   None
hdisk2   00206983880a1580   None
hdisk3   00206983880a1ed7   None
hdisk4   00206983880a31a7   None

Figure 3-27. Physical volume IDs

Notes:
PVIDs and their use in AIX
For AIX to use a disk (LUN), it requires that the disk (LUN) be assigned a unique
physical volume ID (PVID). This is stored in the ODM and on the disk (LUN), and linked
to a logical construct in AIX called an hdisk. hdisks are numbered sequentially as
discovered by the configuration manager (cfgmgr). Each AIX system that is sharing a
volume group will need to have access to the same disks (LUNs). This is either done
through zoning and masking in the SAN or via twin-tail cabling for non-SAN
implementations.
If the zoning, masking, and cabling is done correctly, each system will see the same
disks (LUNs).
If a disk (LUN) has no PVID, it is assigned when the disk (LUN) is defined to a volume
group or manually by a user via the chdev command. If a disk (LUN) has a PVID
assigned, it will be recognized by AIX when a cfgmgr runs (manually or at system boot)
and stored in the ODM. Again, for systems to share access to a volume group, all the
disks (LUN) that are in the volume group must be defined to each system with common
PVIDs.
Using the previous command on each system to determine which systems see which
PVIDs and the volume group affinity is the first step to ensuring that all systems that will
share a volume group have the necessary disks (LUNs) defined. The example shows
that the system sees four disks (LUNs) that have PVIDs assigned, but none of them are
in a volume group yet. The next logical step would be to check the other systems for
common PVIDs. All PVIDs that are found in common would be the PVIDs (and
therefore hdisks) that could be used to create shared volume groups. C-SPOC uses
this method to list the PVIDs that can be used to create a cluster-wide shared volume
group. If C-SPOC finds no common PVIDs across the selected systems for a shared
volume group, no PVIDs are listed. Knowing the PVID-to-hdisk relationship on all the
cluster nodes is therefore very important when creating a shared volume group. This is
true whether using C-SPOC or not.
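For illustration, a minimal sketch of this cross-check (hdisk2 is a hypothetical name).
Run lspv on every cluster node and compare the PVID columns; if a shared disk has
no PVID yet, assign one before creating the volume group; run cfgmgr to discover
newly zoned or cabled disks:
# lspv
# chdev -l hdisk2 -a pv=yes
# cfgmgr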

Disk name inconsistency


There is no requirement in AIX or HACMP that the hdisk name for a shared disk be the
same on all nodes. However, if the names are different, this can be a source of confusion
for humans and a possible source of errors, which could lead to downtime.

Creating hdisk name consistency


Think about what you are trying to accomplish before you decide to make disk names
consistent across all sharing systems. Is it really necessary? No; as stated previously,
neither HACMP nor AIX cares about the hdisk naming. Do you really want to rely on the
names alone, without consulting PVIDs, when dealing with the disks (LUNs)? No,
because this could lead to confusion and could be a possible source of errors. There is
no substitute for good documentation and double-checking at the time you are working
with the disks (LUNs). If you decide to create hdisk naming consistency, you will want to
consider two techniques:
- Ensuring that the shared disk subsystem cabling is organized so that the
configuration manager on each node discovers new shared disks in the same order
- Defining fake hdisks to occupy hdisk numbers on nodes with fewer disks than other
nodes
These two techniques are combined with judicious use of rmdev -d -l and cfgmgr to
get the hdisk numbers to be consistent.
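For example (a sketch; the hdisk names are illustrative), renumbering on one node
typically combines these commands: remove the out-of-sequence device definitions
(the data on the disks is untouched), then rediscover, so that cfgmgr assigns hdisk
numbers in discovery order:
# rmdev -dl hdisk3
# rmdev -dl hdisk4
# cfgmgr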


Support for OEM disks

- HACMP enables you to use either IBM disks or OEM disks.
- Treat an unknown disk type the same way as a known type:
  - /etc/cluster/disktype.lst
  - /etc/cluster/lunreset.lst
  - /etc/cluster/conraid.dat
- Use custom disk processing methods:
  - Identifying ghost disks
  - Determining whether a disk reserve is being held by another node in the cluster
  - Breaking a disk reserve
  - Making a disk available for use by another node
- Enhanced concurrent VGs
- Additional considerations

Figure 3-28. Support for OEM disks

Notes:
Introduction
HACMP enables you to use either physical storage disks manufactured by IBM or by an
Original Equipment Manufacturer (OEM) as part of a highly available infrastructure.
Depending on the type of OEM disk, custom methods enable you (or an OEM disk
vendor) to either:
- Tell HACMP that an unknown disk should be treated the same way as a known and
supported disk type, or
- Specify the custom methods that provide the low-level disk processing functions
supported by HACMP for that particular disk type

Treat an unknown disk the same way as a known type


HACMP provides mechanisms that will allow you, while configuring a cluster, to direct
HACMP to treat an unknown disk exactly the same way as another disk it supports. The
following three files can be edited to perform this configuration. (There is no SMIT menu
to edit these files.)
/etc/cluster/disktype.lst
This file is referenced by HACMP during disk takeover.
You can use this file to tell HACMP that it can process a particular type of disk the same
way it processes a disk type that it supports. The file contains a series of lines of the
following form:
<PdDvLn field of the hdisk><tab><supported disk type>
To determine the value of the PdDvLn field for a particular hdisk, enter the following
command:
# lsdev -Cc disk -l <hdisk name> -F PdDvLn
The known and supported disk types are:
Disk Name in HACMP   Disk Type
SCSIDISK             SCSI-2 Disk
SSA                  IBM Serial Storage Architecture
FCPARRAY             Fibre Attached Disk Array
ARRAY                SCSI Disk Array
FSCSI                Fibre Attached SCSI Disk

For example, to have a disk whose PdDvLn field was disk/fcal/HAL9000 be treated
the same as IBM fibre SCSI disks, a line would be added that read:
disk/fcal/HAL9000    FSCSI
A sample disktype.lst file, which contains comments, is provided.


/etc/cluster/lunreset.lst
This file is referenced by HACMP during disk takeover.
HACMP will use either a target ID reset or a LUN reset for parallel SCSI devices based
on whether a SCSI inquiry of the device returns a 2 or a 3. Normally, only SCSI-3
devices support LUN reset. However, some SCSI-2 devices will support an LUN reset.
So, HACMP will check the Vendor Identification returned by a SCSI inquiry against
the lines of this file. If the device is listed in this file, then a LUN reset is used. This file is
intended to be customer modifiable.
For example, if the "HAL 9000" disk subsystem returned an ANSI level of '2' to an
inquiry but supported LUN reset, and its Vendor ID was HAL and its Product ID was
9000, then this file should be modified to add a line that is either:
HAL
or
HAL     9000


depending on whether a vendor or vendor-plus-product match is desired. Note the
use of padding of the Vendor ID to 8 characters.
A sample /etc/cluster/lunreset.lst file, which contains comments, is provided.
/etc/cluster/conraid.dat
This file is referenced by HACMP during varyon of a concurrent volume group.
You can use this file to tell HACMP that a particular disk is a RAID disk that can be used
in classical concurrent mode. The file contains a list of disk types, one disk type per line.
The value of the Disk Type field for a particular hdisk is returned by the following
command:
# lsdev -Cc disk -l <hdisk name> -F type
Note: This file only applies to classical concurrent volume groups. Thus this file has no
effect in AIX V5.3 or greater, which does not support classical concurrent VGs.
HACMP does not include a sample conraid.dat file. The file is referenced by the
/usr/sbin/cluster/events/utils/cl_raid_vg script, which does include some
comments.

Additional considerations
The previously described files in /etc/cluster are not modified by HACMP after they
have been configured and are not removed if the product is uninstalled. This ensures
that customized modifications are unaffected by the changes in HACMP. By default, the
files initially contain comments explaining their format and usage.
Remember that the entries in these files are classified by disk type, not by the number
of disks of the same type. If several disks of the same type are attached to a cluster,
there should be only one file entry for that disk type.
Finally, unlike other configuration information, HACMP does not automatically
propagate these files across nodes in a cluster. It is your responsibility to ensure that
these files contain the appropriate content on all cluster nodes. You can use the
HACMP File Collections facility to propagate this information to all cluster nodes.

Use custom disk processing methods


Some disks might behave sufficiently differently from those supported by HACMP so
that it is not possible to achieve proper results by telling HACMP to process these disks
exactly the same way as supported disk types. For these cases, HACMP provides finer
control.
While doing cluster configuration, you can either
- Select one of the specific methods to be used for the steps in disk processing
- Specify a custom method


HACMP supports the following disk processing steps:
- Identifying ghost disks
- Determining whether a disk reserve is being held by another node in the cluster
- Breaking a disk reserve
- Making a disk available for use by another node

HACMP enables you to specify any of its own methods for each step in disk processing,
or to use a customized method, which you define.
Using SMIT, you can perform the following functions for OEM disks:
- Add Custom Disk Methods
- Change/Show Custom Disk Methods
- Remove Custom Disk Methods

Additional considerations for custom methods


The custom disk processing method that you add, change or delete for a particular
OEM disk is added only to the local node. This information is not propagated to other
nodes; you must copy this custom disk processing method to each node manually or
use the HACMP File Collections facility.

OEM disks and enhanced concurrent volume groups


OEM disks can be used in enhanced concurrent volume groups, either for concurrent
access mode or, in non-concurrent access mode, for fast disk takeover. In this case,
you would need to edit the /etc/cluster/disktype.lst file and associate the OEM disk
with a supported disk type.

More information
For detailed information about configuring OEM disks for use with HACMP, see:
SC23-5209-01, HACMP for AIX, Version 5.4.1: Installation Guide, Appendix B: OEM Disk, Volume Group, and Filesystems Accommodation

Let's review: Topic 2

1. Which of the following disk technologies are supported by HACMP?
   a. SCSI
   b. SSA
   c. FC
   d. All of the above

2. True or False?
   SSA disk subsystems can support RAID5 (cache-enabled) with HACMP.

3. True or False?
   Compatibility must be checked when using different SSA adapters in the same loop.

4. True or False?
   No special considerations are required when using SAN-based storage units (DS8000, ESS, EMC, HDS, and so forth).

5. True or False?
   hdisk numbers must map to the same PVIDs across an entire HACMP cluster.

Figure 3-29. Let's review: Topic 2

Notes:

3.3 Shared storage from the AIX perspective

Shared storage from the AIX perspective


After completing this topic, you should be able to:
Discuss how LVM aids cluster availability
Describe the quorum issues associated with HACMP
Set up LVM for maximum availability
Configure a new shared volume group, filesystem, and jfslog

Figure 3-30. Topic 3 objectives: Shared storage from the AIX perspective

Notes:
This topic discusses shared storage from the AIX perspective.

Logical Volume Manager review


LVM is one of the major enhancements that AIX brings to traditional UNIX disk
management. LVM's capabilities are exploited by HACMP
Physical disk volumes are:
Organized into VGs (volume groups)
Identified by a unique physical volume ID (PVID)
Divided into physical partitions which are mapped to logical partitions in logical volumes (LVs)

Applications (such as file systems and databases) use logical volumes

[Diagram: two physical volumes, hdisk0 and hdisk1, each with its own PVID, grouped into a volume group; physical partitions on the disks map to the logical partitions of a logical volume]

Figure 3-31. Logical Volume Manager review

Notes:
LVM review
The set of operating system commands, library subroutines, and other tools that allow
the user to establish and control logical volume storage is called the logical volume
manager.
LVM controls disk resources by mapping data between a simple and flexible logical
view of storage space and the actual physical disks. The logical volume manager does
this by using a layer of device driver code that runs above the traditional physical device
drivers.

Logical volumes
This logical view of the disk storage, which is called a logical volume (LV), is provided to
applications and is independent of the underlying physical disk structure. The LV is
made up of logical partitions.
Physical volumes
Each individual disk drive is called a physical volume (PV). It has a physical volume ID
(PVID) associated with it and an AIX name, usually /dev/hdiskx (where x is a
unique integer on the system). Every physical volume in use belongs to a volume group
(VG) unless it is being used as a raw storage device or a readily available spare (often
called a hot spare). Each physical volume is divided into physical partitions (PPs) of a
fixed size for that physical volume. A logical partition is mapped to one or more physical
partitions.

Volume groups
Physical volumes and their associated logical volumes are grouped into volume groups.
Operating system files are stored in the rootvg volume group. Application data are
usually stored in one or more additional volume groups.

LVM relationships
LVM manages the components of the disk subsystem. Applications talk to the
disks through LVM.
This example shows an application writing to a filesystem, which has its LVs
mirrored in a volume group physically residing on separate hdisks.

[Diagram: an application writes to /filesystem; the LVM maps the write through the logical partitions of a mirrored logical volume to physical partitions on separate hdisks in the volume group]

Figure 3-32. LVM relationships

Notes:
LVM relationships
An application writes to a file system. A file system provides the directory structure and is used to map the application data to logical partitions of a logical volume. Because the LVM sits between the application and the disks, the application is isolated from the physical disks. The LVM can be configured to map a logical partition to up to three physical partitions and have each physical partition (copy) reside on a different disk.

ODM-LVM relationships
LVM information is kept in two places:
ODM (Object Data Manager)
VGDA (Volume Group Descriptor Area)

The ODM is in the system


The VGDA is on the disk (LUN)
Thus, creating a volume group involves updating both
ODM / VGDA in sync on system where VGDA created

Problem: What about the ODM in other sharing systems?


Solution: Create volume group and ensure that it is imported on sharing systems
Manually
Using C-SPOC (we'll see this later)

NOTE: This applies to changes made to the LVM constructs, not the data within

Figure 3-33. ODM-LVM relationships

Notes:
Before going too far, it's important to understand that the LVM data we've discussed is kept both in the VGDA of all the disks in the volume group AND in the ODM of the system making changes to the volume group (or creating it). This creates a rather obvious problem: how do you keep the ODM up to date in every system other than the system that is making a change to the volume group?
Understand that this is only a consideration when changes are made to the LVM constructs
themselves; for example, adding a filesystem/logical volume, increasing the size of a
logical volume, adding/removing a disk, and so forth.
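As a minimal sketch (assuming a volume group named shared_vg1 that is already known to this node, visible on hdisk2, and currently varied off here), the local ODM can be refreshed from the on-disk VGDA with:
# importvg -L shared_vg1 hdisk2
The -L flag makes importvg re-read the VGDA and update the local ODM definitions without a full export/import cycle. C-SPOC performs an equivalent synchronization for you, as discussed later.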

Creating a shared volume group: Manually


[Diagram: Node1 and Node2, each with its own ODM, share a disk holding the VGDA. Step #1 (Node1): mkvg, chvg, mklv (log), logform, mklv (data), crfs. Step #2 (Node1): unmount, varyoffvg. Step #3 (Node2): cfgmgr, importvg, chvg. Step #4 (Node2): varyoffvg. Step #5: Start Cluster Services]

Figure 3-34. Creating a shared volume group: Manually

Notes:
Introduction
The steps to add a shared volume group are:
1. Ensure consistent PVIDs on all nodes where the VG is to be defined
2. Create a new VG and its contents
3. Varyoff VG on Node1
4. Import VG on Node2 and set VG characteristics correctly
5. Varyoff VG on Node2
6. Start Cluster Services
Note that the slide presents only a high-level view of the commands required to perform
these steps. More details are provided as follows.

1. Ensure common PVIDs across all nodes that will share the volume group
As discussed earlier, HACMP does not require that hdisk names be consistent on all the nodes, only that all the nodes have access to the same disks and have discovered the PVIDs.
a. Ensure disks are cabled/zoned/masked so that the disks will be seen by both nodes.
b. Add the shared disk(s) to AIX on the primary node (Node1 in the example):
cfgmgr
c. Assign a PVID to the disk(s)
chdev -a pv=yes -l disk_name
where disk_name is hdisk#, hdiskpower# or vpath#.
d. Add the disks to AIX on the secondary node (Node2)
cfgmgr
e. Using the PVIDs, verify that the necessary PVIDs are seen on both nodes. If not,
correct them.
lspv

2. Create a new VG on Node1


a. Create the shared volume group
Use smit mkvg or C-SPOC; remember to pick a unique major number for the VG
and set Create VG Concurrent Capable to yes for Fast Disk Takeover.
b. Change the auto vary on flag using:
chvg -an <vgname>
(C-SPOC does this automatically. Also, this step is unnecessary if you are using an
enhanced concurrent VG)
c. Create and Initialize the jfslog using:
mklv or smit mklv
logform <jfslogname>
(C-SPOC handles this automatically)
d. Create the logical volume
use smit mklv or C-SPOC
e. Create the file system using one of the following options:
crfs or smit jfs or C-SPOC
using SMIT, select
Add a Journaled File System on a previously defined logical volume

3. Varyoff the VG from Node1


a. Unmount (umount <File_System>) any file systems that are part of the VG which was just created.
b. varyoffvg <vgname>, the new volume group created in step 2.

4. Import the VG on Node2 and set VG characteristics correctly


a. On the second cluster node perform the following commands:
importvg -V <major#> -y <vgname> <hdisk#>
chvg -an <vgname>
If using C-SPOC, you can skip this step as it will do this automatically for you.

5. Varyoff the VG on Node2


a. varyoffvg <vgname>
If using C-SPOC or if you created an enhanced concurrent mode volume group, you can skip this step: C-SPOC does this automatically for you, and enhanced concurrent mode volume groups don't vary on at creation.

6. Start Cluster Services


a. Restart Cluster Services, which varies on the VG and mounts the filesystems; you can then resume processing.
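Pulling the steps together into one rough sequence (the disk hdisk2, major number 57, and the names shared_vg1, sharedloglv, sharedlv, and /sharedfs are examples, not values from the course environment):

On Node1:
# chdev -a pv=yes -l hdisk2             (assign a PVID)
# lspv                                  (confirm the PVID is visible)
# mkvg -y shared_vg1 -V 57 hdisk2       (unique major number; add -C for an enhanced
                                         concurrent capable VG, then vary it on first)
# chvg -an shared_vg1                   (no automatic varyon at system restart)
# mklv -y sharedloglv -t jfs2log shared_vg1 1
# logform -V jfs2 /dev/sharedloglv
# mklv -y sharedlv -t jfs2 shared_vg1 10
# crfs -v jfs2 -d sharedlv -m /sharedfs -A no -a logname=/dev/sharedloglv
# umount /sharedfs                      (if mounted)
# varyoffvg shared_vg1

On Node2:
# cfgmgr
# importvg -V 57 -y shared_vg1 hdisk2
# chvg -an shared_vg1
# varyoffvg shared_vg1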

C-SPOC
Fortunately, there is an easier way.
These steps will be done automatically if the cluster is active and C-SPOC is used.
Otherwise, you can use the commands listed here in the notes.
Unfortunately, we are not looking at the easier way until we get to the C-SPOC unit.

LVM mirroring
As mentioned in an earlier topic, HACMP does not provide data redundancy
AIX LVM mirroring is a method that can be used to provide data redundancy
LVM mirroring has some key advantages over other types of mirroring:
Up to three-way mirroring of all logical volume types, including concurrent logical volumes,
sysdumpdev, paging space, and raw logical volumes
Disk type and disk bus independence
Optional parameters for maximizing speed or reliability
Changes to most LVM parameters can be done while the affected components are in use
The splitlvcopy command can be used to perform online backups

[Diagram: as in Figure 3-32, an application write to /filesystem flows through the mirrored logical volume's logical partitions to physical partitions on separate disks in the volume group]

Figure 3-35. LVM mirroring

Notes:
Introduction
Reliable storage is essential for a highly available cluster. LVM mirroring is one option to
achieve this. Other options are a hardware RAID disk array configured in RAID-5 mode
or some other solution which provides sufficient redundancy such as an external
storage subsystem like the ESS (DS6000/DS8000), EMC, and so forth.

LVM mirroring
Some of the features of LVM mirroring are:
- Data can be mirrored on three disks rather than having just two copies of data. This
provides higher availability in the case of multiple failures, but does require more
disks for the three copies.
- The disks used in the physical volumes could be of mixed attachment types.

- Instead of entire disks, individual logical volumes are mirrored. This provides
somewhat more flexibility in how the mirrors are organized. It also allows for an odd
number of disks to be used and provides protection for disk failures when more than
one disk is used.
- The disks can be configured so that mirrored pairs are in separate sites or in
different power domains. In this case, after a total power failure on one site,
operations can continue using the disks on the other site that still has power. No
information is displayed on the physical location of each disk when mirrored logical
volumes are being created, unlike when creating RAID 1 or RAID 0+1 arrays; so
allocating disks on different sites requires considerable care and attention.
- Mirrored pairs can be on different adapters.
- Read performance is good for short length operations as data can be read from
either of two disks; so the one with the shortest queue of commands can be used.
Write performance requires a write to two disks.
- Extra mirrored copies can be created and then split off for backup purposes.
- Data can be striped across several mirrored disks, an approach that avoids hot
spots caused by excessive activity on a few disks by distributing the I/O operations
across all the member disks.
- There are parameters, such as Mirror Write Consistency, Scheduling Policy, and
Enable Write Verify, which can help maximize performance and reliability.
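On an existing logical volume these parameters can be inspected and adjusted with standard LVM commands (sharedlv is an example name):
# lslv sharedlv          (shows COPIES, SCHED POLICY, MIRROR WRITE CONSISTENCY, WRITE VERIFY)
# chlv -v y sharedlv     (example: enable write verify)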

Steps to create a mirrored filesystem: Manually


These are the steps to create a properly mirrored filesystem for HA environments:

Step 1. Create the shared volume group. Name the VG something meaningful, like shared_vg1.
Step 2. Change the auto varyon flag: chvg -an shared_vg1
Step 3. Create a jfs2log LV, sharedlvlog: type=jfs2log, size=1 PP, separate physical volumes=yes, scheduling=sequential, copies=2
Step 4. Initialize the jfslog: logform -V jfs2 /dev/sharedlvlog
Step 5. Create a data LV, sharedlv: type=jfs2, size=??, separate physical volumes=yes, copies=2, scheduling=sequential, write verify=??
Step 6. Create a filesystem on the previously created LV: pick the LV (sharedlv) to create the file system on, automount=no, assign the desired mount point
Step 7. Verify the log file is in use: mount the filesystem; lsvg -l shared_vg1 should show 1 LV of type jfs2log with 1 LP and 2 PPs

Figure 3-36. Steps to create a mirrored file system - manually

Notes:
Introduction
This visual describes a procedure for creating a shared volume group and a mirrored
file system. There is an easier-to-use method provided by an HACMP facility called
C-SPOC, which is discussed later in the course. The C-SPOC method cannot be used
until the HACMP cluster's topology and at least one resource group have been
configured.
The procedure described in the visual permits the creation of shared file systems before
performing any HACMP related configuration (an approach favored by some cluster
configurators).
It is also valuable to notice that unique names are being used for all of the LVM
components, including JFS Log logical volumes. Pay very close attention to that when
creating LVM components manually. If a JFS Log is not specified when creating a
filesystem, one will be created (if one doesn't exist, that is) with a system generated

name. This could conflict with one that already exists on a system that will be sharing
this volume group.

Detailed procedure
Here are the steps in somewhat more detail:
a. Use the smit mkvg fastpath to create the volume group.
b. Make sure that the volume group is created with the Activate volume group
AUTOMATICALLY at system restart parameter set to no (or use smit chvg to
set it to no). This gives HACMP control over when the volume group is brought
online. It is also necessary to prevent, for example, a backup node from attempting
to online the volume group at a point in time when it is already online on a primary
node.
c. Use the smit mklv fastpath to create a logical volume for the jfs2log with the
parameters indicated in the figure above (make sure that you specify a type of
jfs2log or AIX ignores the logical volume and creates a new one, which has a
system generated name, when you create the file system below).
d. Use the logform -V jfs2 <lvname> command to initialize a logical volume for
use as a JFS2 log device.
e. Use the smit mklv fastpath again to create a logical volume for the file system with
the parameters indicated in the figure above.
f. Use the smit crjfs2lvstd fastpath to create a JFS2 file system in the now existing
logical volume.
Verify by mounting the file system and using the lsvg command. Notice that if copies
were set to 2, then the number for PPs should be twice the number for LPs and that if
you specified separate physical volumes then the values for PVs should be 2 (the
number of copies).
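The mklv flags corresponding to the table's options are, roughly (names and the 100-LP size are examples): -c 2 requests two copies, -s y is the "separate physical volumes = yes" strictness setting, and -d s selects sequential scheduling:
# mklv -y sharedlvlog -t jfs2log -c 2 -s y -d s shared_vg1 1
# logform -V jfs2 /dev/sharedlvlog
# mklv -y sharedlv -t jfs2 -c 2 -s y -d s shared_vg1 100
# crfs -v jfs2 -d sharedlv -m /sharedfs -A no -a logname=/dev/sharedlvlog
# mount /sharedfs; lsvg -l shared_vg1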

Mirroring? Let's talk quorum checking


AIX performs quorum checking on volume groups to ensure that the volume group
remains consistent
The quorum rules are intended to ensure that structural changes to the volume group (for
example, adding or deleting a logical volume) are consistent across an arbitrary number of
varyon-varyoff cycles

When mirroring in AIX, quorum checking is an issue because losing access to 50% of the
disks in a volume group takes the volume group offline
How can you lose access to 50% of the disks?
Logical Volumes are mirrored across two things
The two things can be two disk enclosures or two sites
One of the two things goes away

The VGDA requirements can be summarized as follows:

VG status                    Quorum checking enabled    Quorum checking disabled
                             (# of VGDAs required)      (# of VGDAs required)
Running (to stay running)    >50% of VGDAs              At least 1 VGDA
To bring online (varyonvg)   >50% of VGDAs              100% of VGDAs,
                                                        or Forced Varyon set

Figure 3-37. Mirroring? Let's talk quorum checking

Notes:
Introduction
If you plan to mirror your data at the AIX level to provide redundancy, you will need to
consider AIX quorum checking on a volume group. If you aren't mirroring your data at the AIX level, quorum isn't an issue.

Quorum
Quorum is the check used by the LVM at the volume group level to resolve possible
data conflicts and to prevent data corruption. Quorum is a method by which >50% of
VGDAs must be available in a volume group before any LVM actions can continue.
Note: For a VG with three or more disks, there is one copy of the VGDA on each disk.
For a one disk VG, there are two copies of the VGDA. For a 2-disk VG, the first disk has
two copies and the second has one copy of the VGDA. The VGDA is identical for all
disks in the VG.
Quorum is especially important in an HA cluster. If LVM can varyon a volume group with
half or less of the disks, it might be possible for two nodes to varyon the same VG at the
same time, using different subsets of the disks in the VG. This is a very bad situation
which we will discuss in the next visual.
Normally, LVM verifies quorum when the VG is varied on and continuously while the VG
is varied on.

Fifty percent of the disks go away


This is the reason you worry about quorum. As the visual indicates, the loss of access
to 50% of the disks will cause quorum checking to take the volume group offline. This is
not good when you consider that you are buying extra hardware to provide greater
availability for the end-user. But what does it mean to lose access to 50% of the disks?
If you're mirroring within a site, this will happen if you're mirroring across disk enclosures. If one enclosure loses power or the adapter that the AIX system is using to access the enclosure goes offline, you have lost access to 50% of the disks. If you're mirroring cross-site, losing access to 50% of the disks means losing access to the other site's storage subsystem. This could be a problem with just the storage subsystem at
the other site, a problem with the communications to the other site, or the other site is
entirely down.
In the case where you are dealing within a site, consider disabling quorum. In the case
where you are dealing with cross-site LVM mirroring, consider using HACMP to handle
the loss of access and ensure you enable the volume group for cross-site mirroring
verification (when adding the volume group via C-SPOC), add the disks in the volume
group to the list of cross-site mirrored disks (Add Disk/Site Definition for Cross-Site
LVM Mirroring, via smitty cl_xslvmm) and set the forced varyon flag in the resource
group that contains all cross-site mirrored volume groups. On recovery, if the stale
partition synchronization encounters a problem, you may have to use the manual
process of synchronizing the mirrors (C-SPOC menu item Synchronize Shared LVM
Mirrors).

AIX errlog entry for quorum loss


If quorum is lost, the following is an example of an AIX errlog entry:

Id: 91F9700D   Label: LVM_SA_QUORCLOSE   Type: UNKN   Class: H
Description: QUORUM LOST, VOLUME GROUP CLOSING
How HACMP reacts to quorum loss


HACMP 4.5 and up automatically reacts to a loss of quorum (LVM_SA_QUORCLOSE)
error associated with a volume group going offline on a cluster node. In response to this
error, a non-concurrent resource group goes offline on the node where the error
occurred. If the AIX Logical Volume Manager takes a volume group in the resource

group offline due to a loss of quorum for the volume group on the node, HACMP
selectively moves the resource group to another node.
You can change this default behavior by customizing resource recovery to use a notify
method instead of fallover. For more information, see Chapter 3: Configuring HACMP
Cluster Topology and Resources (Extended) in the Administration Guide.
Note: HACMP launches selective fallover and moves the affected resource group only
in the case of the LVM_SA_QUORCLOSE error. This error can occur if you use mirrored
volume groups with quorum enabled. However, other types of volume group failure
errors could occur. HACMP does not react to any other type of volume group errors
automatically. In these cases, you still need to configure customized error notification
methods, or use AIX Automatic Error Notification methods to react to volume group
failures.

Elimination of quorum issues


You can eliminate the loss of quorum issue (loss of volume group
when quorum is enabled and 50% of disks are lost).
Do not mirror at the AIX level
Mirror with quorum disabled

Do not mirror at the AIX level


Use external storage subsystem (DS8000/DS6000, EMC, etc) or RAID arrays

Mirror with quorum disabled


It may be possible for each side of a 2-node cluster to have different parts of the same
volume group vary'd online
Care must be taken in this case to avoid data corruption

Overall considerations
Distribute hard disks across more than one bus
Use different power sources

Figure 3-38. Elimination of quorum issues

Notes:
Introduction
Eliminating quorum issues is done either by mirroring with quorum disabled, or by not
mirroring at the AIX level.

Eliminating quorum problems


To enhance the availability of a volume group, think about the following points:
- Using more than one disk adapter prevents the loss of access to the disks if a single
adapter fails. This can be used with an external disk subsystem to provide multiple
path (using multipathing software) to the LUNs, or with mirroring so that different
copies of the data are accessed through different adapters.
- For higher availability, use two external power sources.
- If there are only two disks in the volume group then you lose access to the volume
group if the disk with two VGDAs is lost.
- If you are mirrored across two disk subsystems, consider a quorum buster disk to
prevent loss of quorum if you lose access to one subsystem. This is discussed later in the notes.
Distribute hard disks across more than one bus
Use multipathing software and two Fibre Channel adapters.
Use three adapters per node in SCSI.
Use two adapters per node, per loop in SSA.
Use different power sources
Connect each power supply in the storage device to a different power source.

Don't mirror at the AIX level


This is the option most configurations use today. The data redundancy is provided in the
external storage subsystem. Quorum is not an issue in this case.

Disabling quorum: Nonquorum volume groups


Quorum checking can be disabled on a per-volume group basis. If quorum checking is
disabled, LVM will not varyoff a volume group if quorum is lost while the VG is running.
However, in this case, 100% of the VGDAs must be available when the volume group is
varied on.
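Quorum checking is toggled per volume group with chvg (shared_vg1 is an example name):
# chvg -Qn shared_vg1                (disable quorum checking)
# chvg -Qy shared_vg1                (re-enable quorum checking)
# lsvg shared_vg1 | grep -i quorum   (confirm the current setting)
For a shared volume group, remember that the other nodes' ODMs must also learn of the change (for example, with importvg -L or C-SPOC).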
Why disable quorum checking?
Disabling quorum checking may seem like a good idea from an availability point of view.
For example, consider a volume group mirrored across two storage subsystems (for
example, in two different buildings across campus). If access to one storage subsystem
is lost, only half of the VGDAs are available. With quorum checking enabled, quorum is
lost and the VG is varied off. This would seem to defeat the purpose of mirroring.
However, there are real risks associated with disabling quorum. We will discuss ways to
handle the quorum problem in the next few visuals.
Risks of disabling quorum checking
Disabling quorum checking is an option; however, considerable care must be taken to
ensure that a consistent set of VGDAs is used on an ongoing basis. In addition,
exceptional care must be taken to ensure that one half of the cluster isn't running with
one half of all the mirrored logical volumes while the other node is running with the other
half of all the mirrored logical volumes as this leads to a phenomenon known as data
divergence.
Sometimes it might be necessary to disable quorum in a cluster. In this case, take care
that you do not end up with data divergence. The primary strategy for avoiding data
divergence is to avoid partitioned clusters, although careful design of the cluster's shared storage is also important.
Allow HACMP to handle it: Forced varyon


Or you can leave quorum on and allow HACMP to handle it
Involves downtime when a mirror copy is lost (reducing availability)

HACMP 5.x provides a per resource group forced varyon:


Each resource group has a flag which can be set to cause HACMP to perform a careful
forced varyon of the resource group's VGs
If normal varyonvg fails and this flag is set:
HACMP verifies that at least one complete copy of each logical volume is available
If verification succeeds, HACMP forces the volume group online

This is not a complete and perfect solution to quorum issues:


If the cluster is partitioned then the rest of the volume group might still be online on a
node in the other partition

HACMP 4.5 introduced forced varyon for all shared VGs:


Still available in HACMP 5.x
If the HACMP_MIRROR_VARYON environment variable is set to TRUE, forced varyon is
enabled for all shared VGs in the cluster
If set, HACMP_MIRROR_VARYON overrides the per resource group forced varyon flag
Figure 3-39. Allow HACMP to handle it: Forced varyon

Notes:
Introduction
If you decide to mirror at the AIX level and to leave quorum checking on, you will want to
have HACMP handle the loss of access to a volume group if half the disks are lost. Be
sure you understand what you're deciding to do, though. If you allow HACMP to handle the loss of access to the volume group, this means that the loss of half the disks (only one of your two copies of the data) will result in the users' loss of access to the application until it can be taken over by another cluster node. You've purchased the
additional hardware and set up the mirroring precisely to avoid downtime if you lose
access to part of the hardware, but this strategy will result in downtime. You make the
call (see disabling quorum in the previous visual).

varyonvg -f
AIX provides the ability to varyon a volume group if a quorum of disks is not available.
This is called forced varyon. The varyonvg -f command allows a volume group to be
made active that does not currently have a quorum of available disks. All disks that
cannot be brought to an active state will be put in a removed state. At least one disk
must be available for use in the volume group.
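For example (shared_vg1 is an example name):
# varyonvg -f shared_vg1
Given the risks described here, forced varyon is normally best left to HACMP's careful per-resource-group mechanism (described below) rather than run by hand.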

Per resource group forced varyon


HACMP 5.x provides a flag in each resource group which allows you to enable forced
varyon of the VGs in that resource group, as described in the visual.

Forced varyon of all shared volume groups


The HACMP_MIRROR_VARYON environment variable, introduced in HACMP 4.5, when set
to TRUE, enables the forced varyon mechanism for all shared volume groups in the
cluster.
In contrast, the HACMP 5.x forced varyon mechanism applies to specific resource
groups volume groups.
The HACMP_MIRROR_VARYON variable is still supported by HACMP 5.x and, if set to TRUE,
overrides any per-resource group settings for the forced varyon feature.
If the HACMP_MIRROR_VARYON variable is used, it should probably be defined by inserting
the following line into /etc/environment on each cluster node:
HACMP_MIRROR_VARYON=TRUE

MISSINGPV_VARYON environment variable


An approach commonly used in the past to deal with quorum-related issues involves
the use of the MISSINGPV_VARYON environment variable. This AIX provided environment
variable, if set to TRUE in /etc/environment, enables the forced varyon of any VGs
which are missing disks.
Clusters that use the MISSINGPV_VARYON variable should be updated to use either the forced varyon feature in the resource group or the HACMP_MIRROR_VARYON variable.

Recommendations for forced varyon


Before enabling HACMP's forced varyon feature for a volume
group or the HACMP_MIRROR_VARYON variable for the entire
cluster, ensure that:
The affected volume groups are mirrored across disk enclosures
The affected volume groups are set to super-strict allocation
There are redundant heartbeat networks between all nodes
Administrative policies are in effect to prevent volume group structural changes when
the cluster is running degraded (that is, failed over or with disks missing)

Figure 3-40. Recommendations for forced varyon

Notes:
Be careful when using forced varyon
Failure to follow each and every one of these recommendations could result in either
data divergence or inconsistent VGDAs. Either problem can be very difficult if not
impossible to resolve in any sort of satisfactory way; so be careful!

More information
Refer to the HACMP for AIX Administration Guide Version 5.4.1 (Chapter 15) and the
HACMP for AIX Planning Guide Version 5.4.1 (Chapter 5) for more information about
forced varyon and quorum issues.

LVM and HACMP considerations


Following these simple guidelines helps keep the configuration
easier to administer:
All LVM constructs must have unique names in the cluster.
For example, httplv, httploglv, httpfs and httpvg
Mirror or otherwise provide redundancy for critical logical volumes.
Remember the jfslog
If it is not worth mirroring, then consider deleting it now rather than having to wait to
lose the data when the wrong disk fails someday
Even data that is truly temporary is worth mirroring because it avoids an application
crash when the wrong disk fails
External disk subsystems (like the DS8000 or EMC Symmetrix) or RAID-5 storage
devices are alternative ways to provide redundancy
The VG major device numbers should be the same
Mandatory for clusters exporting NFS filesystems, but it is a good habit for any cluster
Shared data on internal disks is a bad idea
Focus on the elimination of single points of failure

Figure 3-41. LVM and HACMP considerations

Notes:
Unique names
Because your LVM definitions are used on multiple nodes in the cluster, you must make
sure that the names created on one node are not in use on another node. The safest
way to do this is to use C-SPOC. If creating the LVM components outside C-SPOC, you
must explicitly create and name each entity [do not forget to explicitly create, name and
format (using logform) the jfslog logical volumes] with a name known to be unique
across the nodes in the cluster.

Provide data redundancy via an external storage device or mirroring/RAID
For availability, use an external storage device that provides data redundancy across
multiple disks or mirror (or use hardware RAID) for all your shared logical volumes,
including the jfslog logical volume.

- If it is worth keeping, then it is worth making redundant. If it is not worth making


redundant, then it is not worth keeping and should be deleted.
The mirrorvg command provides an easy way to mirror all the logical volumes on a
given volume group. This same functionality may also be accomplished manually if you
execute the mklvcopy command for each individual logical volume in a volume group.
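A minimal sketch (shared_vg1, sharedlv, and hdisk3 are example names):
# mirrorvg -s shared_vg1 hdisk3      (mirror every LV in the VG onto hdisk3; -s defers the sync)
# syncvg -v shared_vg1               (synchronize the new copies)
Or, per logical volume:
# mklvcopy sharedlv 2 hdisk3         (give sharedlv a second copy, placed on hdisk3)
# syncvg -l sharedlv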

Volume group major numbers


If you are using NFS, be sure to use the same major number on all nodes. Even if not
using NFS, this is good practice, and makes it easy to begin using NFS with this volume
group in the future.
Use the lvlstmajor command on each node to determine a free major number
common to all nodes.
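For example (the output shown is illustrative):
node1 # lvlstmajor
43...
node2 # lvlstmajor
39,41,43...
Here 43 is the lowest number free on both nodes, so it could be used when creating and importing the volume group (mkvg -V 43 ... on the creating node, importvg -V 43 ... elsewhere).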

Use external disks for shared data


External disks should be used for shared volume groups. If internal disks were
configured for shared volume groups and the owning node needed to be powered down
for any reason, it would render the shared volume groups unavailable--clearly a bad
idea.

Eliminate single points of failure


The focus of cluster design must always be eliminating single points of failure.

Support for OEM volume groups


OEM volume groups can be used with HACMP
HACMP 5.3 and later automatically detects and provides the
methods for Veritas volume groups (VxVM)
Configuring custom volume group processing methods using
SMIT

List volume groups of a specified type


List physical and logical disks in a volume group
Bring a volume group online and offline
Determine a volume group status
Verify volume groups configuration
Provide a location of log files and other debugging information. View
using the AIX 5L snap -e command.

Limitations and more information

Figure 3-42. Support for OEM volume groups

Notes:
Introduction
You can configure OEM volume groups in AIX and use HACMP as an IBM High
Availability solution to manage such volume groups.
Note: Different OEMs can use different terminology to refer to similar constructs. For
example, the Veritas Volume Manager (VxVM) term Disk Group is analogous to the AIX
LVM term Volume Group. We will use the term volume groups to refer to OEM and
Veritas volume groups.

Veritas Volume Manager


Among other OEM volume groups and filesystems, HACMP 5.3 and later supports
volume groups and filesystems created with VxVM in Veritas Foundation Suite v.4.0. To
make it easier for you to accommodate Veritas volume groups in the HACMP cluster,
the methods for Veritas volume groups support are predefined in HACMP and are used
automatically. After you add Veritas volume groups to HACMP resource groups, you
can select the methods for the volume groups from the pick lists in HACMP SMIT
menus for OEM volume groups support.
Note: Veritas Foundation Suite is also referred to as Veritas Storage Foundation (VSF).

Configuring custom volume group processing methods using SMIT


When HACMP identifies OEM volume groups of a particular type, it can be configured
to provide the volume group processing functions shown in the visual.
You can add, change, and remove custom volume group processing methods for a
specific OEM volume group using SMIT. You can select existing custom volume group
methods that are supported by HACMP, or you can use your own custom methods.
Using SMIT, you can perform the following functions for OEM disks:
- Add Custom Volume Group Methods
- Change/Show Custom Volume Group Methods
- Remove Custom Volume Group Methods

Additional considerations
The custom volume group processing methods that you specify for a particular OEM
volume group is added to the local node only. This information is not propagated to
other nodes; you must copy this custom volume group processing method to each node
manually. Alternatively, you can use the HACMP File Collections facility to make the
disk, volume, and file system methods available on all nodes.

Limitations and more information


There are some limitations to using OEM volume groups with HACMP. For example,
HACMP supports a number of extended functions for LVM volume groups that are not
available for OEM volume groups, such as enhanced concurrent mode, active and
passive varyon process, heartbeating over disk, selective fallover upon volume group
loss and others. In addition, there are several other limitations.
For complete details on using OEM volume groups with HACMP, see Appendix B in the
HACMP for AIX Installation Guide.

Support for OEM file systems


OEM file systems can be used with HACMP
HACMP 5.3 and later automatically detects and provides the
methods for Veritas file systems (VxFS)
Configuring custom file systems processing methods using
SMIT

List file systems of a specified type


List volume groups hosting a specified file system type
Bring a file system online and offline
Determine a file systems status
Verify file system configuration
Provide a location of log files and other debugging information. View
using the AIX 5L snap -e command.

Limitations and more information

Figure 3-43. Support for OEM file systems

Notes:
Introduction
You can configure OEM file systems in AIX and use HACMP as an IBM High Availability
solution to manage such file systems.

Veritas file systems


Among other OEM volume groups and filesystems, HACMP 5.3 and later supports
volume groups and filesystems created with VxVM in Veritas Foundation Suite v.4.0. To
make it easier for you to accommodate Veritas filesystems in the HACMP cluster, the
methods for Veritas filesystems support are predefined in HACMP. After you add Veritas
filesystems to HACMP resource groups, you can select the methods for the filesystems
from the pick lists in HACMP SMIT menus for OEM filesystems support.
Note: Veritas Foundation Suite is also referred to as Veritas Storage Foundation (VSF).

Configuring custom file system processing methods using SMIT

When HACMP identifies OEM file systems of a particular type, it can be configured to provide the file system processing functions shown in the visual.
You can add, change, and remove custom file system processing methods for a specific OEM file system using SMIT. You can select existing custom file system methods that are supported by HACMP, or you can use your own custom methods.
Using SMIT, you can perform the following functions for OEM disks:
- Add Custom Filesystem Methods
- Change/Show Custom Filesystem Methods
- Remove Custom Filesystem Methods

Additional considerations
The custom file system processing methods that you specify for a particular OEM file
system is added to the local node only. This information is not propagated to other
nodes; you must copy this custom file system processing method to each node
manually. Alternatively, you can use the HACMP File Collections facility to make the
disk, volume, and filesystem methods available on all nodes.

Limitations and more information


There are some limitations to using OEM file systems with HACMP.
For complete details on using OEM file systems with HACMP, see Appendix B in the
HACMP for AIX Installation Guide.

Checkpoint

1. True or False?
   Lazy update attempts to keep VGDA constructs in sync between cluster nodes (reserve/release-based shared storage protection).

2. Which of the following commands will bring a volume group online?
   a. getvtg <vgname>
   b. mountvg <vgname>
   c. attachvg <vgname>
   d. varyonvg <vgname>

3. True or False?
   Quorum should always be disabled on shared volume groups.

4. True or False?
   Filesystem and logical volume attributes cannot be changed while the cluster is operational.

5. True or False?
   An enhanced concurrent volume group is required for the heartbeat over disk feature.

Figure 3-44. Checkpoint

Notes:

Unit summary
Key points from this unit:
Access to shared storage must be controlled:
- Non-concurrent (serial) access
  - Reserve/release-based protection: slower and may result in ghost disks
  - RSCT-based protection (fast disk takeover): faster, no ghost disks, and some risk of a partitioned cluster in the event of communication failure
  - Careful planning is needed for both methods of shared storage protection to prevent fallover due to communication failures
- Concurrent access
  - Access must be managed by the parallel application
HACMP supports several disk technologies:
- Must be well understood to eliminate single points of failure
Shared storage should be protected with redundancy:
- LVM mirroring
  - LVM configuration options must be understood to ensure availability
  - LVM quorum checking and forced varyon must be understood to ensure availability
- Hardware RAID

Figure 3-45. Unit summary

Notes:

Unit 4. Planning for applications and resource groups
What this unit is about
This unit describes the considerations for making an application highly available in an HACMP environment.

What you should be able to do


After completing this unit, you should be able to:
List and explain the requirements for an application to be
supported in an HACMP environment
Describe the HACMP start and stop scripts
Describe the resource group behavior policies supported by
HACMP
Enter the configuration information into the Planning Worksheets

How you will check your progress


Accountability:
Checkpoint questions

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)

Unit objectives
After completing this unit, you should be able to:
List and explain the requirements for an application to be
supported in an HACMP environment
Describe the HACMP start and stop scripts
Describe the resource group behavior policies supported by
HACMP
Enter the configuration information into the Planning
Worksheets

Figure 4-1. Unit objectives

Notes:

How to define an application to HACMP


The two steps to define an application to HACMP are:
Step 1. Create resource.
Application Server: defines start and stop scripts

Step 2. Create resource group.


[Diagram: a resource group spanning Node 1, Node 2, and Node 3 with a shared disk; the resource group holds the list of nodes, the policies for where to run, and the resources: application server, service address, and volume group]

Figure 4-2. How to define an application to HACMP

Notes:
Two steps to define an application to HACMP
To have HACMP manage an application, you must do two things:
1. Create an HACMP resource called an application server. The application server defines a start and a stop script for the application.
2. Create an HACMP resource group. This in turn requires two steps:
   a. The basic resource group definition:
      i. Defines a list of nodes where the application can run. The default priority is the order in the list. The first node listed is called the home node.
      ii. Names which policies to use that will control which node the application actually runs on.
   b. Add resources to the resource group. These are the resources that HACMP will move during a fallover:
      i. Application server name, service address, and volume group.
Application considerations
Automation
No intervention

Dependencies
Using names unique to one node
Other applications

Interference
Conflicts with HACMP

Robustness
Application can withstand problems

Implementation
Other aspects to plan for

Monitoring using HACMP


This is critical
Used to be overlooked
Nearly mandatory for
Unmanaged resource groups
Non-disruptive Startup/Upgrade
Figure 4-3. Application considerations

Notes:
Introduction
Many applications can be put under the control of HACMP but there are some
considerations that should be taken into account.

Automation
One key requirement for an application to function successfully under HACMP is that
the application must be able to start and stop without any manual intervention. Because
the cluster daemons call the start and stop scripts, there is no option for interaction.
Additionally, upon an HACMP fallover, the recovery process calls the start script to bring
the application online on a standby node. This allows for a fully automated recovery.
Other requirements for start and stop scripts will be covered on the next visual.

Dependencies
Dependencies to be careful of when coding the scripts include:

Referring to a locally attached device.


Hard coding, such as /dev/tty0, which might not be the same on another node.
Using a hostname that is not the same on other nodes.
Software licensing: Software can be licensed to a particular CPU ID. If this is the
case with your application, realize that a fallover of the software will not
successfully restart. You might be able to avoid this problem by having a copy of
the software resident on all cluster nodes. Know whether your application uses
software that is licensed to a particular CPU ID.

Application dependencies:
Dependencies that in the past you had to worry about but now you may not have to:
One application must be up before another one.
Applications must both run on the same node.
These can now be handled by Runtime Dependency options. An overview of these
is given later in this unit.

Interference
An application can execute properly on both the primary and standby nodes. However,
when HACMP is started, a conflict with the application or environment might occur that
prevents HACMP from functioning successfully. Two areas to look out for are using
IPX/SPX Protocol and Manipulating Network Routes.

Robustness
Beyond basic stability, an application under HACMP should meet other robustness
characteristics, such as successful start after hardware failure and survival of real
memory loss. It should also be able to survive the loss of the kernel or processor state.

Implementation
There are several aspects of an application to consider as you plan for implementing it
under HACMP. Consider characteristics, such as time to start, time to restart after
failure, and time to stop. Also consider:
Writing effective scripts.
Consider file storage locations.
Using inittab and cron table: inittab is processed before HACMP is started. The cron table is local to each node. Time/date should be synchronized.
We will look at writing scripts and data locations in the following visuals.

Monitoring using HACMP


HACMP provides another runtime option called application monitoring. With monitoring,
failure of the application can generate a fallover. This capability is also used to ensure
that multiple instances of an application aren't spawned when returning a resource
group to the online state from unmanaged on the same node or when using
non-disruptive startup or upgrade. Consider creating an application monitor. An
availability analysis tool is also provided. These topics are covered in detail in the
HACMP Administration II Administration and Problem Determination course.

Writing start and stop scripts


Check these items:

Environment is what is expected


Multiple instances issue
Location of scripts
Handle errors from previous termination
Correct coding

Use assists

Figure 4-4. Writing start and stop scripts

Notes:
Introduction:
Application start scripts should not assume the state of the environment; defensive
programming can correct any irregular conditions that occur. Remember that the cluster manager spawns these scripts as a separate job in the background and carries on
processing. The application start scripts must be able to handle an unknown previous
shutdown state.

Items to check
- Environment:
Verify the environment. Are the prerequisite conditions satisfied? These might
include access to a file system, adequate paging space, IP labels and free file
system space. The start script should exit and run a command to notify system
administrators if the requirements are not met.

- Multiple instances issue:


When starting an application with multiple instances, only start the instances
applicable for each node. Certain database startup commands read a configuration
file and start all known databases at the same time. This may not be a desired
configuration for all environments.
- Location:
Scripts must be available and executable on all nodes of the resource group.
- Handle any previous state:
Was previous termination successful? Is data recovery needed? Always assume the
database is in an unknown state since the conditions that occurred to cause the
takeover cannot be assumed.
- Correct Coding:
Scripts should start by declaring a shell (that is, #!/usr/bin/ksh).
Scripts should not kill an HACMP process. Be careful when using the grep command that only what is to be stopped is killed.
Scripts should exit with RC=0.
The stop script should make sure the application is really stopped.
A skeleton pulling these points together follows below.
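Here is a minimal start-script skeleton; the application name app1, its user, paths, and daemon command are all hypothetical, so treat this as a sketch of the pattern rather than a ready-to-use script:

#!/usr/bin/ksh
# start_app1 - illustrative HACMP application server start script
LOG=/tmp/app1.start.log
FS=/sharedfs                                # filesystem the application expects

# Verify the environment is what is expected
if ! df -k | grep -q "$FS" ; then
    echo "$(date): $FS not mounted; notifying admin" >> $LOG
    mail -s "app1 start failed" root < $LOG
    exit 0                                  # notify and exit 0, per the guidance above
fi

# Handle errors from a previous termination (example: remove a stale lock file)
rm -f $FS/app1.lock

# Start only this node's instance, in the background
su - app1user -c "/usr/local/app1/bin/app1d" &

exit 0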

Using assists
IBM provides a priced feature for HACMP that provides all the code and monitoring for
three applications: WebSphere, DB2, and Oracle Real Application Clusters (RAC). In
these cases you would not have to write the scripts yourself.
There are also plug-in filesets that provide help for integrating print server, DHCP, and
DNS. These filesets are part of the base HACMP product.


Where should data go?


Private storage:
Operating system components
Shared storage:
Dynamic data
Web server content
Application log files
Files updated by application
It depends:
Configuration files
Application binaries
License files

Figure 4-5. Where should data go?

Notes:
Introduction
Deciding where data should go requires careful thought. For some data, the answer is clear; for other data, it depends. Putting data on shared storage means there is only one copy to maintain, but that copy may not be available when needed. Putting data on private storage risks the copies on different nodes diverging, but makes upgrades easier.

Private storage
Private storage must be used for the operating system components. It can also be used
for configuration files, license files, and application binaries subject to the trade-offs
mentioned in the introduction.


Shared storage
Shared storage must be used for dynamic data, Web server content, data that is updated by the application, and application log files (be sure the time is the same on all nodes). Again, configuration files, application binaries, and license files could go here, subject to the trade-offs mentioned in the introduction above.

It depends
License files deserves a special mention. If using node locked, then you should use
private storage. In any case, you must learn the license requirements of the application
to make a proper determination.


Resource group policies


Three initial policies:
Startup Policy
Fallover Policy
Fallback Policy

Additional run-time options


Settling time (Startup)
Delayed Fallback (Fallback)

Figure 4-6. Resource group policies

Notes:
Three initial policies
In HACMP, you specify in the resource group definition three policies that control which
node a resource group (application) runs on:
1. Startup (of Cluster Services)
When Cluster Services starts up on a node, each resource group definition is read to determine if this node is listed and, if so, whether that resource group has already been started on another node. If the resource group hasn't been started elsewhere, then the startup policy is examined to further determine if Cluster Services should activate the resource group and start the application.
2. Fallover


If there is a node failure, then the Fallover policy determines which other node should take over, activate the resource group, and start the application there.
3. Fallback
If a node earlier in the list of nodes (that is, higher priority) for the resource group is started after a fallover, then the Fallback policy determines if the resource group should be stopped and started back up on the higher priority node.

Additional runtime options


In addition to the policies, there are two runtime options that affect these policies:
1. Settling time affects one of the Startup policies.
2. Delayed fallback timer affects how the Fallback policy works.
Runtime options are covered in more detail in the HACMP Administration II: Administration and Problem Determination course.


Startup policy
Online on home node only
Online on first available node
  Run-time Settling Time may be set
Online using node distribution policy
Online on all available nodes

Figure 4-7. Startup policy

Notes:
Online on home node only
When starting Cluster Services on the nodes, only the Cluster Services on the home
node (first node listed in the resource group definition) will activate the resource group
(and start the application). This policy requires the home node to be available.

Online on first available node


When starting Cluster Services on the nodes, the first Cluster Services up on a node
that is in the list of nodes for the resource group will activate the resource group and
start the application.


Online using node distribution policy


Similar to Online on first available node, except that only one resource group can be active on a given node. If the first node in the resource group's list of nodes already has another resource group started on it, then the next node in the list of nodes is tried.

Online on all available nodes


Cluster Services on every node will activate the resource group and start the
application. This is equivalent to the concurrent resource group behavior in previous
releases of HACMP. If you select this option for the resource group, ensure that
resources in this group can be brought online on multiple nodes simultaneously.

Runtime settling time


A Settling Time value can be set for the Online on first available node policy. If you set the settling time, Cluster Services waits up to the settling time interval to see if the home node joins the cluster and, at the end of the interval, chooses the highest priority node that has joined, rather than simply activating the resource group on the first possible node that reintegrates into the cluster. This keeps the resource group from bouncing between nodes.


Online on all available nodes


Application runs on all available nodes concurrently
No fallover/fallback; just fewer or more nodes running the application

Resource group restrictions:


No JFS or JFS2 filesystems (only raw logical volumes)
No service IP Labels / Addresses (which means no IPAT)
Application must provide own lock manager

Potential to provide essentially zero downtime


The only Startup Policy that supports multi-node disk heartbeat
networks

Figure 4-8. Online on all available nodes

Notes:
Application runs on all available nodes concurrently
If a node belongs to a resource group with this startup policy, when Cluster Services
start on the node, Cluster Services will start the application and make all the resources
mentioned available on this node. In this case, it does not matter if the resource group is
already active on another node so the application ends up being started on all nodes
where Cluster Services are started.
This policy is also referred to as concurrent mode or access.

Resource group restrictions


There are restrictions when defining a resource group that will use this policy. The data cannot be part of a JFS or JFS2 file system; it must be kept in raw logical volumes. You cannot include a service address in the resource group definition. Finally, it is up to the application to provide a lock manager to ensure that data isn't being updated simultaneously from multiple nodes. Oracle Real Application Clusters (RAC) is an application that uses this type of startup policy.

Potential to provide essentially zero downtime


Because the application is running on multiple nodes, the loss of a node does not result
in the loss of the application.


Fallover policy
Fallover to next priority node in the list
Fallover using dynamic node priority
Bring offline (on error node)

Figure 4-9. Fallover policy

Notes:
Fallover to next priority node in the list
In the case of fallover, a resource group that is online on only one node at a time follows the list in the resource group's definition to find the next highest priority node currently available.

Fallover using dynamic node priority


If you select this option for the resource group, you can choose one of the following
three methods to have HACMP choose the fallover node dynamically:
1. highest_mem_free (most available memory)
2. highest_idle_cpu (most available processor time)
3. lowest_disk_busy (least disk activity)
Dynamic node priority is useful in a cluster that has more than two nodes.

Bring offline (on error node only)


Select this option to bring a resource group offline on a node during an error condition.
This option represents the behavior of a concurrent resource group and ensures that if
a particular node fails, the resource group goes offline on that node only, but remains
online on other nodes. Selecting this option as the fallover preference when the startup
preference is not Online On All Available Nodes might allow resources to become
unavailable during error conditions. If you do so, HACMP issues an error.


Fallback policy
Fallback to higher priority node in the list
Can use a run time Delayed Fallback Timer preference

Never fallback

Figure 4-10. Fallback policy

Notes:
Fallback to higher priority node
When HACMP Cluster Services start on a node, HACMP looks to see if there is a
resource group with this node in the list and which is currently active on another node. If
this node is higher in the list than the node the resource group is currently running on
and this policy has been chosen, the resource group is moved and the application is
started on this node.

Runtime delayed fallback timer


A runtime fallback timer policy can be set to a time in the future when the fallback
should happen. The following example describes a case when configuring a delayed
fallback timer would be beneficial. If a node in a cluster failed, and was later repaired,
you might want to integrate the node into a cluster during off-peak hours. Rather than
writing a script or a cron job to do the work, which are both time-consuming and prone
to error, you could set the delayed fallback timer for a specified resource group to the
appropriate time. After starting the node, HACMP automatically starts the resource
group fallover at the specified time.
Runtime policies are covered in more detail in the HACMP Administration II: Administration and Problem Determination course.


Valid combinations of policies

Figure 4-11. Valid combinations of policies

Notes:
Valid combinations
HACMP enables you to configure only valid combinations of startup, fallover, and
fallback behaviors for resource groups.

Preferences are not the only factor in determining node


In addition to the node preferences described in the previous table, other issues can determine the resource groups that a node acquires. We will look at these issues in the administration and event units later in this course.


Dependent applications and resource groups


[Diagram: parent, parent/child, and child resource groups spread across Node 1 and Node 2, illustrating a chain of parent/child dependencies]

Parent/Child Dependency
One resource group can be the parent of another resource group

Location Dependency
A resource group may be on the same node/site or on a different node/site than
another resource group

Implemented as Run-Time Policy


Figure 4-12. Dependent applications/resource groups

Notes:
One resource group can be a parent of another resource group
In HACMP 5.2 and higher, you can have cluster-wide resource group online and offline
dependencies.

Parent will be brought online before child.


Parent will be brought offline after child.
Parent/child can be on different nodes.
Three levels of dependencies are supported.

In HACMP 5.3 and higher, you can specify resource location dependencies:
Online on same node
Online on different nodes
Online on same site


Implemented as runtime policy


Runtime policies are covered in more detail in the HACMP Administration II: Administration and Problem Determination course.


Checkpoint
1. True or False
Applications are defined to HACMP in a configuration file that lists what
binary to use.

2. What policies would be the best to use for a 2-node active-active cluster using IPAT, to minimize both applications running on the same node?
a. home, next, never
b. first, next, higher
c. distribution, next, never
d. all, error, never
e. home, next, higher

3. Which type of data should not be placed in private data storage?


a. Application log data
b. License file
c. Configuration files
d. Application binaries

4. Which policy is not a Run-time policy?


a. Settling
b. Delayed Fallback Timer
c. Dynamic Node Priority

Figure 4-13. Checkpoint

Notes:


Unit summary
Key points from this unit:
To define an application to HACMP, you must:
Create an application server resource (start and stop scripts)
Create a resource group (node list, policies, resources)

Considerations for putting an application under HACMP control

Automation
Dependencies
Interference
Robustness
Implementation details
Monitoring
Shared storage requirements

Considerations for start and stop scripts:

Environment
Multiple instances
Script location
Error handling
Coding issues

Resource group policies control how HACMP manages the application


Startup policy (with optional Settling timer)
Fallover policy
Fallback policy (with optional Delayed fallback)

Figure 4-14. Unit summary

Notes:


Unit 5. HACMP installation


What this unit is about
This unit describes the installation process for HACMP 5.4.1 for AIX
5L.

What you should be able to do


After completing this unit, you should be able to:

State where installation fits in the implementation process


Describe how to install HACMP 5.4.1
List the prerequisites for HACMP 5.4.1
List and explain the purpose of the major HACMP 5.4.1
components

How you will check your progress


Accountability:
Checkpoint
Machine exercise

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1:
Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html

HACMP manuals


Unit objectives
After completing this unit, you should be able to:
State where installation fits in the implementation process
Describe how to install HACMP 5.4.1
List the prerequisites for HACMP 5.4.1
List and explain the purpose of the major HACMP 5.4.1
components

Figure 5-1. Unit objectives

Notes:
What this unit covers
This unit discusses the installation and the code components of HACMP 5.4.1.


5.1 Installing the HACMP 5.4.1 software


Installing the HACMP software


After completing this topic, you should be able to:
Explain where the installation fits in the implementation
process
Describe how to install HACMP 5.4.1
Discuss the prerequisites for HACMP 5.4.1

Figure 5-2. Installing the HACMP software

Notes:
This topic covers the installation of the HACMP 5.4.1 filesets.


Steps for successful implementation


Proper planning is critical to a successful implementation
Special care should be taken when installing HACMP on a system that is in production

1. Plan: Use planning worksheets and documentation.
2. Assemble hardware: Install adapters, connect shared disk and network.
3. Install AIX: Ensure you update to the latest maintenance level.
4. Configure networks: Requires detailed planning.
5. Configure shared storage: Set up shared volume groups and filesystems.
6. Install HACMP: Install on all nodes in the cluster (don't forget to install the latest fixes).
7. Define/discover the cluster topology: Review what you end up with to make sure that it is what you expected.
8. Configure application servers: You will need to write your start and stop scripts.
9. Configure cluster resources: Refer to your planning worksheets.
10. Synchronize the cluster: Ensure you "actually" do this.
11. Start Cluster Services: Watch the logs for messages.
12. Test the cluster: Document your tests and results.

Figure 5-3. Steps for successful implementation

Notes:
Steps to building a cluster
Here are the steps to building a successful cluster. Okay, so we could have included more steps or combined a few steps, but the principle is that you should plan and follow a methodical process, which includes eventual testing and documentation of the cluster. It is often best to configure the cluster's resources iteratively. Get basic resource groups working first, and then add the remaining resources gradually, testing as you go, until the cluster does everything that it is supposed to do.

Different opinions
Different people have different ideas about the exact order in which a cluster should be configured. For example, some people prefer to leave the configuration of the shared storage (step 5 above) until after they've synchronized the cluster's topology (step 7), as

this allows them to take advantage of HACMP's C-SPOC facility to configure the shared storage.
One other area where different views are common is exactly when to install and configure the application. If the application is installed, configured, and tested reasonably thoroughly prior to installing and configuring HACMP, then most issues that arise during later cluster testing are probably HACMP issues rather than application issues. The other common perspective is that HACMP should be installed and configured prior to installing and configuring the applications, as this allows the applications to be installed into the exact context that they will ultimately run in. There is no correct answer to this issue. When to install and configure the applications is just one more point that will have to be resolved during the cluster planning process.

Where there is agreement


There is general agreement among the experts that the first step in configuring a
successful cluster is to plan the cluster carefully. For a more comprehensive discussion
of the process of planning and implementing a cluster, refer to:
- SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
- SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
- SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
- SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
- SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
- SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
Or get the latest at http://www-03.ibm.com/systems/p/library/hacmp_docs.html


Where are we in the implementation?


✓ Plan for network, storage, and application resource groups
Eliminate single points of failure

✓ Define and configure the AIX environment


Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP networks, and devices)
Application start and stop scripts

Install the HACMP filesets


Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks

Resources:
Application Server
Service labels

Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem

Synchronize, then start Cluster Services

Figure 5-4. Where are we in the implementation?

Notes:
What we have done so far
In units 2, 3, and 4, we planned and built the storage, network, and application environments for our cluster. So we are now ready to install the HACMP filesets.


First steps in planning


Study the appropriate HACMP manuals:
HACMP for AIX Planning Guide, V5.4.1 SC23-4861-10
Contains Planning Worksheets in Appendix A
Can be installed from the CD

HACMP for AIX Installation Guide, V5.4.1 SC23-5209-01


Can be installed from the CD

Online Planning Worksheets


Can be installed from the CD

Release notes:
On the CD as release_notes
Installed as /usr/es/sbin/cluster/release_notes

Figure 5-5. First steps in planning

Notes:
There are other references
Other HACMP manuals are available which might prove useful. Check out the
references at the start of this unit for a complete list.


What is on the CD?


release_notes
Directories
AIX52, AIX61
RSCT filesets for these AIX versions (AIX 5.3 versions listed as follows)

pubs
in pdf only

Installp/ppc, usr/sys/inst.images
cluster.adt.es
cluster.doc.en_US.es
cluster.es
cluster.es.cfs
cluster.es.cspoc
cluster.es.nfs
cluster.es.plugins
cluster.es.worksheets
cluster.hativoli
cluster.haview
cluster.license
cluster.man.en_US.es.data

cluster.msg.<lang>.cspoc
cluster.msg.<lang>.es
cluster.msg.<lang>.hativoli
cluster.msg.<lang>.haview
rsct.basic.hacmp.2.4.5.2.bff
rsct.basic.rte.2.4.5.2.bff
rsct.core.errm.2.4.5.1.bff
rsct.core.gui.2.4.5.1.bff
rsct.core.hostrm.2.4.5.1.bff
rsct.core.rmc.2.4.5.2.bff
rsct.core.sec.2.4.5.1.bff
rsct.core.utils.2.4.5.2.bff
rsct.opt.storagerm.2.4.5.2.bff

Figure 5-6. What is on the CD?

Notes:
Files on the CD
This visual shows the files that are on the CD. They will be expanded to show the table of contents when using SMIT to do the install. The AIX52 and AIX61 directories contain the required RSCT filesets for implementing HACMP V5.4.1 with AIX V5.2 and 6.1, respectively. The pubs directory contains the PDF versions of the HACMP documentation at the time the CD was created.


Install the HACMP filesets


Here are some of the HACMP 5.4.1 filesets:
cluster.adt.es
+ 5.4.1.0 ES Client CLINFO Samples
+ 5.4.1.0 ES Client Clstat Samples
+ 5.4.1.0 ES Client Include Files
+ 5.4.1.0 ES Client LIBCL Samples
+ 5.4.1.0 ES Web Based Monitor Demo
cluster.doc.en_US.es
+ 5.4.1.0 HAES PDF Documentation - U.S. English
+ 5.4.1.0 HAES Web-based HTML Documentation U.S. English
cluster.es
+ 5.4.1.0 ES Base Server Runtime
+ 5.4.1.0 ES Client Libraries
+ 5.4.1.0 ES Client Runtime
+ 5.4.1.0 ES Client Utilities
+ 5.4.1.0 ES Cluster Simulator
+ 5.4.1.0 ES Cluster Test Tool
+ 5.4.1.0 ES Server Diags
+ 5.4.1.0 ES Server Events
+ 5.4.1.0 ES Server Utilities
+ 5.4.1.0 ES Two-Node Configuration Assistant
+ 5.4.1.0 ES Web based Smit

cluster.es.cfs
+ 5.4.1.0 ES Cluster File System Support
cluster.es.cspoc
+ 5.4.1.0 ES CSPOC Commands
+ 5.4.1.0 ES CSPOC Runtime Commands
+ 5.4.1.0 ES CSPOC dsh
cluster.es.nfs
+ 5.4.1.0 ES NFS Support
cluster.es.plugins
+ 5.4.1.0 ES Plugins - Name Server
+ 5.4.1.0 ES Plugins - Print Server
+ 5.4.1.0 ES Plugins - dhcp
cluster.es.worksheets
+ 5.4.1.0 Online Planning Worksheets
cluster.hativoli
+ 5.4.1.0 HACMP Tivoli Client
+ 5.4.1.0 HACMP Tivoli Server
cluster.license
+ 5.4.1.0 HACMP Electronic License
cluster.man.en_US.es
+ 5.4.1.0 ES Man Pages - U.S. English
cluster.msg.en_US.cspoc
+ 5.4.1.0 HACMP CSPOC Messages - U.S.
English

Your requirements will determine what you install


Figure 5-7. Install the HACMP filesets

Notes:
Fileset considerations
Listed are some of the filesets that you see when doing smit install_all in HACMP 5.4.1. Using smit install_latest will not show the msg filesets, so you should use install_all and select the filesets.
Notice that cluster.es contains both client and server components. You can install either or both, depending on what the system's HACMP function will be.
When you install cluster.es.server, you will get cluster.es.cspoc as well.
The same filesets should be installed on all nodes, or Verify will give warnings every time it executes.
You should install the documentation filesets on at least one non-cluster node (ensuring that the HACMP PDF-based documentation is available even if none of the cluster nodes will boot could prove really useful someday).


Notice that some of the filesets require other products, such as Tivoli or NetView. You should not install these filesets unless you have these products. HAView is never installed on a cluster node; it is installed on the NetView server.
The cluster.es.cfs fileset can only be used if GPFS is installed. You might not need the plug-ins.
The Web-based SMIT is not to be confused with WebSM. Web-based SMIT is a Web application that allows you to see the HACMP SMIT configuration screens and to see status.
The cluster.es.clvm fileset, which was formerly required for Enhanced Concurrent Mode volume group support and concurrent mode resource group support, has been removed. The function required for Enhanced Concurrent Mode volume groups and concurrent mode resource groups has been built into the HACMP base code. The license key for concurrent mode resource groups is no longer required.

Example of basic install (will be used in the lab)


cluster.adt.es
cluster.doc.en_US.es
cluster.es
cluster.es.cspoc
cluster.license
cluster.man.en_US.es
cluster.msg.en_US.cspoc
cluster.msg.en_US.es
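As an illustration, the basic set could be installed from the product CD with installp; a hedged sketch assuming the CD device is /dev/cd0 (device names and fileset selection vary by site):

    # Preview first (-p applies nothing) to surface missing prerequisites
    installp -apgXd /dev/cd0 cluster.es cluster.license
    # Apply with requisites (-g), filesystem expansion (-X), license acceptance (-Y)
    installp -agXYd /dev/cd0 cluster.adt.es cluster.doc.en_US.es cluster.es \
        cluster.es.cspoc cluster.license cluster.man.en_US.es \
        cluster.msg.en_US.cspoc cluster.msg.en_US.es
    lslpp -l "cluster.*"        # confirm the same result on every node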


Remember the prerequisites


Minimum levels of AIX:

AIX 5L V5.2 Technology Level (TL) 8


AIX 5L V5.3 TL4
AIX 6.1

Minimum levels of RSCT:

AIX 5L V5.2: RSCT version 2.3.9.2 (APAR IY84921)


Make sure RSCT filesets are at base level 2.3.9.0

AIX 5L V5.3: RSCT version 2.4.5.1 (APAR IY84920)


Make sure RSCT filesets are at base level 2.4.5.0
If the HACMP node runs through VIOS 1.5: RSCT 2.4.5.4 or higher

AIX 6.1: RSCT version 2.5.0.0 or higher


rsct.compat.basic.hacmp
rsct.compat.clients.hacmp

Otherwise optional AIX filesets, see student notes for details


Other prerequisites
Enhanced concurrent mode:

bos.rte.lvm (at required TL version)


bos.clvm.enh (at required TL version)

CSPOC with vpath


SDD 1.3.1.3 or later

SDDPCM
V2.1.1.0 or later, see student notes for details
Figure 5-8. Don't forget the prerequisites

Notes:
Installation suggestions
Listed above are the minimum prerequisites. As time goes by, these will almost
certainly be superseded by later levels. The point is that these are the components that
must be considered when preparing your environment for HACMP.
Before you try to install, look at the following for the latest prerequisites:
- HACMP for AIX 5L Installation Guide, Version 5.4.1
- Release notes / README on the HACMP for AIX 5L, Version 5.4.1 CDs
- The HACMP for AIX 5L, Version 5.4.1 Announcement Letter
Go to the HACMP web site
http://www-03.ibm.com/systems/p/advantages/ha/ and click the
Announcement Letter link under the heading Learn more on the right side.


Because you are unlikely to want to upgrade a new cluster anytime soon, it is generally
wisest to start with the latest available AIX and HACMP patches. The URL for checking
on the latest patches is:
http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/home.html
Finally, it's always a good idea to call IBM support and ask if there are any known issues with the versions of AIX and HACMP that you plan to install or upgrade. Indicate that you intend to install the latest HACMP PTF (fix pack, or whatever it may be called at the time) and ask if it's known to be stable. Depending on the timing of your installation, it might be advisable to stay one maintenance level behind on AIX, HACMP, or both, or it might be wise to wait for an imminent maintenance level for AIX, HACMP, or both.

Other AIX filesets


bos.adt.lib
bos.rte.libc
bos.adt.libm
bos.adt.syscalls

bos.data
bos.net.tcp.client
bos.net.tcp.server
bos.rte.libcur

bos.rte.libpthreads
bos.rte.odm
bos.rte.SRC

Those listed in bold are the ones that need to be added to a base AIX image. Ensure that when you install these on a system that has been updated to a Technology Level / Service Pack (TL / SP), you update these newly installed HACMP prerequisites to the same TL / SP.
The base levels needed for AIX 5.3/6.1 are:
bos.adt.libm 5.2.0.85
bos.adt.syscalls 5.2.0.50
bos.data 5.1.0.0
In addition, the following is needed for AIX 6.1 only:
bos.net.nfs.server 5.3.7.0
For the RSCT prerequisites on AIX 6.1, the three filesets that are on the CD are
required in addition to the base RSCT filesets with the AIX 6.1 installation. Place the
RSCT filesets that are on the CD in the same directory as the prerequisites listed on the
slide.
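Before installing, it helps to confirm the AIX and RSCT levels already on each node. A small sketch using standard AIX commands (the filesets named are examples from the lists above):

    oslevel -s                                # TL/SP level, for example 5300-05-02
    oslevel -r                                # use -r on AIX 5.2, which lacks -s
    lslpp -l rsct.core.rmc rsct.basic.rte     # compare RSCT levels to the minimums
    lslpp -l bos.clvm.enh bos.rte.lvm         # enhanced concurrent mode prerequisites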

HACMP and SVC details (as of June 18, 2007)


HACMP and SVC 4.1
June 18, 2007


IBM High Availability Cluster Multiprocessing (HACMP*) for AIX 5L*, V5.3 and V5.4
updates support for the IBM System Storage SAN Volume Controller (SVC) Storage
Software V4.1.
Please refer to the following information for support details. Note: TL = Technology Level
Table 1: HACMP APARs
HACMP 5.3: IY94307, IZ00051+
HACMP 5.4: IY87247, IZ00050+

Table 2: Multipathing Driver APARs for AIX

Multipathing driver   AIX 5.2 TL9 CSP   AIX 5.3 TL4       AIX 5.3 TL5                    AIX 5.3 TL5 CSP or higher
SDDPCM v2.1.1.0       IY95174           <not supported>   IY95080                        <no APARs required>
SDD v1.6.2.1          IY98568++         IY98751++         IY91487, IY95080, IY98751++    IY98751++

+ These APARs are not yet generally available. Contact IBM Support to obtain fix
packages for these APARs.
++ These APARs will not be made generally available for AIX 5.2 TL 9 and AIX 5.3 TL 5.
Contact IBM Support to obtain Ifix packages for these APARs.
Although it is not required for correct operation with SDDPCM, the configuration of Fast I/O
Failure on Fibre Channel devices is highly recommended. For information on configuring
this feature, refer to Storage Multipath Subsystem Device Driver User's Guide, page 143
at:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1
Additional requirement:
The AIX OS error daemon parameters should be tuned to avoid lost log entries (see documentation APAR IY75323 at http://www-1.ibm.com/support/docview.wss?uid=isg1IY75323). HACMP requires the buffer size be set to at least 1 MB and the log size to 10 MB. Use the following command:
errdemon -B 1048576 -s 10485760
Restriction notes for Metro Mirror:
An HACMP/XD and SVC Metro Mirror configuration with VIO is not currently supported.
Although SVC Host Name Aliases are arbitrary, for HACMP's support of Metro Mirror
they must match the node names used in the defined HACMP sites.
Resource Groups to be managed by HACMP cannot contain volume groups with both
Metro Mirror-protected and non-Metro Mirror-protected disks.

HACMP does not support Global Mirror functions of SVC Copy Services.
HACMP V5.3 does not support moving resource groups across sites.
For specific HACMP C-SPOC restrictions, refer to the HACMP/XD for Metro Mirror:
Planning and Administration Guide.
SDDPCM requires the configuration of Enhanced Concurrent Mode volume groups.
Other notes:
SDD supports both Shared or Enhanced Concurrent Mode volume groups.
Ensure that your SVC is properly configured to support SDD/SDDPCM host multipathing. This involves adding all of a host's WWPNs into a single Host object on the SVC. For example, for an HACMP node named Node_A with two WWPNs, WWPN_1 and WWPN_2, run: svctask mkhost -name Node_A -hbawwpn WWPN_1 WWPN_2

General SDDPCM details


HACMP V5 supports use of SDDPCM V2.1.1.0, or later, configured to access the
shared disks with the no-reserve reserve policy. The shared disks must be defined as
being in an Enhanced Concurrent Mode (ECM) volume group. Persistent reserve policy
is not supported in an HACMP environment.

General VIOS/p6 details


HACMP can be used with versions of the VIO server dating back to VIOS 1.2. The
latest version as of the writing of this course is VIOS 1.5. Check the IBM Techdocs Web
site for published Flashes indicating support for the version of VIOS that you intend to
implement.
The same requirements exist for HACMP implementation on the POWER6 p520 and p550 systems as for VIOS 1.5. These requirements are shown in Table 3.
Table 3: VIOS 1.5 Requirements

                               AIX 5.3               AIX 6.1
HACMP 5.3 w/ APAR IZ07791      TL7, RSCT 2.4.5.4     SP2, RSCT 2.5.0.0
HACMP 5.4.1 w/ APAR IZ02602    TL7, RSCT 2.4.5.4     SP2, RSCT 2.5.0.0


Some final things to check


Code installation
Correct filesets and prerequisites have been installed
Documentation is installed and accessible

Network setup

/etc/hosts file is configured on all nodes correctly


Name resolution works
IP and non-IP networks are configured
Subnets configured correctly
The subnet mask is identical
All interfaces on different subnets
Routing configured correctly
Test connectivity

Shared storage configured correctly


You have a written plan describing configuration and testing
procedures!
Figure 5-9. Some final things to check

Notes:
Description of checklist
This is a checklist of items that you should verify before starting to configure an HACMP
cluster. It is not a complete list because each situation is different. It would probably be
wise to develop your own checklist during the cluster planning process, and then verify
it just before embarking on the actual HACMP configuration of the cluster.

Code installation
Correct filesets includes making sure that the same HACMP filesets are installed on each node. Documentation can be installed before installing HACMP. The documentation is delivered as PDF only for HACMP 5.4.1; previous versions provided an HTML version too.


Network setup
The /etc/hosts file should have entries for all IP labels and all nodes. The file should be
the same on all nodes. Name resolution should be tested on all labels and nodes. To do
this you can use the host command. You should test address to name and name to
address and verify that they are the same on all nodes. You should ensure that a route
exists to all logical networks from all nodes. Finally, you should test connectivity by
pinging all nodes from all nodes on all interfaces.
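A brief sketch of these checks with standard AIX commands; the IP labels shown (nodea_svc, nodeb_boot1) and the address are hypothetical examples:

    host nodea_svc               # name-to-address resolution
    host 192.168.5.10            # address-to-name; results must match on all nodes
    netstat -in                  # interface addresses and subnet masks
    netstat -rn                  # a route must exist to every logical network
    ping -c 2 nodeb_boot1        # repeat from every node, on every interface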

Shared storage
Check to see that the disks are configured and recognized the same (if possible) and
can be accessed from all nodes that will share it.
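A quick way to compare the disk view from each node is sketched below (run on every node that shares the disks):

    lsdev -Cc disk               # the shared disks should be Available on every node
    lspv                         # compare PVIDs: the same PVID on two nodes
                                 # identifies the same shared disk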


Install HACMP client machine


Set up network
Configure network interface to reach cluster server node
Same subnet as service address
/etc/hosts file updated everywhere

Install prerequisites:

bos.adt.libm
bos.adt.syscalls
bos.data
rsct.compat.clients.hacmp, rsct.compat.basic.hacmp

Install HACMP client filesets:


cluster.adt.es
cluster.es
ES Client Libraries
ES Client Runtime
ES Client Utilities
cluster.msg.en_US.es
cluster.man.en_US.es

Configure /usr/es/sbin/cluster/clhosts
Can copy /usr/es/sbin/cluster/etc/clhosts.client

Test connectivity
Figure 5-10. Install HACMP client machine

Notes:
Client machine properties
A client machine is a node running AIX and only the client filesets from HACMP. It can
be used to monitor the cluster nodes as well as to test connectivity to an application
during fallover or to be a machine that is used to access a highly available application

Installing and setting up the client machine


Make sure the network is set up so that the client machine can access the cluster nodes. If the client machine is on the same LAN, then choose an address that is in the same subnet as the service address of the application that you want to monitor.
Also make sure clinfo is set up (clhosts file) to be able to find the cluster node(s). A clhosts file is generated for clients when you synchronize the cluster. The name of the file is /usr/es/sbin/cluster/etc/clhosts.client
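For example, the generated file can be copied into place on the client and clinfo started there; a sketch assuming remote copy is already set up and a client host named client1 (both assumptions):

    # On a cluster node, after cluster synchronization:
    scp /usr/es/sbin/cluster/etc/clhosts.client \
        client1:/usr/es/sbin/cluster/etc/clhosts
    # On the client machine:
    startsrc -s clinfoES                     # start the cluster information daemon
    /usr/es/sbin/cluster/clstat -a           # ASCII cluster status display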


Let's review
1. What is the first step in implementing a cluster?
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap

2. True or False?
HACMP 5.4.1 is compatible with any version of AIX V5.x.

3. True or False?
Each cluster node must be rebooted after the HACMP software is
installed.

4. True or False?
You should take careful notes while you install and configure HACMP
so that you know what to test when you are done.

Figure 5-11. Let's review

Notes:


5.2 What was installed?


What was installed


After completing this topic, you should be able to:
Describe the purpose of the major HACMP 5.4.1 components

Figure 5-12. What was installed

Notes:


The layered look


Here are the layers of software on an HACMP 5.4.1 cluster node:
Application Layer
Contains the highly available applications
that use HACMP services
HACMP Layer
Provides highly available services to
applications
RSCT, RMC Layer
Provides monitoring, event management and
coordination of subsystems for HACMP clusters
AIX Layer
Provides operating system services (SRC, snmpd)
LVM Layer
Manages disk space
at the logical level

TCP/IP Layer
Manages communication
at the logical level

Figure 5-13. The layered look

Notes:
The application layer
The topmost layer of the software stack is the application layer. Any application or service that the cluster node is making highly available is considered to be running at the application layer (in a sense, this includes rather low-level AIX facilities, such as NFS, when the cluster is acting as a highly available NFS server).

The HACMP layer


The next layer is the HACMP layer. This layer is responsible for providing a number of
services to the application layer, including:
- Tracking the state of the cluster in cooperation with the other cluster nodes
- Initiating fallovers and other recovery actions as required
- (Optionally) monitoring the applications and initiating recovery procedures when
they fail

- Doing whatever else it takes to make the applications highly available


Note that because most applications are not really aware of how they are started and
stopped or if they are being monitored and recovered or even if they are being made
highly available, the applications running within the application layer, as a rule, are
blissfully unaware of the existence of the HACMP layer or even the RSCT layer.
To make the applications highly available and to know when to start and stop, and, if
configured, monitor and recover the applications, the HACMP layer must be aware of
the overall status of the cluster including the state of the topology (which nodes,
networks and network interfaces are in working order) and the resources (which
resources are being made available where).
The HACMP layer relies upon the RSCT layer to provide a number of key services
including topology status information and a reliable messaging service.

The RSCT layer


The RSCT layer includes daemons responsible for monitoring the state of the cluster's topology, recognizing when the state of the cluster changes (for example, a node crashes), coordinating the response to these events, and keeping RSCT-aware clients informed as to the state of the cluster (HACMP is itself an RSCT-aware client). RSCT itself is distributed with AIX.

The AIX and below layers


The AIX layer represents all of the operating system services provided by AIX to
programs running on the operating system. These programs include, of course, the
programs at the application layer, the programs at the HACMP layer and the programs
at the RSCT layer.
The AIX layer takes advantage of all sorts of facilities provided by the AIX kernel. The
two that are highlighted in the diagram are the Logical Volume Manager or storage
management facility and the TCP/IP or IP networking facility. As should be clear from
the rather heavy emphasis given to storage and networking in this course so far, these
are cornerstone facilities from the perspective of HACMP.
Finally, please keep in mind that the layers in the software stack illustrated above are,
in many respects, more apparent than real. All of the layers above the AIX layer tend to
interact heavily, and directly with the AIX layer regardless of whether there are layers
between them and the AIX layer. The same can be said in many respects about the
LVM and the TCP/IP components: All of the layers above them tend to interact heavily
although usually not quite as directly with the LVM and TCP/IP components.


HACMP components and features


The HACMP software has the following components:
Cluster Manager
Cluster Secure Communication Subsystem
Reliable Scalable Cluster Technology, and Resource Monitoring and Control
(RSCT and RMC)
snmpd monitoring programs
Cluster Information Program
Highly Available NFS Server
Shared External Disk Access

Figure 5-14. HACMP components and features

Notes:
HACMP components
HACMP consists of the following components:
A cluster manager (recovery driver and resource manager)
RSCT
SNMP related facilities
The Cluster Information Program
A highly available NFS server
Shared external disk access
Cluster Secure Communication Subsystem


Cluster Manager
Is a subsystem/daemon that runs on each cluster node
Is primarily responsible for responding to unplanned events:
Recover from software and hardware failures
Respond to user-initiated events:
Request to online/offline a node
Request to move/online/offline a resource group
And so forth

Is a client to RSCT
Provides snmp retrievable status information
Is implemented by the subsystem clstrmgrES
Started in /etc/inittab and always running

Figure 5-15. Cluster manager

Notes:
The cluster managers role
The cluster manager is, in essence, the heart of the HACMP product. Its primary responsibility is to respond to unplanned events. From this responsibility flows most of the features and facilities of HACMP. For example, to respond to unexpected events, it is necessary to know when they occur; monitoring for certain failures is the job of the RSCT component.
In HACMP 5.3 and later, the clstrmgrES subsystem is always running.
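Because the subsystem is always running, its state can be checked through the SRC; a small sketch (output formats vary by HACMP level):

    lssrc -ls clstrmgrES | grep -i state     # for example: Current state: ST_STABLE
    grep -i hacmp /etc/inittab               # the entry that starts the subsystem at boot
                                             # (the entry label can vary by release)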


Cluster secure communication subsystem


It provides communication infrastructure for HACMP
HACMP provides two authentication security options:
Connection Authentication
Standard
Uses /usr/es/sbin/cluster/rhosts file and HACMP ODM files

Kerberos (SP only)


Kerberos used with PSSP.

Virtual Private Networks (VPN) using persistent labels.


VPNs are configured within AIX.
HACMP is then configured to use VPNs

Message Authentication and/or Message Encryption


HACMP provides methods for key distribution

It is implemented using the clcomdES subsystem

Figure 5-16. Cluster secure communication subsystem

Notes:
Introduction to the cluster communication subsystem
The cluster secure communication subsystem is part of HACMP 5.1 and later systems.
It provides connection level security for all HACMP-related communication, eliminating
the need for either /.rhosts files or a Kerberos configuration on each cluster node.
Although only necessary when the configuration of the cluster was being changed, the
need for these /.rhosts files was a source of concern for many customers.
This facility goes beyond eliminating the need for /.rhosts files by providing the ability to
send all cluster communication through a Virtual Private Network (VPN) using
persistent labels. Although unlikely to be necessary in most clusters, this capability will
allow HACMP to operate securely in hostile environments.
In addition, you can use Message-level authentication and Message Encryption or both
in HACMP 5.2 and later. You can have HACMP generate and distribute keys.


Leaving clcomd running


As you can see, you have options to make clcomd quite secure. You might be tempted to stop the clcomd subsystem (especially at the encouragement of the audit/security group), but remember that the facility makes more than just verification and synchronization possible. With the advent of the File Collections and Automatic Verification functions in HACMP, which rely on clcomd services, it is recommended that you leave it running. It is quite likely that the benefit of these services outweighs the potential security risk.

Only supported for HACMP generated requests


Finally, this subsystem is not supported for use by user commands outside of the
cluster manager and CSPOC. For these commands the administrator must configure
their own remote command method.


Cluster communication daemon (clcomd)


Provides secure node-to-node communications without use of
/.rhosts
Caches coherent copies of other nodes' ODMs
Establishes long-term socket connections on TCP port 6191
Implements the principle of least privilege:
Nodes no longer require root access to each other

Starts out of the /etc/inittab


Is managed by the SRC
startsrc, stopsrc, refresh

Figure 5-17. Cluster communication daemon (clcomd)

Notes:
clcomd basics
The most obvious part of the cluster secure communication facility is the cluster
communication daemon (clcomd). This daemon replaces a number of ad hoc
communication mechanisms with a single facility thus funneling all cluster
communication through one point. This funneling, in turn, makes it feasible to then use
a VPN to actually send the traffic between nodes and to be sure that all the traffic is
going through the VPN.

Efficient node-to-node communications and data gathering


clcomd's approach to supporting the verification and synchronization of cluster configuration changes has an important additional benefit. By eliminating numerous rsh calls across the cluster during the verification and synchronization operation and replacing them with a purpose-built facility, the verification and synchronization


processes are very efficient. These processes might still take a matter of minutes to complete, as comparison processing and resource manipulation may be occurring.
Other aspects of clcomd's implementation which further improve performance include:
- Caching coherent copies of each node's ODMs, which reduces the amount of information which must be transmitted across the cluster during a verification operation
- Maintaining long-term socket connections between nodes, which avoids the necessity to constantly create and destroy the short-term sessions that are a natural result of using rsh and other similar mechanisms
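A short sketch of managing and observing the daemon with the SRC commands from the visual; the TCP port number 6191 comes from the slide above:

    lssrc -s clcomdES            # confirm the daemon is active
    refresh -s clcomdES          # re-read configuration without a restart
    stopsrc -s clcomdES          # stop (rarely appropriate; see the notes above)
    startsrc -s clcomdES         # start again
    netstat -an | grep 6191      # observe the long-term socket connections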


clcomd standard connection authentication


Look for source IP address in:
Special rhosts file: /usr/es/sbin/cluster/etc/rhosts
HACMP adapter ODM

Take the following actions:


Block communication if the special rhosts file is missing
Assume new cluster if the special rhosts file is empty
Else, ensure that the rhosts file exists; then check HACMP Adapter
ODM file for authentication

Authentication is done as follows:


Connect back and ask for the hostname
Connection is considered authentic if the hostname matches, otherwise
connection is rejected

First-time pass at initial configuration time, rhosts file must


exist and be empty
More secure installations should populate the rhosts file with only
current cluster node IP addresses
Figure 5-18. clcomd standard connection authentication

Notes:
How clcomd authentication works
If the source node of the communication is not in the HACMPadapter and HACMPnode ODM files on the target node, the target clcomd daemon authenticates the in-bound session by checking the session's source IP address against a list of addresses in /usr/es/sbin/cluster/etc/rhosts and the addresses configured into the cluster itself (in other words, in the previously mentioned ODM files). To defeat any attempt at IP-spoofing (a very timing-dependent technique which involves faking a session's source IP address), each non-callback session is checked by connecting back to the source IP address and verifying who the sender is. If the source node is in the HACMPadapter and HACMPnode ODM files on the target node, the target clcomd daemon only uses the information about the source node from these ODM files to conduct the authentication.
The action taken on a request depends on the state of the /usr/es/sbin/cluster/etc/rhosts file, as shown in the visual. If a cluster node is being

moved to a new cluster or if the entire cluster configuration is being redone from
scratch, it might be necessary to empty /usr/es/sbin/cluster/etc/rhosts or manually
populate it with the IP addresses of the source node. Subsequently, the file can be
emptied, because all clcomd communications will be authenticated based on the
HACMP ODM files. The file must exist, or clcomd will fail to allow any inbound
communications. In fact, testing has shown that once the nodes are established in the
HACMP ODM, you can put anything in the /usr/es/sbin/cluster/etc/rhosts file and it will
be ignored. Again, the key thing is that the file exists and that the HACMP ODM
contains the node/adapter information for the source of the clcomd session.

First-time pass at initial configuration time

The empty /usr/es/sbin/cluster/etc/rhosts file provides a window of opportunity
between installation and when HACMP is configured. To further reduce this window,
you can edit this file just after the installation if it is felt that this window will be a
problem. If the /usr/es/sbin/cluster/etc/rhosts file is populated before the HACMP ODM
files are, its contents are the deciding factor in whether communications with another
clcomd daemon will be accepted (using the addresses in the file). Therefore, if you want
to close the hole, put the addresses of any node that would initiate a clcomd session
into the /usr/es/sbin/cluster/etc/rhosts file of any non-configured system that has the
HACMP server code installed.
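As a hedged illustration of closing that window (the peer addresses are examples, not
values from this course's lab; clcomdES is the HACMP 5.x subsystem name):

# On the freshly installed, not yet configured node:
# echo "192.168.15.29" >> /usr/es/sbin/cluster/etc/rhosts
# echo "192.168.15.31" >> /usr/es/sbin/cluster/etc/rhosts
# Some levels may need clcomd restarted to reread the file (standard AIX SRC commands):
# stopsrc -s clcomdES; startsrc -s clcomdES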


RSCT

- Is included with AIX
- Provides:
  > Scalability to large clusters
  > Cluster failure notification
  > Coordination of changes
- Includes key components:
  > Topology Services: heartbeat services
  > Group Services: coordinates and monitors state changes of an application in the cluster
  > RMC (Resource Monitoring and Control): provides process monitoring, dynamic node priority variables, and user-defined events
- Works with HACMP's Cluster Manager, which is an RSCT (Group Services) client

Figure 5-19. RSCT

Notes:
What RSCT provides
RSCT's role in an HACMP cluster is to provide:
- Failure detection and diagnosis for topology components (nodes, networks, and
network adapters)
- Notification to the cluster manager of events that it has expressed an interest in,
primarily events related to the failure and recovery of topology components
- Coordination of the recovery actions involved in dealing with the failure and recovery
of topology components (in other words, fallovers, fallbacks, and dealing with
individual NIC failures by moving or swapping IP addresses)


HACMP from an RSCT perspective

[Diagram: the HACMP Cluster Manager (HA Recovery Driver) sits at the center. The
RSCT RMC daemon (ctrmc) feeds it events from the AIX process monitor, database
resource monitor, and switch resource monitor. RSCT Topology Services exchanges
heartbeats and messages to/from the other nodes and reports to RSCT Group Services,
which connects the Cluster Manager to its peers through group membership, event
subscription, and voting protocols between nodes. The Cluster Manager drives recovery
programs, recovery commands, and HACMP event scripts.]

Figure 5-20. HACMP from an RSCT perspective

Notes:
The RSCT environment
This diagram includes all of the major RSCT components plus the HACMP cluster
manager and event scripts. It also illustrates how they communicate with each other.

Topology services
Responsible for building heartbeat rings for the purpose of detecting, diagnosing, and
reporting state changes to the RSCT Group Services component, which in turn reports
them to the Cluster Manager. Topology Services is also responsible for the transmission
of any RSCT-related messages between cluster nodes.

Group services
Associated with RSCT Topology Services is the RSCT Group Services daemon which
is responsible for coordinating and monitoring changes to the state of an application
running on multiple nodes. In the HACMP context, the application running on multiple
nodes is the HACMP cluster manager. Group Services reports failures to the Cluster
Manager as it becomes aware of them from Topology Services. The Cluster Manager
then drives cluster-wide coordinated responses to the failure through the use of Group
Services voting protocols.

Monitors
The monitors in the upper left of the diagram monitor various aspects of the local node's
state, including the status of certain processes (for example, the application, if
application monitoring has been configured), database resources, and the SP Switch (if
one is configured on the node). These monitors report state changes related to
monitored entities to the RSCT RMC Manager.

RMC manager
The RSCT RMC Manager receives notification of events from the monitors. It analyzes
these events and notifies RSCT clients of those events which they have expressed an
interest in.
The HACMP cluster manager, an RSCT client, registers itself with both the RSCT RMC
Manager and the RSCT Group Services components.

Cluster manager
After an event has been reported to the HACMP Cluster Manager, it responds to the
event using HACMP's recovery commands and event scripts. The scripts are
coordinated via the RSCT Group Services component.


Heartbeat rings

[Diagram: five cluster interfaces, 25.8.60.2 through 25.8.60.6, joined in a heartbeat ring.
Heartbeats flow one way, in order of high to low IP address.]

Figure 5-21. Heartbeat rings

Notes:
RSCT topology services functions
The RSCT Topology Services component is responsible for the detection and diagnosis
of topology component failures. As discussed in the networking unit, the mechanism
used to detect failures is to send heartbeat packets between interfaces. Rather than
send heartbeat packets between all combinations of interfaces, the RSCT Topology
Services component sorts the IP addresses of the interfaces on a given logical IP
subnet and then arranges to send heartbeats in a round robin fashion from high to low
IP addresses in the sorted list. For non-IP networks (like rs-232 or Heartbeat on Disk),
addresses are assigned to the adapters that form the endpoints of the network and
are used by Topology Services like IP addresses for routing/monitoring the heartbeat
packets.


Example
For example, the IP addresses in the foil can be sorted as 25.8.60.6, 25.8.60.5,
25.8.60.4, 25.8.60.3, and 25.8.60.2. This ordering results in the following heartbeat path:
25.8.60.6 --> 25.8.60.5 --> 25.8.60.4 --> 25.8.60.3 --> 25.8.60.2 --> 25.8.60.6
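If you want to see the rings that Topology Services has actually built on a running
cluster, the standard AIX SRC long-status query shows the heartbeat networks, ring
membership, intervals, and sensitivity (the exact output format varies by RSCT level):

# lssrc -ls topsvcs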


HACMP's SNMP support

- HACMP uses SNMP to provide:
  > Notification of cluster events
  > Cluster configuration/state information
- Support in HACMP 5.3 and later is provided by the Cluster Manager
  > A client (smux peer) of AIX's snmpdv3
- Support consists of:
  > Maintaining a management information base (MIB)
  > Responding to SNMP queries for HACMP information
  > Generating SNMP traps
- ClinfoES and HAView use SNMP
  > Available to any SNMP manager and the snmpinfo command

Figure 5-22. HACMP's SNMP support

Notes:
HACMP support of SNMP
In HACMP 5.3 and later, SNMP manager support is provided by the cluster manager
component. This SNMP manager allows the cluster to be monitored via SNMP queries
and SNMP traps. In addition, HACMP includes an extension to the Tivoli NetView
product called HAView; this extension can be used to make Tivoli NetView
HACMP-aware. The clinfo daemon, as well as any SNMP manager and the snmpinfo
command, can interface to this SNMP manager. This is discussed in more detail in the
course HACMP Administration II: HACMP Administration and Problem Determination.
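As a quick illustration of querying the HACMP MIB directly with snmpinfo, using the MIB
definitions file shipped with HACMP (run on a cluster node; output omitted here):

# snmpinfo -m dump -o /usr/es/sbin/cluster/hacmp.defs cluster

This dumps the cluster branch of the HACMP MIB, including the cluster name, state,
and substate.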


Cluster information daemon (clinfo)

- Is an SNMP-aware client to the Cluster Manager
- Provides:
  > A cluster information API to the HACMP SNMP manager
    - Focused on providing HACMP cluster information
    - Easier to work with than the SNMP APIs
  > Support for ARP cache issues
- Is used by:
  > The clstat command
  > Customer-written utility/monitoring tools
- Implemented as the clinfoES subsystem

Figure 5-23. Cluster information daemon (clinfo)

Notes:
What the clinfo daemon provides
The clinfo daemon provides an interface (covered in Unit 3) for dealing with ARP
cache-related issues, as well as an Application Program Interface (API) which can be
used to write C and C++ programs that meet customer-specific needs related to
monitoring the cluster.

Where clinfo runs
The clinfo daemon can run on HACMP cluster server nodes or on any machine which
has the clinfo code installed.

Clinfo is required for some status commands
Clinfo must be running on a node or client machine to use any of the clstat-related
commands (clstat, xclstat, clstat.cgi).

Starting clinfo
Starting clinfo on an HACMP server node:
The clinfo daemon can be started in a number of ways (see the HACMP
Administration Guide), but probably the best way is to start it along with the rest of
the HACMP daemons by setting the Startup Cluster Information Daemon? field to
true when using the smit Start Cluster Services screen (which will be discussed in
the next unit).
Note that an option exists in HACMP 5.4.1 and later to start clinfo for consistency
groups. This support is for HACMP/XD with Metro Mirror replication.
Starting clinfo on a client:
Use the /usr/es/sbin/cluster/etc/rc.cluster script or the standard AIX startsrc
command to start clinfo on a client: startsrc -s clinfoES
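For example, starting the subsystem on a client and then confirming that clstat can talk
to it might look like this (standard AIX SRC commands; clstat -a is the ASCII display
mode):

# startsrc -s clinfoES
# lssrc -s clinfoES
# /usr/es/sbin/cluster/clstat -a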


Highly available NFS server support

- The cluster administrator can:
  > Define NFS exports at the directory level to all clients
  > Define NFS mounts and network to HACMP nodes
  > Specify export options for HACMP to set
- NFS V2/V3
  > HACMP preserves file locks and dupcache across fallovers
  > Limitations:
    - Lock support is limited to two-node clusters
    - The resource group is only active on one node at a time
- NFS V4
  > Requires a Stable Storage location accessible from all nodes in the resource group
  > The resource group can have more than two nodes
  > An NFSv4 application server and monitor are automatically added
  > Requires a new fileset to be installed
- A combination of V2/V3 + V4 is supported

Figure 5-24. Highly available NFS server support

Notes:
HACMP NFS V2/V3 support
The HACMP software provides the following availability enhancements to NFS V2/V3
operations:
- Reliable NFS server capability that allows a backup processor to recover current
NFS activity should the primary NFS server fail, preserving the locks on NFS
filesystems and the duplicate request cache
- Ability to specify a network for NFS mounting
- Ability to define NFS exports and mounts at the directory level
- Ability to specify export options for NFS-exported directories and filesystems

NFS V2/V3 limitations
- The locking function is available only for 2-node clusters
- The resource group must behave as non-concurrent (active on one node at a time)

HACMP NFS V4 support

NFS V4 is included in AIX 5.3/6.1. NFS V4 with HACMP 5.4.1 and later provides a SMIT
path to configure NFS V4 exports, rather than having to edit /etc/exports. Configuring both
NFS V2/V3 and NFS V4 filesystems in the same resource group is supported. The
configuration of the NFS V4 filesystems into resource groups is made simple through the
use of a Configuration Assistant. The HACMP support automatically builds an application
monitor for monitoring the NFS V4 daemons, exports, and cross-mounts, and provides
enhanced verification methods to catch known configuration issues.
New fileset
To use NFS V4 filesystems in a resource group, the fileset cluster.es.nfs.rte must be
installed.
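A quick way to confirm that the fileset is present before configuring NFS V4 resources
(standard AIX command; output omitted):

# lslpp -L cluster.es.nfs.rte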


Shared external disk access

- HACMP provides two types of shared disk support:
  > Serially reusable shared disks:
    - Varied on by one node at a time under the control of HACMP
    - LVM or RSCT ensures no access by two nodes at once
    - Two types of volume groups: non-concurrent mode, or Enhanced Concurrent Mode running in non-concurrent mode
  > Concurrent access shared disks:
    - Used by concurrent applications writing to raw logical volumes
    - One type of volume group: Enhanced Concurrent Mode running in concurrent mode
- The bos.clvm.enh fileset is required for Concurrent/Enhanced Concurrent

Figure 5-25. Shared external disk access

Notes:
Shared disk support
As you know by now, HACMP supports shared disks. See the shared storage unit for
more information on HACMP's shared external disk support. Recall that enhanced
concurrent mode can be used in a non-concurrent mode to provide heartbeat over disk
and fast disk takeover for resource group policies where the resource group is active on
only one node at a time.
Note that the bos.clvm.enh fileset is required for enhanced concurrent support, even if
using it in non-concurrent mode.
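To check for the fileset and confirm that a volume group is enhanced concurrent
capable (standard AIX commands; xwebvg is this course's example volume group):

# lslpp -L bos.clvm.enh
# lsvg xwebvg | grep -i concurrent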


Checkpoint
1. Which component detects an adapter failure?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

2. Which component provides SNMP information?
   a. Cluster Manager
   b. RSCT
   c. clsmuxpd
   d. clinfo

3. Which component is required for clstat to work?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

4. Which component removes the requirement for the /.rhosts file?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

Figure 5-26. Checkpoint

Notes:


Unit summary
Having completed this unit, you should be able to:
- Explain where installation fits in the implementation process
- Describe how to install HACMP 5.4.1
- List the prerequisites for HACMP 5.4.1
- Describe the installation process for HACMP 5.4.1
- List and explain the purpose of the major HACMP 5.4.1 components

Figure 5-27. Unit summary

Notes:


Unit 6. Initial cluster configuration


What this unit is about
In this unit, you will learn how to configure a cluster using the SMIT
HACMP interface. You will learn how to perform simple and more
advanced cluster configuration. You will also learn how and when to
verify and synchronize your cluster.

What you should be able to do


After completing this unit, you should be able to:
- Configure a mutual takeover HACMP 5.4.1 cluster
  > Use the standard path
- Configure a standby HACMP 5.4.1 cluster
  > Use the two-node Configuration Assistant
- Configure topology to include:
  > IP address takeover via alias
  > Non-IP networks (rs232, diskhb)
  > Persistent address
- Verify, synchronize, and test a cluster
- Start and stop cluster services
- Save a cluster configuration
How you will check your progress


Checkpoint
Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1 Installation Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1 Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1 Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1 Troubleshooting Guide
http://www-03.ibm.com/systems/p/library/hacmp_docs.html
HACMP manuals


Unit objectives
After completing this unit, you should be able to:
- Configure a mutual takeover HACMP 5.4.1 cluster
  > Use the Initialization and Standard Configuration path (Standard path)
  > Use the Two-Node Cluster Configuration Assistant (Two-node Assistant)
- Configure HACMP topology to include:
  > IP address takeover via alias (the default in the Standard path)
  > Non-IP networks (rs232, diskhb)
  > Persistent address
- Verify, synchronize, and test a cluster
- Start cluster services
- Save the cluster configuration

Figure 6-1. Unit objectives

Notes:
Objectives
This unit will show how to configure a 2-node hot-standby or mutual takeover cluster
with a heartbeat-on-disk non-IP network using the standard configuration menus.
Follow the markers at the bottom of the screens to see the steps that extend the basic
hot-standby to a mutual takeover. The unit then demonstrates how to start up and shut
down Cluster Services, followed by the steps necessary to modify the configuration of
the cluster to add a persistent IP label, add a heartbeat-on-disk non-IP network, and
synchronize the changes. The final step is making a snapshot backup of the cluster
configuration.
You will be walked through the methods of configuring the cluster using the Initialization
and Standard Configuration path. You will make the above-mentioned extensions using
the Extended Configuration path. You will also see the simplest, most limited method;
that is, the Two-Node Configuration Assistant.


What we are going to achieve

- Either: a two-node hot standby configuration (active/passive)
  > Resource group xwebgroup with usa as its home (primary) node and uk as its backup node
- Or: a two-node mutual takeover configuration (active/active)
  > A second resource group with uk as its home (primary) node and usa as its backup node

[Diagram: nodes usa (persistent label usaadm) and uk (persistent label ukadm), joined
by two non-IP networks: heartbeat on disk and rs-232. Look for the marker that signifies
a mutual takeover task (repeat the step) in the slides that follow.]

Figure 6-2. What we are going to achieve

Notes:
Configuring either a standby or a mutual takeover configuration
During this course, you and your team will configure a two-node cluster. You will be
guided through the process of creating a mutual takeover cluster using the standard
path. To adapt this to a hot-standby cluster, omit the steps that involve creating the
second resource group and its content. The standard path is ideal for creating a cluster
because it gives you the ability to use the pick lists and it automates some steps. It
requires that you have a solid understanding of your environment and the way HACMP
works to successfully configure the cluster.
The two-node assistant is mentioned in the lecture. It can be used to create a simple
hot-standby cluster, with one resource group only. That one resource group will contain
all the non-rootvg volume groups present on the node where the configuration is being
done.


The X in the figure represents the application xwebserver and the arrow represents
what happens on a fallover. The persistent addresses and both non-IP networks that
will be added in this unit are also shown.
The cluster will be tested for reaction to node, network, and network adapter failure, and
later in the week, we will also configure additional features, including NFS export and
cross-mount.


Where are we in the implementation?

✓ Plan for network, storage, and application
  > Eliminate single points of failure
✓ Define and configure the AIX environment
  > Storage (adapters, LVM volume group, filesystem)
  > Networks (IP interfaces, /etc/hosts, non-IP)
  > Application start and stop scripts
✓ Install the HACMP filesets and reboot
- Configure the HACMP environment
  > Topology: cluster, node names, HACMP IP and non-IP networks
  > Resources, resource group, attributes:
    - Resources: application server, service label
    - Resource group: identify name, nodes, policies
    - Attributes: application server, service label, VG, filesystem
  > Synchronize
- Start Cluster Services
- Test configuration
- Save configuration

Figure 6-3. Where are we in the implementation?

Notes:
Ready for configuration
Now that the HACMP filesets are installed, we can start to configure HACMP.

Where do we go from here?


As mentioned on the previous visual, we will configure a mutual takeover configuration
with two applications and two resource groups using the Standard Configuration
method.
Finally, in this topic, we will use the extended path to deal with some initial configuration
choices that cannot be done with the standard path.


The topology configuration

Here's the key portion of the /etc/hosts file used in this unit:

192.168.15.29   usaboot1   # usa's first interface IP label
192.168.16.29   usaboot2   # usa's second interface IP label
192.168.5.29    usaadm     # persistent node IP label on usa
192.168.15.31   ukboot1    # uk's first interface IP label
192.168.16.31   ukboot2    # uk's second interface IP label
192.168.5.31    ukadm      # persistent node IP label on uk
192.168.5.92    xweb       # the IP label for the application normally resident on usa

- Hostnames: usa, uk
- usa's network configuration (defined via smit chinet):
  en0 - 192.168.15.29
  en1 - 192.168.16.29
- uk's network configuration:
  en0 - 192.168.15.31
  en1 - 192.168.16.31
- These network interfaces are all connected to the same physical network
- The subnet mask is 255.255.255.0 on all networks/NICs
- An enhanced concurrent mode volume group "xwebvg" has been created to support the xweb application and will be used for a disk non-IP heartbeat network

Figure 6-4. The topology configuration

Notes:
A sample network configuration
Every discussion must occur within a particular context; the above network
configuration is the context within which the first phase of this unit will occur. Refer back
to this page as required over the coming visuals.
Note that the addresses are set up to support IPAT via aliasing. The service address
would have been on the same subnet as one of the boot adapters if IPAT via
replacement were to be used.
Also note that an understanding of the physical layout of the adapters in each system is
critical to ensure that the cable attachments are going to the correct enX in AIX. This is
true whether you're dealing with a standalone system or an LPAR with adapters in
drawers. It is obviously not an issue with virtual adapters.
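One hedged way to confirm that the boot addresses really landed on the intended
interfaces, run on each node (standard AIX commands; the addresses are the ones
from the visual):

# netstat -in | grep -E 'en0|en1'
# ifconfig en0     # shows one interface's address and subnet mask in detail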


Configuration methods

HACMP provides two menu paths with three methods to configure topology and resources:
- Initialization and Standard Configuration
  > Two-node cluster configuration assistant
    - Limited configuration: only supports a two-node hot standby cluster
    - Builds the cluster configuration based on the AIX configuration:
      > All adapter addresses treated as boot addresses
      > All volume groups assigned to one resource group
    - Creates everything needed for a simple cluster (topology, resources, resource group):
      > No persistent addresses
      > No non-IP network other than heartbeat on disk (and only if an enhanced concurrent mode volume group is present)
  > Standard configuration
    - Topology done in one step, based on the IP addresses configured
    - You then must configure resource groups and synchronize
    - Desirable method to create more than a two-node hot standby cluster
- Extended Configuration
  > More steps, but provides access to all the options

Figure 6-5. Configuration methods

Notes:
Configuration methods
- Standard Configuration
With this method, you must do the following tasks:
i. Topology (simplified via Configure an HACMP Cluster and Nodes)
ii. Configure Resources and Resource Groups
iii. Verify and Synchronize
- Two-Node Cluster Configuration Assistant
With this method, all the steps of Standard Configuration are done at once, including
adding a non-IP disk heartbeat network if you created an enhanced concurrent
volume group. Note that this is a simple two-node configuration with one resource
group containing all configured volume groups. This can be a starting point for
creating a more robust cluster but should not be viewed as a shortcut to creating a
cluster without a thorough understanding of how HACMP works.

- Extended Configuration
With this method you follow similar steps as the Standard Configuration but
Topology has more steps and there are many more options. Some options can only
be done using this method, such as adding a non-IP network.


Planning and base configuration

- Configure your boot addresses on the interfaces on all systems
- Put all boot, service, and persistent addresses in /etc/hosts on all systems
- Create the volume groups to be used by the applications
  > Enhanced Concurrent Mode volume groups recommended
- Plan the configuration path for both nodes
  > usaboot1 and ukboot1 in our example
- Plan the application server name = xwebserver
- Plan the resource group name = xwebgroup
- Ensure that the application server start and stop scripts exist and are put on usa
- Plan the service IP label = xweb

Figure 6-6. Planning and base configuration

Notes:
Base configuration
Prior to using any of the methods to configure the cluster, there are basic AIX
configuration steps that must be performed. As described in the unit on networking
considerations, you chose IP addresses and subnets to match your IP address
takeover method. Now you must ensure that those boot addresses are configured on
each of the cluster nodes' network adapters. Take care to ensure that you have
configured these addresses correctly, including the subnet mask. When using either the
Two-node assistant or the Standard path, the addresses on the adapters for all the
systems being configured into the cluster are used to create the adapter and network
objects in the HACMP ODM. A simple mistake here will result in incorrect network
configurations in HACMP.
Next, you must ensure that all the addresses (boot, service, and persistent) are in the
/etc/hosts files for all the systems in the cluster. Check for resolution, forward and
reverse, by address and by name on all the systems in the cluster. Then verify that you
can reach all the boot addresses from each system via ping (including the local
addresses).
Now switch to your storage configuration. To instruct HACMP to manage your
application's volume groups, you must add those volume groups to resource groups. To
minimize the risk of error in data entry, add the volume groups to the resource groups
using a pick list. To do that, the volume groups must be configured prior to the resource
group configuration (and an HACMP discovery must be done). If you use the two-node
assistant, all volume groups (other than rootvg) will be picked up and used in the
resource group that is configured, so take caution here. If you use the Standard path,
you choose the volume groups to place in the resource groups; choosing them from a
pick list is the right approach. Configure at least one Enhanced Concurrent Mode
volume group for use in a heartbeat-on-disk non-IP network.
You would ensure that the start and stop scripts were placed on all the systems in the
cluster and that you specify an interface name/address for all the other systems when
configuring the cluster. In our example, you'd ensure that the scripts were on usa and
that you chose an interface name/address for the other node (uk).
You will choose application server and resource group names when you configure them
using Initialization and Standard Configuration. As you will see a little later, if you use
the Two-node Configuration Assistant, the application server name will be used to
generate the HACMP names for the cluster and resource group.
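A small sketch of the resolution and reachability checks just described (ksh; the label
list is this unit's example set and would be adjusted to your own):

# for h in usaboot1 usaboot2 usaadm ukboot1 ukboot2 ukadm xweb
> do host $h; done                 # forward resolution for every label
# for b in usaboot1 usaboot2 ukboot1 ukboot2
> do ping -c 1 $b; done            # only the boot addresses should answer at this stage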


The top-level HACMP smit menu


# smit hacmp
HACMP for AIX
Move cursor to desired item and press Enter.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools
Cluster Simulator

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-7. The top-level HACMP smit menu

Notes:
The main HACMP smit menu
This is the top-level HACMP smit menu. You'll often find it simplest to get here using the
smit fastpath shown above.
As implied by the # prompt, there is little point in being here if you don't have root
privileges!

Starting at the main smit menu

If you're interested in starting at the top-level smit screen, which everyone familiar with
AIX would know, HACMP is under Communications Applications and Services; look
for HACMP for AIX in that menu. More often, this menu would be reached by entering
the command smit hacmp or smitty hacmp, but for the sake of completeness, we will
start at the beginning of smit.


The standard configuration method


Initialization and Standard Configuration
Move cursor to desired item and press Enter.
Configuration Assistants
Configure an HACMP Cluster and Nodes
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
Display HACMP Configuration
HACMP Cluster Test Tool

What you will see, step-by-step


Start with Configure an HACMP Cluster and Nodes
F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-8. The standard configuration method

Notes:
The initialization and standard configuration menu
This method is preferred for all initial cluster configurations except the most simple
two-node, hot-standby, one-resource-group, one-volume-group configuration. For those
simpler configurations, you can use the Two-node Configuration Assistant. Regardless
of your cluster's complexity, it is good practice to use the Initialization and Standard
Configuration path (referred to as the Standard path) for all your cluster configurations
because it requires you to be aware of the details of your configuration. The importance
of this can't be overstated.
Configuration changes made using the HACMP Standard path smit screens do not take
effect until they are verified and synchronized (see the third selection from the bottom in
this menu). Until then, they exist only on the node from which the configuration work is
performed. During synchronization, the files are propagated to the other nodes, and
HACMP will be dynamically reconfigured if there are active cluster nodes. More about
dynamic reconfiguration in a later unit.

Note however that the Two-node Configuration Assistant does do the synchronization
step. More on the Two-node Configuration Assistant later.

The method
The menu shows the tasks as they are to be performed. Each follows the other with
each having submenus to be traversed.
You will start by configuring the cluster itself. This is done via the Configure an HACMP
Cluster and Nodes option. This will build the cluster, nodes, adapters and network
objects for IP based networks. Non-IP networks will be added later.
That is followed by the configuration of the resources that will be made highly available.
This includes the service addresses, application servers (specifying your application
start and stop script names) and the option to use C-SPOC to create your shared LVM
structures. This is done via the Configure Resources to Make Highly Available
option.
To make the resources available to HACMP for management, you must put them in
resource groups. You will create resource group objects and then fill them with the
resources that were defined above: the nodes, listed in a specific order for acquiring the
resources, and the service addresses and volume groups that support the application.
This is done via the Configure HACMP Resource Groups option.

Caution
If changes are made on one node but not synchronized, and then more changes are
made on a second node and synchronized, the changes made on the first node are lost.
To avoid losing work, make sure that you don't flip back and forth between nodes while
doing configuration work (that is, work on only one node, at least until you've
synchronized your changes).

Recommendation
Pick one of your cluster nodes to be the one node that you use to make changes.

Configuration assistants
Besides the Two-node Configuration Assistant, HACMP provides, via an additional
feature, configuration assistants for WebSphere, Oracle, and DB2, called Smart
Assistants.


Add nodes to an HACMP cluster

Configure Nodes to an HACMP Cluster (standard)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                  [Entry Fields]
* Cluster Name                                    [ibmcluster]
  New Nodes (via selected communication paths)    [usaboot1 ukboot1]
  Currently Configured Node(s)

Add the cluster name and resolvable names to be used to communicate with the nodes.

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-9. Add nodes to an HACMP cluster

Notes:
Input for the standard configuration method
Assuming your network planning and setup was done correctly, you need only decide
on a name for the cluster and choose one IP address/hostname for each node that will
be in the cluster, including the node where you see this screen. This is not necessarily
the HACMP node name that will be assigned to the node, it is only a
resolvable/reachable address that can be used to gather information for the creation of
the HACMP topology configuration. The hostname of each node that is found is used as
the HACMP node name.
Notice that you can select the interfaces from a pick list (from the local /etc/hosts file)
and at this point in time, the Currently Configured Node(s) field is empty.


What did we get?


# /usr/es/sbin/cluster/utilities/cltopinfo
Cluster Name: ibmcluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 1 network(s) defined
NODE usa:
Network net_ether_01
usaboot1 192.168.15.29
usaboot2 192.168.16.29
NODE uk:
Network net_ether_01
ukboot1 192.168.15.31
ukboot2 192.168.16.31

No resource groups defined


Figure 6-10. What did we get?

Notes:
Output from standard configuration
This step has created the cluster, an IP network, and non-service IP labels (boot
addresses). The network objects are created based on the addresses and subnet
masks that are configured on the adapters in the nodes specified in the previous
screen. This configuration exists only on the node where the command was run; later
we will see the synchronization process.
Notice that there is no non-IP network, and there are no resources and no resource
groups yet, when using the standard configuration method.
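If you prefer a one-line-per-adapter view of the same topology information, the cllsif
utility shipped in /usr/es/sbin/cluster/utilities can be used (output omitted here):

# /usr/es/sbin/cluster/utilities/cllsif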


Now define highly available resources


Initialization and Standard Configuration
Move cursor to desired item and press Enter.
Configuration Assistants
Configure an HACMP Cluster and Nodes

Configure Resources to Make Highly Available


Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
Display HACMP Configuration
HACMP Cluster Test Tool

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-11. Now define highly available resources

Notes:
Not done yet
Because Configure an HACMP Cluster and Nodes does the topology only, there is
more to do using the Standard path:
- Application server and service address resources must be created using
Configure Resources to Make Highly Available.
- A resource group with policies and attributes must be created using Configure
HACMP Resource Groups.
- The Extended Configuration method must be used to add non-IP heartbeat networks.
- The cluster definitions must be propagated to the other nodes using Verify and
Synchronize.


Game plan
These steps will follow, starting with the Configure Resources to Make Highly
Available option.


Start with service addresses

smit hacmp -> Initialization and Standard Configuration

Configure Resources to Make Highly Available

Move cursor to desired item and press Enter.

  Configure Service IP Labels/Addresses
  Configure Application Servers
  Configure Volume Groups, Logical Volumes and Filesystems
  Configure Concurrent Volume Groups and Logical Volumes

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-12. Start with service addresses

Notes:
The first step in definition of highly available resources
Again, the process will be to follow the menus. The first step is to define the Service IP
labels.


Adding service IP labels


Configure Service IP Labels/Addresses
Move cursor to desired item and press Enter.
Add a Service IP Label/Address
Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-13. Adding service IP labels

Notes:
The Configure Service IP Labels/Addresses menu
This is the menu for managing service IP labels and addresses within the standard
configuration path. To define a new service label, choose Add a Service IP
Label/Address.


Add xweb service label (1 of 2)

Add a Service IP Label/Address (standard)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                      [Entry Fields]
* IP Label/Address                    []              +
* Network Name                        []              +

  +------------------------------------------------------------+
  |                      IP Label/Address                       |
  |                                                             |
  |  Move cursor to desired item and press Enter.               |
  |                                                             |
  |    (none)    ((none))                                       |
  |    usaadm    (192.168.5.29)                                 |
  |    ukadm     (192.168.5.31)                                 |
  |    yweb      (192.168.5.70)                                 |
  |    xweb      (192.168.5.92)                                 |
  |                                                             |
  |  F1=Help     F2=Refresh    F3=Cancel    F8=Image            |
  |  F10=Exit    Enter=Do      /=Find       n=Find Next         |
  +------------------------------------------------------------+

Figure 6-14. Add xweb service label (1 of 2)

Notes:
Selecting the service label
This is the HACMP smit screen for adding a service IP label in the standard
configuration path.
The popup for the IP Label/Address field gives us a list of the IP labels that were found
in /etc/hosts but not associated with NICs. This could be quite a long list, depending on
how many entries there are in the /etc/hosts file, although in practice the list is fairly
short, as /etc/hosts on cluster nodes tends to include only IP labels which are important
to the cluster.
The service IP label that we intend to associate with the xweb resource group's
application is xweb.


Add xweb service label (2 of 2)

Add a Service IP Label/Address (standard)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                      [Entry Fields]
* IP Label/Address                    [xweb]           +
* Network Name                        [net_ether_01]   +

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Repeat the process for every service address to be configured; if mutual takeover cluster, defining yweb

Figure 6-15. Add xweb service label (2 of 2)

Notes:
Choosing the network name
Although not shown, another menu will display, prompting you to choose the network to
which this service IP label belongs. The automatically generated network names are a
bother to type, so we've used the popup list, which contains the only IP network defined
on this cluster.
Notice that the popup list entry names the network and indicates the IP subnets
associated with each network. This is potentially useful information at this point, as we
must specify a service IP label which is not in either of these subnets to satisfy the
rules for IPAT via IP aliasing.

Menu filled in
This screen shows the parameters for the xweb resource group's service IP label.
When we're sure that this is what we intend to do, press Enter to define the service IP
label. The label is then available from a pick list when you add resources to a resource
group later.

Mutual takeover configuration

At this point, you would repeat the step to define all the service IP labels for all the
applications. If creating a mutual takeover configuration, you would specify the service
IP label for the application that will run on the other node. In our case, that means
defining the yweb interface for the second resource group.


Continue with application servers

smit hacmp -> Initialization and Standard Configuration

Configure Resources to Make Highly Available

Move cursor to desired item and press Enter.

  Configure Service IP Labels/Addresses
  Configure Application Servers
  Configure Volume Groups, Logical Volumes and Filesystems
  Configure Concurrent Volume Groups and Logical Volumes

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-16. Continue with application servers

Notes:
The next step is definition of highly available resources
Continuing to follow the menus, the next step is to define the application servers.


Add xwebserver application server (1 of 2)

Configure Application Servers


Move cursor to desired item and press Enter.
Add an Application Server
Change/Show an Application Server
Remove an Application Server

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-17. Add xwebserver application server (1 of 2)

Notes:
Configuring the application server resource
We've now got to define an application server for the xweb resource group. The
Configure Application Servers menu displays under the Configure Resources to
Make Highly Available menu in the standard configuration path.


Add xwebserver application server (2 of 2)

Add Application Server

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                        [Entry Fields]
* Server Name           [xwebserver]
* Start Script          [/usr/local/scripts/startxweb]
* Stop Script           [/usr/local/scripts/stopxweb]

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Repeat the process for every application server to be configured; if mutual takeover cluster, creating a ywebserver

Figure 6-18. Add xweb application server (2 of 2)

Notes:
Filling out the add application server menu
An application server has a name and consists of a start script and a stop script. Use
full paths for the script names. The server name is then available from a pick list when
adding resources to a resource group later.

Review of start and stop scripts

The start script is invoked by HACMP when it needs to start the application. The stop
script is invoked when HACMP needs to stop the application (typically during a stop of
cluster services or as part of a fallback to a higher priority node).
The start script should first verify that all the required resources are actually available
and log a clear and useful message if it detects a problem. If the start script doesn't
check for the required resources, then the application might seem to function for quite
some time before someone realizes that a critical resource isn't available.
The start script should then start the application. It is a good idea to then wait until it is
sure that the application has completely started. The cluster manager doesn't verify that
the application has started or that the start script exits with a 0 return code. Of course, if
you configure an application monitor, the cluster manager will monitor the startup and/or
the continuous running of the application. Application monitors are not covered in this
class; they are covered in detail in the HACMP System Administration II class, AU610.
The stop script's responsibility is to stop the application. It must not exit until the
application is totally stopped, as HACMP will start to unmount filesystems and release
other resources as soon as the stop script terminates. The attempt to release these
resources might fail if there are remnants of the application still running.
The start and stop scripts must exist and be executable on all cluster nodes defined in
the resource group (that is, they must reside on a local non-shared filesystem) or you
will not be able to verify and synchronize the cluster. If you are using the auto-correction
facility of verification, the start/stop scripts from the node where they exist will be copied
to all other nodes.
HACMP 5.2 and later provides a file collection facility to help keep the start and stop
scripts in sync. Be sure this is what you want; in most cases it is acceptable.
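As a minimal sketch of these points (ksh; the data directory and daemon path are
hypothetical stand-ins, not names from this course):

#!/bin/ksh
# /usr/local/scripts/startxweb -- minimal example start script
# 1. Verify that a required resource is available before starting.
if [ ! -d /xweb/data ]; then                 # assumed shared filesystem; adjust to yours
    echo "$(date): /xweb/data missing, xweb not started" >> /tmp/xweb.start.log
    exit 1
fi
# 2. Start the application, then allow it time to come up completely.
/usr/local/xweb/bin/xwebd &                  # hypothetical application daemon
sleep 5                                      # crude settle time; a real script would poll
exit 0

The matching stop script would not exit until every remnant of the application is gone,
since HACMP begins releasing resources the moment it returns.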

Mutual takeover configuration


At this point you would repeat the step to define all the application servers for all the
applications. If creating a mutual takeover configuration, you would specify the
application server for the application that will run on the other node. In our case we will
create the ywebserver.


Configure volume groups (optional)

- At this point you can proceed to the next item in the menu to make your volume groups via C-SPOC
- You may have done this earlier when you configured the basics (Planning and Base Configuration)
- If you choose to, follow the menus

Configure Resources to Make Highly Available

Move cursor to desired item and press Enter.

  Configure Service IP Labels/Addresses
  Configure Application Servers
  Configure Volume Groups, Logical Volumes and Filesystems
  Configure Concurrent Volume Groups and Logical Volumes

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Create volume groups for every application, regardless of which system the application will normally run on, if mutual takeover cluster

Figure 6-19. Configure volume groups (optional)

Notes:
Volume group creation
Planning your volume groups is critical, as we've discussed in a previous unit. Creating
the volume groups can be done outside of the cluster configuration process or
integrated with it; the process used when following the Standard path is through
C-SPOC. We'll be discussing C-SPOC a little later. It is recommended that you use
C-SPOC to create your volume group definitions, whether you do it at this point or
independent of the cluster configuration process. We will learn much more later in the
course about the process you'd use if you chose to define your volume groups here.
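If you do create the volume groups here, the same C-SPOC screens can also be
reached directly with the smit fastpath for HACMP system management (shown for
reference; the menu walk-through comes later in the course):

# smit cl_admin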

Mutual takeover configuration


At this point you would repeat the step to define all the volume groups for all the
applications. If creating a mutual takeover configuration, you would create, from one
node, all the volume groups for all the applications, regardless of which node they'll run
on, specifying only the nodes that the application may run on.

Discover the volume groups for pick-lists


Run discovery if you created Volume Groups in the previous step
Our first look at the Extended Configuration Path
Extended Configuration
Move cursor to desired item and press Enter.
Discover HACMP-related Information from Configured Nodes
Extended Topology Configuration
Extended Resource Configuration
Extended Cluster Service Settings
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets
Import Cluster Configuration from Online Planning Worksheets File
Extended Verification and Synchronization
HACMP Cluster Test Tool

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-20. Discover the volume groups for pick-lists

Notes:
Now run discovery
If you chose to create volume groups, then you need to re-generate the pick lists. This
requires using the Extended Configuration menu shown above, and it applies to
network objects as well as LVM objects.
Pick list information is kept in flat files: the volume group information is in
/usr/es/sbin/cluster/etc/config/clvg_config, and the IP information is kept in
/usr/es/sbin/cluster/etc/config/clip_config.


Adding the xwebgroup resource group


smit hacmp -> Initialization and Standard Configuration ->
Configure HACMP Resource Groups
Configure HACMP Resource Groups
Move cursor to desired item and press Enter.
Add a Resource Group
Change/Show a Resource Group
Remove a Resource Group
Change/Show Resources for a Resource Group (standard)

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 6-21. Adding the xwebgroup resource group

Notes:
Menu to add a resource group
Now, we are ready to create the xwebgroup Resource Group definition.


Setting name, nodes, and policies

Add a Resource Group

                                                   [Entry Fields]
* Resource Group Name                              [xwebgroup]
* Participating Nodes (Default Node Priority)      [usa uk]
  Startup Policy                                   Online On Home Node O>   +
  Fallover Policy                                  Fallover To NextPrio>    +
  Fallback Policy                                  Fallback To Higher Pr>   +

F1=Help      F2=Refresh     F3=Cancel    F8=Image
F9=Shell     F10=Exit       Enter=Do

Repeat the process for every resource group to be configured; if mutual takeover cluster, creating a ywebgroup with uk,usa

Figure 6-22. Setting name, nodes, and policies

Notes:
Filling out the Add a Resource Group menu
We'll call this resource group xwebgroup. It will be defined to operate on two nodes:
usa and uk. The order is important, with usa being the home (highest priority) node.
The policies will be chosen as listed in the visual. Depending on the type of resource
group and how it is configured, the relative priority of nodes within the resource group
might be quite important.

Mutual takeover configuration


Another resource group would be defined, for example, ywebgroup, with the order of
the participating nodes reversed.
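If you want to confirm what has been defined so far from the command line, the
cluster utilities directory offers a simple listing command; a minimal check
might look like this (the output shown is what you would expect for this
example, not a captured transcript):

  # List the resource groups currently defined to HACMP:
  /usr/es/sbin/cluster/utilities/cllsgrp
  # xwebgroup
  # ywebgroup
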


Adding resources to the xwebgroup RG (1 of 2)


  Configure HACMP Resource Groups

  Move cursor to desired item and press Enter.

    Add a Resource Group
    Change/Show a Resource Group
    Remove a Resource Group
    Change/Show Resources for a Resource Group (standard)

  +----------------------------------------------------------------------+
  |                       Select a Resource Group                        |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |    xwebgroup                                                         |
  |    ywebgroup                                                         |
  |                                                                      |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                   |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                |
  +----------------------------------------------------------------------+

Figure 6-23. Adding resources to the xwebgroup RG (1 of 2)


Notes:
Selecting the resource group
Here's the Configure HACMP Resource Groups menu in the standard configuration
path; it is found under the standard configuration path's top-level menu.
Select Change/Show Resources for a Resource Group (standard) to get
started. When the Select a Resource Group popup appears, select the resource
group you want to work with and press Enter.


Adding resources to the xwebgroup RG (2 of 2)


  Change/Show All Resources and Attributes for a Resource Group

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
    Resource Group Name                                 xwebgroup
    Participating Node Names (Default Node Priority)    usa uk
    Startup Behavior                                    Online On First Avail>
    Fallover Behavior                                   Fallover To Next Prio>
    Fallback Behavior                                   Fallback To Higher Pr>

    Service IP Labels/Addresses                         [xweb]         +
    Application Servers                                 [xwebserver]   +
    Volume Groups                                       [xwebvg]       +
    Use forced varyon of volume groups, if necessary    false          +
    Filesystems (empty is ALL for VGs specified)        []             +

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Repeat the previous two steps for every Resource Group to be configured; in a
mutual takeover cluster, in our case, for ywebgroup.

Figure 6-24. Adding resources to the xwebgroup RG (2 of 2)


Notes:
Filling out the Change/Show All Resources and Attributes for a Resource Group menu
This is the screen for showing/changing resources in a resource group within the
standard configuration path. There really aren't a lot of choices to be made: xweb is
the service IP label we created earlier, and xwebserver is the application server that
we just defined. xwebvg is a shared volume group containing the filesystems needed by
the xwebserver application. We could specify the list of filesystems in the Filesystems
field, but the default is to mount/unmount all filesystems in the volume group. Not only
is this what we want, it is also very practical because it is easier to maintain over
time: you don't have to keep updating the resource group as you add filesystems to the
volume group.
Remember to press Enter to actually add the resources to the resource group.

Using the Extended Path to configure resources in the Resource Group



Although the Extended Path hasn't been covered in detail, resources in a Resource
Group can also be configured through that path, and it offers many more options.
Make note of this: you may want to try it in the lab, or need it when configuring
your own cluster at home.

Mutual takeover configuration


At this point you would repeat the step to define all the resource groups and associated
resources for all the applications. If creating a mutual takeover configuration, you would
specify the resource groups and associated resources for the application that will run on
the other node.


Synchronize and test the changes


  Initialization and Standard Configuration

  Move cursor to desired item and press Enter.

    Configuration Assistants
    Configure an HACMP Cluster and Nodes
    Configure Resources to Make Highly Available
    Configure HACMP Resource Groups
    Verify and Synchronize HACMP Configuration
    HACMP Cluster Test Tool
    Display HACMP Configuration

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-25. Synchronize and test the changes


Notes:
Using the standard configuration to synchronize and test
After you've defined or changed the cluster's topology or resources (or both), you
need to:
- Verify and synchronize your changes
- Test your configuration

Verify and synchronize


These menu choices act immediately in the Standard Configuration. Their actions can
be customized in the Extended Configuration menus, which we will not cover here.
The verification process collects AIX configuration information from each cluster node
and uses this information, the cluster's current configuration (if there is one), and the
proposed configuration to verify that the proposed configuration (and the change it
represents, if this is not the first synchronization) is valid. It is possible to override

verification errors, but only if you are using the Extended Configuration path. Deciding
to do so must be approached with the greatest of care, because it is very unusual for a
verification error to occur that can be safely overridden.
Also, remember the earlier discussion about synchronization: any HACMP configuration
changes made on any other cluster node will be lost if you complete a synchronization
on this cluster node.
Log files are created to show progress and problems. Check /var/adm/hacmp/clverify
for the logs. These log files have been vastly improved over the years, with more
details on the commands being run during verification, to help in determining the
problems encountered.
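For example, after a failed verification you might start by looking at the
newest logs in that directory; the exact file names under it vary by release,
so treat this as a sketch:

  # Find and read the most recent verification logs:
  ls -lt /var/adm/hacmp/clverify | head
  more /var/adm/hacmp/clverify/clverify.log   # log file name is illustrative
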

Testing your cluster


You must test your newly configured cluster for proper functioning. You also must test
your cluster on a regular basis to ensure that it will continue to function properly.
Finally, you must test your cluster after every change to the environment, whether
directly related to the cluster or not.
It is highly recommended that you create a comprehensive test plan, prior to configuring
your cluster, to be used during the test phase. The test plan should be made up of a
list of test procedures. Each test procedure should include (but not be limited to) the
following:
- Description of what the procedure is testing (for example, node crash)
- Description of the expected results of the test (for example, application will fall
  over to node b)
- Description of the method by which the test will be conducted (for example, node a
  will be powered off)
- Space for comments on test results

HACMP cluster test tool


This test facility is disruptive to the cluster, so you want to run it when the cluster
is not in production use; application downtime is required.
The Standard Configuration automated test procedure performs four sets of tests in the
following order:
1. General topology tests
2. Resource group tests on non-concurrent resource groups
3. Resource group tests on concurrent resource groups
4. Catastrophic failure test


The Cluster Test Tool discovers information about the cluster configuration, and
randomly selects cluster components, such as nodes and networks, to be used in the
testing. See the Administration Guide, Chapter 7, for more details.


What do you have at this point?


You have a cluster configured as follows:
  - Two nodes defined
  - One network defined containing the boot and service addresses
  - Application Server objects defined containing the start and stop scripts
    for all the applications to be made highly available
  - Volume groups defined to contain the data for the applications
  - Resource Groups defined that dictate the application ownership priority
    and contain the service labels, application servers, and volume groups
    for the applications
  - All this has been synchronized

But to make this cluster complete, it needs:
  - A non-IP network
  - Persistent IP addresses

In addition, a snapshot of your work would be prudent.

Now extend the configuration.

Figure 6-26. What do we have at this point?


Notes:
It's a start
We have accomplished a large portion of the cluster configuration. The nodes, IP
addresses (service and boot), networks, application servers, volume groups, and
resource groups have been configured, and this configuration has been synchronized
across the cluster nodes. Some level of testing could be performed at this point;
you can wait until after we do the rest of the configuration to test everything, or
break it up as we have it here.

What's left?
Recall the strong recommendation to include at least one non-IP network in your
cluster? Well, we haven't done that yet. And what about access to the cluster nodes
using a reliable non-service, non-boot IP address? We can accomplish that with a
persistent address. Finally, it is always a good idea to create backups after producing


this much good work. It is advisable to create both a snapshot of the cluster
configuration and a mksysb of the systems.


Extending the configuration


  Extended Configuration

  Move cursor to desired item and press Enter.

    Discover HACMP-related Information from Configured Nodes
    Extended Topology Configuration
    Extended Resource Configuration
    Extended Cluster Service Settings
    Extended Event Configuration
    Extended Performance Tuning Parameters Configuration
    Security and Users Configuration
    Snapshot Configuration
    Export Definition File for Online Planning Worksheets
    Import Cluster Configuration from Online Planning Worksheets File
    Extended Verification and Synchronization
    HACMP Cluster Test Tool

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-27. Extending the configuration


Notes:
Reasons to use extended path
Here's the top-level extended configuration path menu. We need to pop over to this
path in order to perform some steps that cannot be done using the Standard
Configuration, such as defining a non-IP network, adding a persistent label, and
saving the configuration data. We will explore these steps in this unit.
Extended Configuration is also required for configuring IPAT via Replacement and
Hardware Address Takeover, as well as for defining an SSA heartbeat network. These
are not discussed in this course: Appendix C covers IPAT via Replacement and Hardware
Address Takeover, and Appendix D covers SSA heartbeat networks.
Finally, other reasons for using the Extended Path are covered in the course HACMP
Administration II: Administration and Problem Determination (AU61).


Extended topology configuration menu


  Extended Topology Configuration

  Move cursor to desired item and press Enter.

    Configure an HACMP Cluster
    Configure HACMP Nodes
    Configure HACMP Sites
    Configure HACMP Networks
    Configure HACMP Communication Interfaces/Devices
    Configure HACMP Persistent Node IP Label/Address
    Configure HACMP Global Networks
    Configure HACMP Network Modules
    Configure Topology Services and Group Services
    Show HACMP Topology

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-28. Extended topology configuration menu


Notes:
Getting to the non-IP network configuration menus
Non-IP networks are elements of the cluster's topology, so we're in the topology
section of the extended configuration path's menu hierarchy.
A non-IP network is defined by specifying the network's end-points. These end-points
are called communication devices, so we have to head down into the Communication
Interfaces/Devices part of the extended topology screens.


Communication interfaces and devices


  Configure HACMP Communication Interfaces/Devices

  Move cursor to desired item and press Enter.

    Add Communication Interfaces/Devices
    Change/Show Communication Interfaces/Devices
    Remove Communication Interfaces/Devices
    Update HACMP Communication Interface with Operating System Settings

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-29. Communication interfaces and devices


Notes:
The communication interfaces and devices menu
This is the communication and devices part of the extended configuration path. We will
select the Add Communication Interfaces/Devices option.


Defining a non-IP network (1 of 3)


  Configure HACMP Communication Interfaces/Devices

  Move cursor to desired item and press Enter.

    Add Communication Interfaces/Devices
    Change/Show Communication Interfaces/Devices
    Remove Communication Interfaces/Devices
    Update HACMP Communication Interface with Operating System Settings

  +--------------------------------------------------------------------------+
  |                            Select a category                             |
  |                                                                          |
  |  Move cursor to desired item and press Enter.                            |
  |                                                                          |
  |    Add Discovered Communication Interface and Devices                    |
  |    Add Predefined Communication Interfaces and Devices                   |
  |                                                                          |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                       |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                    |
  +--------------------------------------------------------------------------+

Figure 6-30. Defining a non-IP network (1 of 3)


Notes:
Deciding which Add to choose
The first question we encounter is whether we want to add discovered or pre-defined
communication interfaces and devices. The automatic discovery that was done when we
added the cluster nodes earlier would have found the rs232/hdisk devices, so we pick
the Discovered option.


Defining a non-IP network (2 of 3)


  Configure HACMP Communication Interfaces/Devices

  Move cursor to desired item and press Enter.

    Add Communication Interfaces/Devices
    Change/Show Communication Interfaces/Devices
    Remove Communication Interfaces/Devices
    Update HACMP Communication Interface with Operating System Settings

  +--------------------------------------------------------------------------+
  |                            Select a category                             |
  |                                                                          |
  |  Move cursor to desired item and press Enter.                            |
  |                                                                          |
  |    # Discovery last performed: (Feb 12 18:20)                            |
  |    Communication Interfaces                                              |
  |    Communication Devices                                                 |
  |                                                                          |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                       |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                    |
  +--------------------------------------------------------------------------+

Figure 6-31. Defining a non-IP network (2 of 3)


Notes:
Is it an interface or a device?
Now we need to indicate whether we are adding a communication interface or a
communication device. Non-IP networks use communication devices as end-points
(/dev/tty, for example), so select Communication Devices to continue.


Defining a non-IP network (3 of 3)


Press Enter, and HACMP defines a new non-IP network with these communication
devices. Choose the ttys for a serial network too, if possible.

  Configure HACMP Communication Interfaces/Devices

  Move cursor to desired item and press Enter.

    Add Communication Interfaces/Devices
    Change/Show Communication Interfaces/Devices
    Remove Communication Interfaces/Devices
    Update HACMP Communication Interface with Operating System Settings

  +--------------------------------------------------------------------------+
  |    Select Point-to-Point Pair of Discovered Communication Devices to Add |
  |                                                                          |
  |  Move cursor to desired item and press F7.                               |
  |  ONE OR MORE items can be selected.                                      |
  |  Press Enter AFTER making all selections.                                |
  |                                                                          |
  |      # Node    Device         Pvid                                       |
  |  >     usa     hdisk5         000b4a7cd10c73d78                          |
  |        uk      hdisk5         000b4a7cd10c73d78                          |
  |  >     usa     /dev/tty1                                                 |
  |        uk      /dev/tty1                                                 |
  |        usa     /dev/tmssa1                                               |
  |        uk      /dev/tmssa2                                               |
  |                                                                          |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                       |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                    |
  +--------------------------------------------------------------------------+

Figure 6-32. Defining a non-IP network (3 of 3)


Notes:
We're now presented with a list of the discovered communication devices.
You can choose to add either an rs232 network (using the /dev/tty entries) or a diskhb
network (using the hdisk entries). If you're interested, we cover SSA in Appendix D.

rs232 networks
The steps to follow to create and test the rs232 network:
a. /dev/tty1 on usa is connected to /dev/tty1 on uk using a fully wired rs232
   null-modem cable (don't risk a potentially catastrophic partitioned cluster by
   failing to configure a non-IP network or by using cheap cables). Select these two
   devices, and press Enter to define the network.
b. Before you use this smit screen to define the non-IP network, make sure that you
   verify that the link between the two nodes is actually working.
c. For our example, the non-IP rs232 network connecting usa to uk can be tested as
   follows:

   i.   Issue the command stty < /dev/tty1 on one node. The command should hang.
   ii.  Issue the command stty < /dev/tty1 on the other node. The command should
        immediately report the tty's status, and the command that was hung on the
        first node should also immediately report its tty's status.
   iii. These commands should not be run while HACMP is using the tty.
   iv.  If you get the behavior described above (especially the hang in the first
        step that recovers in the second step), then the ports are probably connected
        together properly (check the HACMP log files when the cluster is up to be
        sure). If you get any other behavior, then you are probably using the wrong
        cable, or the rs232 cable isn't connected the way that you think it is.
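Collected as commands, the test from the steps above looks like this (run it
only while HACMP is not using the tty):

  # On usa (this should hang until the other end answers):
  stty < /dev/tty1
  # On uk (both commands should now print the tty settings):
  stty < /dev/tty1
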

diskhb networks
The steps to follow to configure and test a Heartbeat on Disk network:
a. Make sure you choose a pair of entries (such as /dev/hdisk5 shown in the figure),
   one for each of two nodes. Note that it is actually the PVIDs that must match,
   since this is the same disk.
b. You can test the connection using the command /usr/sbin/rsct/bin/dhb_read as
   follows:
   - On Node A, enter dhb_read -p hdisk5 -r
   - On Node B, enter dhb_read -p hdisk5 -t
   - You should then see on both nodes: Link operating normally
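Again as a compact transcript (device name as in the example above):

  # On node A (receive side):
  /usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
  # On node B (transmit side):
  /usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
  # Both nodes should report: Link operating normally
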

Handling more than two nodes


In a cluster with more than two nodes, the serial network must form a loop. For
example, in a three-node cluster, the RS232 cables might run from:
- Serial port 0 on node A to serial port 1 on node B, then from
- Serial port 0 on node B to serial port 1 on node C, then from
- Serial port 0 on node C to serial port 1 on node A
Such a configuration would require the definition of three serial networks, because
each serial network can only connect two nodes.
Heartbeat on disk is not supported across more than two nodes unless a concurrent
resource group (startup policy Online On All Available Nodes) is used. This requires
the implementation of Multi-node Disk Heartbeat, which is beyond the scope of this
class. The feature is discussed in more detail in HACMP System Administration II
(AU61).


Defining persistent node IP labels (1 of 3)


  Configure HACMP Persistent Node IP Label/Addresses

  Move cursor to desired item and press Enter.

    Add a Persistent Node IP Label/Address
    Change / Show a Persistent Node IP Label/Address
    Remove a Persistent Node IP Label/Address

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-33. Defining persistent node IP labels (1 of 3)


Notes:
Benefits and risks of using persistent IP labels
Defining a persistent node IP label on each cluster node allows the cluster
administrators to contact specific cluster nodes (or write scripts that access specific
cluster nodes) without needing to worry about whether the service IP address is
currently available or which node it is associated with.
The (slight) risk associated with persistent node IP labels is that users might start
using them to access applications within the cluster. You should discourage this
practice, as the application might move to another node. Instead, users should be
encouraged to use the IP address associated with the application (that is, the service
IP label that you configure into the application's resource group). Also, be careful if
you decide to put the persistent address on the same subnet as the service address for
an application that might be hosted: this could cause some application traffic to use
the persistent address/interface, causing unpredictable behavior.
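Once synchronized, a persistent label simply appears as an extra IP alias on
one of the node's interfaces, so an ordinary AIX command shows it; a quick
check might be (the grep pattern assumes Ethernet interfaces named enX):

  # The persistent alias (for example, usaadm) shows up alongside the boot
  # address on the interface:
  netstat -in | grep en
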


Defining persistent node IP labels (2 of 3)

  Configure HACMP Persistent Node IP Label/Addresses

  Move cursor to desired item and press Enter.

    Add a Persistent Node IP Label/Address
    Change / Show a Persistent Node IP Label/Address
    Remove a Persistent Node IP Label/Address

  +-------------------------------------------------------------------------+
  |                              Select a Node                              |
  |                                                                         |
  |  Move cursor to desired item and press Enter.                           |
  |                                                                         |
  |    usa                                                                  |
  |    uk                                                                   |
  |                                                                         |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                      |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                   |
  +-------------------------------------------------------------------------+

Figure 6-34. Defining persistent node IP labels (2 of 3)


Notes:
First, you select a node
Selecting the Add a Persistent Node IP Label/Address choice displays this prompt for
the node we'd like to define the address on.
One persistent address is supported per network per node. Each node can have a
persistent address (or addresses) defined, but it isn't required.


Defining persistent node IP labels (3 of 3)


Press Enter, and then repeat for the uk persistent IP label.

  Add a Persistent Node IP Label/Address

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                        [Entry Fields]
  * Node Name                           usa
  * Network Name                        [net_ether_01]  +
  * Node IP Label/Address               [usaadm]        +

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-35. Defining persistent node IP labels (3 of 3)


Notes:
Filling out the Add a Persistent Node IP Label/Address menu
When you're on this screen, select the appropriate IP network in the Network Name
field and the IP Label/Address that you want to use from the pick lists.
Repeat these persistent-label menus to choose a persistent label for the other nodes.
Press Enter to finish the operation.


Synchronize
smitty hacmp -> Extended Configuration ->
  Extended Verification and Synchronization

  Extended Verification and Synchronization

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                      [Entry Fields]
  * Verify, Synchronize or Both                       [Both]      +
  * Automatically correct errors found during         [No]        +
    verification?
  * Force synchronization if verification fails?      [No]        +
  * Verify changes only?                              [No]        +
  * Logging                                           [Standard]  +

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-36. Synchronize


Notes:
The Extended Verification and Synchronization menu
This time, the extended configuration path's HACMP Verification and Synchronization
screen was chosen. The extended path version presents a customization menu (shown
above), which the standard path does not:
Verify, Synchronize, or Both - This option is useful to verify a change without
synchronizing it (you might want to make sure that what you are doing makes sense
without committing to actually using the changes yet). Synchronizing without verifying
is almost certainly a foolish idea except in the most exotic of circumstances.
Automatically correct errors found during verification? - This option is discussed in
the unit on problem determination. This feature can fix certain errors that clverify
detects. By default, it is turned off. This option only displays if cluster services
are not started.


Force synchronization if verification fails? - This is almost always a very bad idea.
Make sure that you really and truly must set this option to Yes before doing so.
Verify changes only? - Setting this option to Yes causes the verification to focus on
the aspects of the configuration that have changed since the last synchronization. As
a result, the verification runs slightly faster. This might be useful during the early
to middle stages of cluster configuration; it seems rather risky once the cluster is
in production.
Logging - You can increase the amount of logging related to this verification and
synchronization by setting this option to Verbose. This can be quite useful if you are
having trouble figuring out what is going wrong with a failed verification.


Save configuration: snapshot


  Snapshot Configuration

  Move cursor to desired item and press Enter.

    Create a Snapshot of the Cluster Configuration
    Change/Show a Snapshot of the Cluster Configuration
    Remove a Snapshot of the Cluster Configuration
    Restore the Cluster Configuration From a Snapshot
    Configure a Custom Snapshot Method
    Convert an Existing Snapshot for Online Planning Worksheets

  Create a Snapshot of the Cluster Configuration

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                            [Entry Fields]
  * Cluster Snapshot Name                   []    /
    Custom-Defined Snapshot Methods         []    +
    Save Cluster Log Files in snapshot      No    +
  * Cluster Snapshot Description            []

Figure 6-37. Save configuration: snapshot


Notes:
Saving the cluster configuration
You can save the cluster configuration to a snapshot file or to an XML file, and the
cluster can be restored from either one. The XML file can also be used with the Online
Planning Worksheets, and potentially with other applications. This visual looks at the
snapshot method; the next visual looks at the XML method.

Creating a snapshot
smit hacmp -> Extended Configuration -> Snapshot Configuration
A snapshot captures the HACMP ODM files, which allows you to recover the cluster
definitions. There is also an info file, which is discussed further in the AU61
course, HACMP Administration II: Administration and Troubleshooting.
If necessary, the Snapshot Configuration menu also offers an option to restore
(apply) a snapshot.
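By default, the files a snapshot produces land in a well-known directory, so
you can confirm the result from the command line (the snapshot name here is
just an example):

  # Snapshots are written under /usr/es/sbin/cluster/snapshots by default
  # (the SNAPSHOTPATH environment variable can change this):
  ls -l /usr/es/sbin/cluster/snapshots
  # before_prod.odm    - the saved HACMP ODM classes (the configuration)
  # before_prod.info   - additional cluster information
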

Save configuration: xml file


smitty hacmp -> Extended Configuration ->
  Export Definition File for Online Planning Worksheets

  Export Definition File for Online Planning Worksheets

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                    [Entry Fields]
  * File Name                       [/var/hacmp/log/cluster.haw]  /
    Cluster Notes                   []

Alternatively, convert an existing snapshot from the Snapshot Configuration
menu:

  Snapshot Configuration

  Move cursor to desired item and press Enter.

    Create a Snapshot of the Cluster Configuration
    Change/Show a Snapshot of the Cluster Configuration
    Remove a Snapshot of the Cluster Configuration
    Restore the Cluster Configuration From a Snapshot
    Configure a Custom Snapshot Method
    Convert an Existing Snapshot for Online Planning Worksheets

Figure 6-38. Save configuration: xml file


Notes:
Creating the xml file
Using Extended Configuration, you can save the cluster configuration directly to an
XML file via the menu Export Definition File for Online Planning Worksheets, or from
a snapshot via the Snapshot Configuration menu Convert an Existing Snapshot for
Online Planning Worksheets.
Once created, you can use the Online Planning Worksheets to get an updated view of
the configuration, to change the configuration, or both. The XML file can potentially
be used from other applications, or manually, to make and display configuration
information. This will be explored in the lab exercise for this course. For the
moment, in case you want to know it, the command to apply an XML file is
/usr/es/sbin/cluster/utilities/cl_opsconfig
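A sketch of that invocation, using the file name from the export screen above
(the single-argument form shown here is an assumption; check your release's
documentation for the exact options):

  # Apply a cluster definition file created for Online Planning Worksheets:
  /usr/es/sbin/cluster/utilities/cl_opsconfig /var/hacmp/log/cluster.haw
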


Two-node cluster configuration assistant


  Two-Node Cluster Configuration Assistant

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                [Entry Fields]
  * Communication Path to Takeover Node         [ukboot1]
  * Application Server Name                     [xwebserver]
  * Application Server Start Script             [/mydir/xweb_start]
  * Application Server Stop Script              [/mydir/xweb_stop]
  * Service IP Label                            [xweb]

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-39. Two-node cluster configuration assistant


Notes:
The two-node cluster configuration assistant smit menu
If you have a simple two-node, hot-standby cluster to configure, the Two-Node Cluster
Configuration Assistant might be the answer. Here is the menu.
If your network is set up correctly and you have configured a shared enhanced
concurrent mode volume group, then HACMP uses this menu to build a complete two-node
cluster, including topology, resources, a resource group, and a non-IP network using
heartbeat over disk.
Synchronization is also done, so you are ready to start cluster services on both nodes.
The example in the visual is run from the usa node. This makes usa the home node
(highest priority) in the resource group that is created. You will have defined the
boot addresses on both usa and uk and created any shared volume groups on both nodes.


System-generated names, based on the application server name supplied, will be
created for the cluster, resource group, and application server.


What does the two-node assistant give you?


  # /usr/es/sbin/cluster/utilities/cltopinfo
  Cluster Name: xwebserver_cluster
  Cluster Connection Authentication Mode: Standard
  Cluster Message Authentication Mode: None
  Cluster Message Encryption: None
  Use Persistent Labels for Communication: No
  There are 2 node(s) and 2 network(s) defined
  NODE usa:
      Network net_ether_01
          usaboot1          192.168.15.29
          usaboot2          192.168.16.29
      Network net_diskhb_01
          usa_hdisk5_01     /dev/hdisk5
  NODE uk:
      Network net_ether_01
          ukboot1           192.168.15.31
          ukboot2           192.168.16.31
      Network net_diskhb_01
          uk_hdisk5_01      /dev/hdisk5

  Resource Group xwebserver_group
      Startup Policy        Online On Home Node Only
      Fallover Policy       Fallover To Next Priority Node in List
      Fallback Policy       Never Fallback
      Participating Nodes   usa uk
      Service IP Label      xweb

Figure 6-40. What does the two-node assistant give you?


Notes:
Seeing what was done
One utility that displays what was done is the cltopinfo command, which displays the
cluster's topology. Notice that each node's IP labels on the Ethernet adapters have
been defined on the net_ether_01 HACMP network. The non-IP diskhb network was also
configured, and appears with communication devices (/dev/hdisk5) on each of the two
nodes. Notice which policies you get automatically configured when using this approach.
Another utility is cldisp. This command shows what is configured from the application
point of view.
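Both utilities live in the cluster utilities directory, so a quick look from
either node might be (see the visual above for cltopinfo's full output):

  /usr/es/sbin/cluster/utilities/cltopinfo   # topology-centric view
  /usr/es/sbin/cluster/utilities/cldisp      # application-centric view
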


Points to observe
- The Two-Node Configuration Assistant did everything: created topology objects
  (including a non-IP heartbeat-over-disk network when it saw an enhanced concurrent
  volume group), created the resource group, and verified and synchronized the cluster.
- The Two-Node Configuration Assistant assigns names, so you will have to decide if
  you like them.
- The assistant also takes for HACMP all network adapters found. You might have to
  remove the ones for interfaces that you don't want HACMP to have.
- Only one application and two nodes are supported.
- You need to pre-configure the shared volume group. If it is Enhanced Concurrent
  Mode, then a non-IP heartbeat-over-disk network is configured; otherwise, you are
  on your own to configure a non-IP network.
- The Fallback policy is set to Never Fallback.
- No persistent address or rs232 non-IP network is defined.


Where are we in the implementation?


✓ Plan for network, storage, and application
    - Eliminate single points of failure
✓ Define and configure the AIX environment
    - Storage (adapters, LVM volume group, filesystem)
    - Networks (IP interfaces, /etc/hosts, non-IP)
    - Application start and stop scripts
✓ Install the HACMP filesets and reboot
✓ Configure the HACMP environment
    - Topology: cluster, node names, HACMP IP and non-IP networks
    - Resources, resource group, attributes:
        Resources: Application Server, service label
        Resource group: identify name, nodes, policies
        Attributes: Application Server, service label, VG, filesystem
    - Synchronize
  Start Cluster Services
    - Test configuration
    - Save configuration

Figure 6-41. Where are we in the implementation?


Notes:
Cluster configuration is implemented
Wow! All is done except for starting Cluster Services.


Starting Cluster Services (1 of 4)


  HACMP for AIX

  Move cursor to desired item and press Enter.

    Initialization and Standard Configuration
    Extended Configuration
    System Management (C-SPOC)
    Problem Determination Tools
    Cluster Simulator

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-42. Starting Cluster Services (1 of 4)


Notes:
How to start HACMP Cluster Services
Starting Cluster Services involves a trip to the top-level HACMP menu because we
need to go down into the System Management (C-SPOC) part of the tree. C-SPOC will
be covered in more detail in the next unit.
It might be worth pointing out that if you use the Web-based smit for HACMP fileset,
then there is a navigation menu that allows you to skip from one menu path to another
one without having to go back to the top.
After a few times, you will probably learn to use the command smit clstart or smitty
clstart to bypass this menu and the next two menus.


Starting Cluster Services (2 of 4)


  System Management (C-SPOC)

  Move cursor to desired item and press Enter.

    Manage HACMP Services
    HACMP Communication Interface Management
    HACMP Resource Group and Application Management
    HACMP Log Viewing and Management
    HACMP File Collection Management
    HACMP Security and Users Management
    HACMP Logical Volume Management
    HACMP Concurrent Logical Volume Management
    HACMP Physical Volume Management
    Open a SMIT Session on a Node

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-43. Starting Cluster Services (2 of 4)


Notes:
The C-SPOC menu
Choose Manage HACMP Services next.


Starting Cluster Services (3 of 4)


  Manage HACMP Services

  Move cursor to desired item and press Enter.

    Start Cluster Services
    Stop Cluster Services
    Show Cluster Services

  F1=Help     F2=Refresh     F3=Cancel     F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-44. Starting Cluster Services (3 of 4)


Notes:
The Manage HACMP Services menu
We're almost there...


Starting Cluster Services (4 of 4)


  # smit clstart

  Start Cluster Services

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                      [Entry Fields]
  * Start now, on system restart or both              now            +
    Start Cluster Services on these nodes             [usa,uk]       +
  * Manage Resource Groups                            Automatically  +
    BROADCAST message at startup?                     true           +
    Startup Cluster Information Daemon?               true           +
    Ignore verification errors?                       false          +
    Automatically correct errors found during         Interactively  +
    cluster start?

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Figure 6-45. Starting Cluster Services (4 of 4)


Notes:
Startup choices
There are a few choices to make. For the moment, we will just recommend the defaults,
except selecting both nodes and turning on the Cluster Information Daemon. The other
options are discussed in the next unit in more detail.
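After pressing Enter, a reasonable way to confirm that the cluster came up is
to check the cluster subsystem group and watch the main event log (subsystem
names can vary slightly by release; this is a sketch):

  # The cluster manager and (if requested) clinfo should be active:
  lssrc -g cluster
  # Watch cluster event processing as the resource groups come online:
  tail -f /tmp/hacmp.out
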

Remember the fast path


Notice the smit clstart fastpath. This is often much faster than working your way
through the menu tree.


Removing a cluster
Use Extended Topology Configuration:

  Configure an HACMP Cluster

  Move cursor to desired item and press Enter.

    Add/Change/Show an HACMP Cluster
    Remove an HACMP Cluster
    Reset Cluster Tunables

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Then empty (but do not delete) the rhosts file:

  # > /usr/es/sbin/cluster/etc/rhosts

Figure 6-46. Removing a cluster


Notes:
Starting over
If you have to start over, you can:
- Stop cluster services on all nodes.
- Use Extended Configuration, as shown above, to remove the cluster (on all nodes).
- Remove the entries (but not the file) from /usr/es/sbin/cluster/etc/rhosts (on all
  nodes).
If you really want to start over, then you can:
- installp -u cluster
- rm -r /usr/es/* (be very careful here)
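Put together, and repeated on every node after stopping cluster services, the
sequence from the lists above looks like this (treat it as a sketch, and note
the warning on the final command):

  # Softer reset: empty the rhosts file; do not delete it:
  > /usr/es/sbin/cluster/etc/rhosts
  # Full start-over only (removes HACMP entirely):
  installp -u cluster          # uninstall the cluster.* filesets
  rm -r /usr/es/*              # be very careful here
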


We're there!
We've configured a two-node cluster with multiple resource groups, including
all the steps marked with a ✓ in the checklist:
  - Each resource group has a different home (primary) node
  - Each resource group falls back to its home node on recovery

This is called a two-node mutual takeover cluster.

(Diagram: the two nodes, usa and uk, each hosting one resource group and
backing up the other.)

Each resource group is also configured to use IPAT via IP aliasing.

This particular style of cluster (mutual takeover with IPAT) is, by far, the
most common style of HACMP cluster.

Figure 6-47. We're there!


Notes:
Mutual takeover completed
We've finished configuring a two-node HACMP cluster with two resource groups
operating in a mutual takeover configuration. The term mutual takeover derives from
the fact that each node is the home node for one resource group and provides fallover
(that is, takeover) services to the other node.
This is, without a doubt, the most common style of HACMP cluster, as it provides a
reasonably economical way to protect two separate applications. It also keeps the
folks with budgetary responsibility happier, because each of the systems is clearly
doing something useful all the time. (Many would argue that a system that is just
acting as a standby for a critical application is doing something useful, but it is a
lot easier to make the case if both systems are actually running an important
application at all times.)
The cluster even has the mandatory non-IP network!


Checkpoint
1. True or False?
   It is possible to configure a recommended simple two-node cluster environment
   using just the standard configuration path.

2. In which of the top-level HACMP menu choices is the menu for starting and
   stopping cluster nodes?
   a. Initialization and Standard Configuration
   b. Extended Configuration
   c. System Management (C-SPOC)
   d. Problem Determination Tools

3. In which of the top-level HACMP menu choices is the menu for defining a non-IP
   heartbeat network?
   a. Initialization and Standard Configuration
   b. Extended Configuration
   c. System Management (C-SPOC)
   d. Problem Determination Tools

4. True or False?
   It is possible to configure HACMP faster by having someone help you on the other
   node.

5. True or False?
   You must specify exactly which filesystems you want mounted when you put resources
   into a resource group.


Figure 6-48. Checkpoint


Notes:


(The slide for this break shows a photograph of Lake Louise in the Canadian
Rocky Mountains; see the notes below.)

Figure 6-49. Break time!


Notes:
Some notes from the developer :-)
This is a photograph of Lake Louise in the Canadian Rocky Mountains (located about a
90-minute drive west of Calgary). If you are ever there, make sure that you rent one
of the canoes in the photograph and go for a paddle out on the lake. There are also a
number of quite spectacular and not particularly strenuous hikes that start from near
the point where this photograph was taken. The hike that goes up to the tea house is
definitely worth an afternoon (you can pay money to go up on horseback if you don't
feel like walking for free).
Also, can you read this? ;-)
Aoccdrnig to a rscheearchr at an Elingsh uinervtisy, it deosn't mttaer in waht oredr the
ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer is at the rghit
pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is
bcuseae we do not raed ervey lteter by it slef but the wrod as a wlohe.

Copyright IBM Corp. 1998, 2008

Unit 6. Initial cluster configuration

Course materials may not be reproduced in whole or in part


without the prior written permission of IBM.

6-65

Student Notebook

Checkpoint solutions
1.

True or False?
It is possible to configure a recommended simple two-node cluster environment
using just the standard configuration path.
You cant create the non-IP network from the standard path.

2.

In which of the top-level HACMP menu choices is the menu for starting and
stopping cluster nodes?
a.
b.
c.
d.

3.

In which of the top-level HACMP menu choices is the menu for defining a nonIP heartbeat network?
a.
b.
c.
d.

4.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

True or False?
It is possible to configure HACMP faster by having someone help you on the other
node.

5.

True or False?
You must specify exactly which filesystems you want mounted when you put
resources into a resource group.
Copyright IBM Corporation 2008

Figure 6-50. Checkpoint solutions


Notes:


Unit 7. Basic HACMP administration


What this unit is about
This unit describes basic administration tasks for HACMP for AIX.

What you should be able to do


After completing this unit, you should be able to:
Use the SMIT Standard and Extended menus to make topology
and resource group changes
Describe the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
Start and stop Cluster Services
Perform resource group move operations
Discuss the benefits and capabilities of DARE
Use the snapshot facility to return to a previous cluster
configuration or to roll back changes
Configure and use WebSMIT

How you will check your progress


Accountability:
Checkpoint
Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1:
Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
http://www-03.ibm.com/systems/p/library/hacmp_docs.html
HACMP manuals

Copyright IBM Corp. 1998, 2008

Unit 7. Basic HACMP administration

Course materials may not be reproduced in whole or in part


without the prior written permission of IBM.

7-1

Student Notebook

Unit objectives
After completing this unit, you should be able to:
Use the SMIT Standard and Extended menus to make
topology and resource group changes
Describe the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
Start and stop Cluster Services
Perform resource group move operations
Discuss the benefits and capabilities of DARE
Use the snapshot facility to return to a previous cluster
configuration or to roll back changes
Configure and use WebSMIT


Figure 7-1. Unit objectives


Notes:


7.1 Topology and resource group management


Topology and resource group management


After completing this topic, you should be able to:
Add a resource group and resources to an existing cluster
Remove a resource group from a cluster
Add a new node to an existing cluster
Remove a node from an existing cluster
Configure a non-IP heartbeat network


Figure 7-2. Topology and resource group management


Notes:


Yet another resource group


- The users have asked that a third application be added to the cluster
- The application uses very little CPU or memory, and there's money in the
  budget for more disk drives in the disk enclosure
- Minimizing downtime is particularly important for this application
- The resource group is called zwebgroup

(Diagram: the two-node cluster, usa and uk.)

Figure 7-3. Yet another resource group


Notes:
Introduction
We're now going to embark on a series of hypothetical scenarios to illustrate a
number of routine cluster administration tasks. Some of these scenarios are more
realistic than others.
Add a resource group


In this first scenario, we're going to add a resource group to the cluster. This new
resource group is called zwebgroup. This resource group's application has been
reported to use very little in the way of system resources, and there is a strong
desire to avoid unnecessary zwebgroup outages.


Adding a third resource group


We'll change the startup policy to "Online On First Available Node" so that
the resource group comes up when usa is started while uk is down.

  Add a Resource Group

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  * Resource Group Name                                 [zwebgroup]
  * Participating Node Names (Default Node Priority)    [usa uk]               +
    Startup Policy                                      Online On First Avail> +
    Fallover Policy                                     Fallover To Next Prio> +
    Fallback Policy                                     Never Fallback         +

  (Avoid startup delay by starting on the first available node;
   avoid a fallback outage by never falling back.)

  F1=Help     F2=Refresh     F3=Cancel     F4=List
  F5=Reset    F6=Command     F7=Edit      F8=Image
  F9=Shell    F10=Exit       Enter=Do

Does the order in which the node names are specified matter?

Figure 7-4. Adding the third resource group


Notes:
Add a resource group
We use the Extended path. The group is configured to start up on whichever node is
available first, and never to fall back when a node rejoins the cluster. The
combination of these two parameters should go a long way towards minimizing this
resource group's downtime.
If you're familiar with the older terminology of cascading and rotating resource
groups, this resource group's policies make it essentially identical to a
cascading-without-fallback resource group.


Adding a third service IP label (1 of 2)


The extended configuration path screen for adding a service IP label provides
more options. Choose those that mimic the standard path.

  Configure HACMP Service IP Labels/Addresses

  Move cursor to desired item and press Enter.

    Add a Service IP Label/Address
    Change/Show a Service IP Label/Address
    Remove Service IP Label(s)/Address(es)

  +--------------------------------------------------------------------------+
  |                 Select a Service IP Label/Address type                   |
  |                                                                          |
  |  Move cursor to desired item and press Enter.                            |
  |                                                                          |
  |    Configurable on Multiple Nodes                                        |
  |    Bound to a Single Node                                                |
  |                                                                          |
  |  F1=Help     F2=Refresh     F3=Cancel     F8=Image                       |
  |  F10=Exit    Enter=Do       /=Find        n=Find Next                    |
  +--------------------------------------------------------------------------+

Figure 7-5. Adding a third service IP label (1 of 2)


Notes:
Introduction
We need to define a service IP label for the zwebgroup resource group.

IPAT via IP aliasing required


Creating a third resource group on a cluster with one network and two nodes requires
the use of IPAT via IP aliasing. A cluster that uses only IPAT via IP replacement is,
for all practical purposes, restricted to one resource group with a service IP label
per node per IP network. Because our cluster has only one IP network, it would not be
able to support three different resource groups with service IP labels if it used
IPAT via replacement.

Resource group limits


HACMP 5.2 and above supports a maximum of 64 resource groups and 256 IP
addresses known to HACMP (for example, service and interface IP addresses). There
are no other limits on the number of resource groups with service labels that can be
configured on an IPAT via IP aliasing network (although, eventually, you run out of CPU
power or memory or something for all the applications associated with these resource
groups).

Service IP label/address type


Bound to a Single Node is used with IBM's General Parallel File System (GPFS).

Network name
The next step is to associate this Service Label with one of the HACMP networks. This
is not shown in the visual.


Adding a third service IP label (2 of 2)


The Alternate Hardware Address ... field is used for hardware address takeover
(HWAT). You can find more information on HWAT configuration in Appendix C.
Add a Service IP Label/Address configurable on Multiple Nodes (extended)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
* IP Label/Address                                       [zweb]
* Network Name                                           net_ether_01
  Alternate HW Address to accompany IP Label/Address     []

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-6. Adding a third service IP label (2 of 2)

Notes:
Adding a service IP label
The visual shows the entry fields for this panel.


Adding a third application server


The Add Application Server screen is identical in both configuration
paths.
Add Application Server

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                     [Entry Fields]
* Server Name                        [zwebserver]
* Start Script                       [/usr/local/scripts/startzweb]
* Stop Script                        [/usr/local/scripts/stopzweb]

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-7. Adding a third application server

Notes:
Add an application server
You must give it a name and specify a start and stop script.
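
The start and stop scripts must exist, be executable, and behave the same way on every node that can acquire the resource group. A minimal start script sketch, assuming a hypothetical zweb daemon (the script path matches the screen above; the daemon path is an example only):

#!/bin/ksh
# /usr/local/scripts/startzweb -- sketch only
# Start the zweb application in the background so the script returns
# promptly; HACMP event processing waits for this script to exit.
/usr/local/zweb/bin/zwebd &   # hypothetical daemon path
exit 0

Keep both scripts quick and idempotent; if you need HACMP to notice a failed start, configure an application monitor (application monitors are mentioned later in this unit).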


Adding resources to the third RG (1 of 2)


The extended path's SMIT screen for updating the contents
of a resource group is much more complicated!
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                  [Entry Fields]
  Resource Group Name                                  zwebgroup
  Participating Node Names (Default Node Priority)     usa uk

  Startup Behavior                                     Online On First Avail>
  Fallover Behavior                                    Fallover To Next Prio>
  Fallback Behavior                                    Never Fallback
  Fallback Timer Policy (empty is immediate)           []                   +

  Service IP Labels/Addresses                          [zweb]               +
  Application Servers                                  [zwebserver]         +

  Volume Groups                                        [zwebvg]             +
  Use forced varyon of volume groups, if necessary     false                +
  Automatically Import Volume Groups                   false                +
  Filesystems (empty is ALL for VGs specified)         []                   +
  Filesystems Consistency Check                        fsck                 +
[MORE...17]

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-8. Adding resources to the third RG (1 of 2)

Notes:
Adding resources to a resource group (extended path)
This is the first of two screens to show the Extended Path menu for adding attributes.
Unlike the Standard path, it contains a listing of all the possible attributes.


Adding resources to the third RG (2 of 2)


Even more choices!
Fortunately, only a handful tend to be used in any given context.
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[MORE...17]                                            [Entry Fields]
  Filesystems Consistency Check                        fsck                 +
  Filesystems Recovery Method                          sequential           +
  Filesystems mounted before IP configured             false                +
  Filesystems/Directories to Export (NFSv2/3)          []                   +
  Filesystems/Directories to Export (NFSv4)            []                   +
  Stable Storage Path (NFSv4)                          []                   +
  Filesystems/Directories to NFS Mount                 []                   +
  Network For NFS Mount                                []                   +

  Tape Resources                                       []                   +
  Raw Disk PVIDs                                       []                   +

  Fast Connect Services                                []                   +
  Communication Links                                  []                   +

  Primary Workload Manager Class                       []                   +
  Secondary Workload Manager Class                     []                   +

  Miscellaneous Data                                   []
  WPAR Name                                            []
[BOTTOM]

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-9. Adding resources to the third RG (2 of 2)

Notes:
Adding resources to a resource group (extended path)
More choices.
New choices for HACMP 5.4.1 include the NFS V4 entries and the WPAR name.


Synchronize your changes


The extended configuration path provides verification and
synchronization options.
HACMP Verification and Synchronization

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Verify, Synchronize or Both                           [Both]              +
* Automatically correct errors found during             [No]                +
  verification?
* Force synchronization if verification fails?          [No]                +
* Verify changes only?                                  [No]                +
* Logging                                               [Standard]          +

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Remember to verify that you actually implemented what was planned by executing your test plan.
Figure 7-10. Synchronize your changes

Notes:
Extended path synchronization
This is the Extended path screen to show the Synchronization menu options that are
not shown in the Standard path.


Expanding the cluster


The Users "find" money in the budget and decide to "invest" it
to improve the availability of the xweb and yweb
applications
Nobody seems to be too worried about the zweb application

(Diagram: the existing two-node cluster, usa and uk, with a new node, india, being added)

Figure 7-11. Expanding the cluster

Notes:
Expanding the cluster
In this scenario, we'll look at adding a node to a cluster.


Adding a new cluster node


1. Physically connect the new node
   - Connect to IP networks
   - Connect to the shared storage subsystem
   - Connect to non-IP networks to create a ring encompassing all nodes
2. Configure the shared volume groups on the new node
3. Add the new node's IP labels to /etc/hosts on one existing node
4. Copy /etc/hosts from this node to all other nodes (see the sketch in the notes below)
5. Install AIX, HACMP and application software on the new node:
   - Install patches required to bring the new node up to the same level as the existing cluster nodes
   - Reboot the new node (always reboot after installing or patching HACMP)
6. Add the new node to the existing cluster (from one of the existing nodes)
7. Add non-IP networks for the new node
8. Synchronize your changes
9. Start Cluster Services on the new node
10. Add the new node to the appropriate resource groups
11. Synchronize your changes again
12. Run through your (updated) test plan
Figure 7-12. Adding a new cluster node

Notes:
Adding a new cluster node
Adding a node to an existing cluster isn't all that difficult from the HACMP perspective (as we see shortly). The hard work involves integrating the node into the cluster from an AIX and from an application perspective.
We'll be discussing the HACMP part of this work.
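
A minimal sketch of steps 3 and 4, run as root on an existing node (the india addresses and the use of rcp are examples only; substitute your planning worksheet values and your preferred copy mechanism):

usa # cat >> /etc/hosts <<'EOF'
192.168.15.31   indiaboot1
192.168.16.31   indiaboot2
EOF
usa # for node in uk india; do rcp /etc/hosts ${node}:/etc/hosts; done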


Add node: Standard path


Configure Nodes to an HACMP Cluster (standard)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                     [Entry Fields]
* Cluster Name                                       [ibmcluster]
  New Nodes (via selected communication paths)       [indiaboot1]        +
  Currently Configured Node(s)                       usa uk

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-13. Add node: Standard path

Notes:
Add node: Standard path
This operation, and any other SMIT HACMP operation, must be performed from an existing cluster node. The india node won't become an existing cluster node until we synchronize our changes in a few pages; so use an existing node until the cluster is synchronized.
Cluster Name
SMIT fills this field in based on the previous value. Leave it as is, or change it. The name that you assign to your cluster is pretty much arbitrary. It appears in log files and the output of commands.
New Nodes
The new nodes are specified by giving the IP label or IP address of one currently active network interface on each node. Use F4 to generate a list, or type one resolvable IP label or IP address for each node. If there is more than one node, the entries should be space-separated.
This path will be taken to initiate communication with the node. The command launched by this SMIT screen contacts the clcomd daemon at each address and asks the nodes to come together in a new cluster. Obviously, HACMP must already be installed on the new nodes.
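
Before pressing Enter, it can save time to confirm that the communication daemon is actually active on the new node (the subsystem name is clcomdES on HACMP 5.x; output columns abbreviated here):

india # lssrc -s clcomdES
Subsystem         Group            PID     Status
 clcomdES         clcomdES         ...     active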


Add node: Standard path (in progress)


Here is the output shortly after pressing Enter:
                              COMMAND STATUS

Command: OK            stdout: yes            stderr: no

Before command completion, additional instructions may appear below.

[TOP]
Communication path indiaboot1 discovered a new node. Hostname is
india. Adding it to the configuration with Nodename india.
Discovering IP Network Connectivity
Retrieving data from available cluster nodes.  This could take a few
minutes....

F1=Help      F2=Refresh     F3=Cancel     F6=Command
F8=Image     F9=Shell       F10=Exit      /=Find
n=Find Next
Figure 7-14. Add node: Standard path (in progress)

Notes:
Add node: Standard path (in progress)
When the Enter key is pressed on the previous SMIT screen, HACMP's automatic discovery process begins. When the nodes have been identified, the discovery process retrieves the network and disk configuration information from each of the cluster nodes and builds a description of the new cluster. The network configuration information is used to create the initial IP network configuration.
The remainder of the output from this SMIT operation isn't particularly interesting (unless something goes wrong), so we'll just ignore it for now. You will get an opportunity to add a node in the lab exercises.
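
Once the operation completes, a quick way to review the topology that was actually added is the cltopinfo utility, which lists the cluster name, the nodes, and the networks and interfaces HACMP now knows about (run it from any existing node; output not shown here):

usa # cltopinfo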


Add node: Extended path


Add a Node to the HACMP Cluster

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                      [Entry Fields]
* Node Name                           [india]
  Communication Path to Node          [indiaboot1]

Note: In addition to this, the adapters must be added. See the student
notes below for details.

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-15. Add node: Extended path

Notes:
Add node: Extended path
The Extended Path is essentially the same as the Standard Path in this case.
Be aware that at this point you've only configured the node definition. You must also configure the adapter definitions (boot adapter definitions). To do this, use the Extended path: Extended Topology, Communications Interfaces/Devices.


Define the non-IP rs232 networks (1 of 2)


You have added (and tested) a fully wired rs232 null-modem cable between india's tty1 and usa's tty2; so now, define that as a non-IP rs232 network.
Configure HACMP Communication Interfaces/Devices

  +--------------------------------------------------------------------+
     Select Point-to-Point Pair of Discovered Communication Devices to Add

    Move cursor to desired item and press F7.
    ONE OR MORE items can be selected.
    Press Enter AFTER making all selections.

      # Node     Device    Device Path    Pvid
        usa      tty0      /dev/tty0
        uk       tty0      /dev/tty0
        india    tty0      /dev/tty0
        usa      tty1      /dev/tty1
        uk       tty1      /dev/tty1
    >   india    tty1      /dev/tty1
    >   usa      tty2      /dev/tty2
        uk       tty2      /dev/tty2
        india    tty2      /dev/tty2

    F1=Help      F2=Refresh     F3=Cancel     F7=Select
    F8=Image     F10=Exit       Enter=Do      /=Find
    n=Find Next
  +--------------------------------------------------------------------+
Figure 7-16. Define the non-IP rs232 networks (1 of 2)

Notes:
Introduction
This visual, and the next one, show how to add two more non-IP networks to our cluster. Make sure that the topology of the non-IP networks that you describe to HACMP corresponds to the actual topology of the physical rs232 cables.
In the following notes, we discuss why we need to add two more non-IP rs232 links. Note that if you are using heartbeat on disk, the same two steps are required. There must be a unique disk shared between india and usa, and between india and uk, to define the two heartbeat-on-disk networks (one between india and usa, the other between india and uk). You can't use an hdisk on one node for a heartbeat-on-disk network with two different nodes.


Minimum non-IP network configuration: ring


At minimum, the non-IP networks in a cluster with more than two nodes should form a ring encompassing all the nodes; that is, each node is connected to its two directly adjacent neighbors. A ring provides redundancy (two non-IP heartbeat paths for every node) and is simple to implement.

Mesh configuration
The most redundant configuration would be a mesh, with each node connected to every other node. However, if you have more than three nodes, this means extra complexity and can mean a lot of extra hardware, depending on which type of non-IP network you are using.
Note: For a three-node cluster, a ring and a mesh are the same.

Star configuration not recommended


While the HACMP for AIX Planning and Installation Guide discusses using a star, ring
or mesh configuration for non-IP networks, a star is not a good choice. A star means
that the center node is a SPOF for the non-IP networks; losing the center node means
that all the other nodes lose non-IP network connectivity.

Three-node example
In the example in the visual, we already have a non-IP network between usa and uk; so
we need to configure one between india and usa (on this page) and another one
between uk and india (on the next page).
If, for example, we left out the uk and india non-IP network, then the loss of the usa
node would leave the uk and india nodes without a non-IP path between them.

Five-node example
In even larger clusters, it is still necessary to configure only a ring of non-IP networks; in general, a ring of N nodes requires exactly N point-to-point networks. For example, if the nodes are A, B, C, D, and E, then five non-IP networks would be the minimum requirement: A to B, B to C, C to D, D to E, and E to A being one possibility. Of course, other possibilities exist, such as A to B, B to D, D to C, C to E, and E to A.


Define the non-IP rs232 networks (2 of 2)


You have also added (and tested) a fully wired rs232 null-modem cable between uk's tty2 and india's tty2; so now, define that as a non-IP rs232 network.
Configure HACMP Communication Interfaces/Devices

  +--------------------------------------------------------------------+
     Select Point-to-Point Pair of Discovered Communication Devices to Add

    Move cursor to desired item and press F7.
    ONE OR MORE items can be selected.
    Press Enter AFTER making all selections.

      # Node     Device    Device Path    Pvid
        usa      tty0      /dev/tty0
        uk       tty0      /dev/tty0
        india    tty0      /dev/tty0
        usa      tty1      /dev/tty1
        uk       tty1      /dev/tty1
        india    tty1      /dev/tty1
        usa      tty2      /dev/tty2
    >   uk       tty2      /dev/tty2
    >   india    tty2      /dev/tty2

    F1=Help      F2=Refresh     F3=Cancel     F7=Select
    F8=Image     F10=Exit       Enter=Do      /=Find
    n=Find Next
  +--------------------------------------------------------------------+

Figure 7-17. Define the non-IP rs232 networks (2 of 2)

Notes:
Define non-IP networks
Make sure that the topology of the non-IP networks that you describe to HACMP
corresponds to the actual topology of the physical rs232 cables.


Synchronize your changes


HACMP Verification and Synchronization

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Verify, Synchronize or Both                           [Both]              +
* Automatically correct errors found during             [No]                +
  verification?
* Force synchronization if verification fails?          [No]                +
* Verify changes only?                                  [No]                +
* Logging                                               [Standard]          +

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do
Figure 7-18. Synchronize your changes

Notes:
Synchronize
At this point, all this configuration exists only on the node where the data was entered. To populate the other nodes' HACMP ODMs, you must synchronize. When we've synchronized our changes, the india node is an official member of the cluster.
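
One quick way to confirm that the definition reached a peer node's ODM after synchronization (a sketch; HACMPnode is the ODM class that holds node definitions, and each node contributes several stanzas, hence the sort -u):

uk # odmget HACMPnode | grep 'name =' | sort -u
        name = "india"
        name = "uk"
        name = "usa"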


Start Cluster Services on the new node


# smitty clstart
Start Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                    [Entry Fields]
* Start now, on system restart or both              now                  +
  Start Cluster Services on these nodes             [india]              +
  Manage Resource Groups                            Automatically        +
  BROADCAST message at startup?                     true                 +
  Startup Cluster Information Daemon?               false                +
  Ignore verification errors?                       false                +
  Automatically correct errors found during         Interactively        +
  cluster start?

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do
Figure 7-19. Start Cluster Services on the new node

Notes:
Start Cluster Services on the new node
Now that india is an official member of the cluster, we can start Cluster Services on the
node.
This and all future SMIT HACMP operations can be performed from any of the three
cluster nodes.


Add the node to a resource group


Change/Show a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                      [Entry Fields]
  Resource Group Name                                 ywebgroup
  New Resource Group Name                             []
  Participating Node Names (Default Node Priority)    [uk usa india]
  Startup Policy                                      Online On Home Node On>  +
  Fallover Policy                                     Fallover To Next Prior>  +
  Fallback Policy                                     Fallback To Higher Pri>  +

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Remember to synchronize and verify that the non-IP network is active


Figure 7-20. Add the node to a resource group

Notes:
Add the node to a resource group
Remember that adding the new india node to the HACMP configuration is the easy
part. You would not perform any of the SMIT HACMP operations shown so far in this
scenario until you were CERTAIN that the india node was actually capable of running
the application.
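
After the second synchronization, clRGinfo is a quick way to confirm that india now appears in the group's node list (a sketch; the output layout is abbreviated, and the states shown assume the group is currently online on uk):

usa # clRGinfo ywebgroup
------------------------------------------------------
Group Name          Group State          Node
------------------------------------------------------
ywebgroup           ONLINE               uk
                    OFFLINE              usa
                    OFFLINE              india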


Shrinking the cluster


The Auditors are not impressed with the latest investment and
force the removal of the india node from the cluster so that it
can be transferred to a new project (some users suspect that
political considerations might have been involved)

(Diagram: nodes usa and uk remain; the india node is marked for removal from the cluster)

Figure 7-21. Shrinking the cluster

Notes:
Removing a node
In this scenario, we take a look at how to remove a node from an HACMP cluster.


Removing a cluster node


1. Using any cluster node, move resource groups to other nodes
2. Remove the departing node from all resource groups and synchronize your changes
   - Ensure that each resource group is left with at least two nodes
3. Stop Cluster Services on the departing node
4. Using one of the cluster nodes that is not being removed:
   - Remove the departing node from the cluster's topology
     (Remove a Node from the HACMP Cluster, Extended Configuration)
   - Synchronize
   - When the synchronization is completed successfully, the departing node is no longer a member of the cluster
5. Remove the departed node's IP addresses from /usr/es/sbin/cluster/etc/rhosts on the remaining nodes (see the sketch in the notes below)
   - Prevents the departed node from interfering with HACMP on the remaining nodes
6. Physically disconnect the (correct) rs232 cables, if necessary
7. Disconnect the departing node from the shared storage subsystem
   - Strongly recommended because it makes it impossible for the departed node to corrupt the cluster's shared storage
8. Run through your (updated) test plan

Figure 7-22. Removing a cluster node

Notes:
Removing a node
Although removing a node from a cluster is another fairly involved process, some of the
work has little, if anything, to do with HACMP.
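
A minimal sketch of step 5, run on each remaining node (the address is an example only; remove every address the departed node had in the file, and check the file's ownership and permissions afterwards):

usa # grep -v '192.168.15.31' /usr/es/sbin/cluster/etc/rhosts > /tmp/rhosts.new
usa # mv /tmp/rhosts.new /usr/es/sbin/cluster/etc/rhosts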


Removing an application
The zwebserver application has been causing problems and a
decision has been made to move it out of the cluster

(Diagram: the two-node cluster, usa and uk, with the zweb application marked for removal)

Figure 7-23. Removing an application

Notes:
Removing an application
In this scenario, we remove a resource group.
It looks like this imaginary organization could do with a bit more long range planning.


Removing a resource group (1 of 3)


1. Take the resource group offline
2. OPTIONAL: Take a cluster snapshot
3. Using any cluster node and either configuration path:
   - Remove the departing resource group using the Remove a Resource Group SMIT screen
   - Remove any service IP labels previously used by the departing resource group using the Remove Service IP Labels/Addresses SMIT screen
   - Synchronize your changes
4. Clean out anything that is no longer needed by the cluster:
   - Export any shared volume groups previously used by the application
   - Consider deleting service IP labels from the /etc/hosts file
   - Uninstall the application
5. Run through your (updated) test plan

Figure 7-24. Removing a resource group (1 of 3)

Notes:
Introduction
The procedure for removing a resource group is actually fairly straightforward.

Cluster snapshot
HACMP supports something called a cluster snapshot. This would be an excellent time
to take a cluster snapshot, just in case we decide to go back to the old configuration.
We will discuss snapshots later in this unit.

Remove unused resources


Do not underestimate the importance of removing unused resources, such as service IP labels and volume groups. They will only clutter up the cluster's configuration and, in the case of shared volume groups, tie up physical resources that could presumably be better used elsewhere.
A cluster should not have any useless resources or components, because anything that simplifies the cluster tends to improve availability by reducing the likelihood of human error.


Removing a resource group (2 of 3)


HACMP Extended Resource Group Configuration

Move cursor to desired item and press Enter.

  Add a Resource Group
  Change/Show a Resource Group
  Change/Show Resources and Attributes for a Resource Group
  Remove a Resource Group
  Show All Resources by Node or Resource Group

  +--------------------------------------------------------------------+
                       Select a Resource Group

    Move cursor to desired item and press Enter.

      xwebgroup
      ywebgroup
      zwebgroup

    F1=Help      F2=Refresh     F3=Cancel     F8=Image
    F10=Exit     Enter=Do       /=Find        n=Find Next
  +--------------------------------------------------------------------+

Figure 7-25. Removing a resource group (2 of 3)

Notes:
Removing a resource group
Make sure that you delete the correct resource group.


Removing a resource group (3 of 3)


HACMP Extended Resource Group Configuration

Move cursor to desired item and press Enter.

  Add a Resource Group
  Change/Show a Resource Group
  Change/Show Resources and Attributes for a Resource Group
  Remove a Resource Group
  Show All Resources by Node or Resource Group

  +--------------------------------------------------------------------+
                            ARE YOU SURE?

    Continuing may delete information you may want
    to keep.  This is your last chance to stop
    before continuing.
         Press Enter to continue.
         Press Cancel to return to the application.

    F1=Help      F2=Refresh     F3=Cancel
    F8=Image     F10=Exit       Enter=Do
  +--------------------------------------------------------------------+

Press Enter (if you are sure). Be sure to synchronize and run
through validation testing.
Figure 7-26. Removing a resource group (3 of 3)

Notes:
Are you sure?
Pause to make sure you know what you are doing. If you aren't sure, it's easy to go back and step through the process again.


Let's review: Topic 1


1. True or False?
You cannot add a node while HACMP is running.

2. You have decided to add a third node to your existing two-node HACMP cluster. What very important step follows adding the node definition to the cluster configuration (whether through the Standard or Extended Path)?
a. Take a well deserved break, bragging to co-workers about
your success.
b. Install HACMP software.
c. Configure a non-IP network.
d. Start Cluster Services on the new node.
e. Add a resource group for the new node.

3. Why would you choose to use the Extended Path to add resources to a resource group versus the Standard Path?
__________________________________________________
Figure 7-27. Let's review: Topic 1

Notes:


7.2 Cluster single point of control


Cluster single point of control


After completing this topic, you should be able to:
Discuss the need for change management when using
HACMP
Describe the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
Start and stop cluster services
Perform resource group move operations

Figure 7-28. Cluster single point of control

Notes:


Administering a high availability cluster


Administering an HA cluster is different from administering a
stand-alone server:
Changes made to one node must be reflected on the other node
Poorly considered changes can have far-reaching implications
Beware the law of unintended consequences

Aspects of the cluster's configuration could be quite subtle and yet critical
Scheduling downtime to install and test changes can be challenging
Saying oops while sitting at a cluster console could get you fired!

Figure 7-29. Administering a high availability cluster

Notes:
Introduction
You must develop good change management procedures for managing an HACMP cluster. As you will see, C-SPOC utilities can be used to help, but they do not do the job by themselves. Having well-documented and tested procedures to follow, as well as restricting who can make changes (for example, you should not have more than two or three persons with root privileges), minimizes loss of availability when making changes. The snapshot utility should be used before any change is made.


Recommendations
Implement and adhere to a change control/management
process
Wherever possible, use HACMP's C-SPOC facility to make
changes to the cluster (details to follow)
Document routine operational procedures in a step-by-step list
fashion (for example, shutdown, startup, increase size of a
filesystem)
Restrict access to the root password to trained High Availability
cluster administrators
Always take a snapshot (explained later) of your existing
configuration before making a change

Figure 7-30. Recommendations

Notes:
Some beginning recommendations
These recommendations are considered to be the minimum acceptable level of cluster
administration. There are additional measures and issues that should probably be
carefully considered (for example, problem escalation procedures should be
documented, and both hardware and software support contracts should either be kept
current or a procedure developed for authorizing the purchase of time and materials
support during off hours should an emergency arise).

Importance of change management


A real change control or management process requires a serious commitment on the
part of the entire organization:
- Every change must be carefully considered


As the cluster administrator, you should make yourself part of every change meeting that occurs on your HACMP systems.
Think about the implications of the change on the cluster configuration and function, keeping in mind the networking concepts we've discussed, as well as any changes to the application's data organization or start/stop procedures.
- The onus should be on the requester of the change to demonstrate that it is necessary, not on the cluster administrators to demonstrate that it is unwise.
- Management must support the process.
Defend cluster administrators against unreasonable requests or pressure.
Do not allow politics to affect a change's priority or schedule.
- Every change, even the minor ones, must follow the process.
No system, cluster, or database administrator can be allowed to sneak changes past the process.
The notion that a change might be permitted without following the process must be considered to be absurd.

Other recommendations
Ensure that you request sufficient time during the maintenance window for testing the cluster. If this isn't possible, advise all parties of the risks of running without testing. Update any documentation as soon as possible after the change is made to reflect the new configuration or function of the cluster, if anything changes.


Cluster single point of control


C-SPOC provides facilities for performing common cluster-wide administration tasks from any node within the cluster.
Relies on the clcomdES socket-based subsystem for secure node-to-node communications
C-SPOC operations might fail if any target node is down at the time of execution or a selected resource is not available
Any change to a shared VGDA is synchronized automatically if C-SPOC is used to change a shared LVM component
C-SPOC uses a script parser called the command execution language

(Diagram: an initiating node distributing a C-SPOC operation to four target nodes)
Figure 7-31. Cluster single point of control

Notes:
C-SPOC command execution
C-SPOC commands first execute on the initiating node. Then the HACMP command
cl_rsh is used to propagate the command (or a similar command) to the target nodes.

Secure distributed communications between the nodes


The clcomdES subsystem provides secure communications between nodes. This
daemon provides secure communication between cluster nodes for all cluster utilities,
such as verification and synchronization and system management (C-SPOC). The
clcomd daemon is started automatically at boot time by the init process.

More details
All the nodes in the resource group must be available or the C-SPOC command will be
performed partially across the cluster, only on the active nodes. This can lead to
problems later when nodes are brought up and are out of sync with the other nodes in
the cluster.
As you saw in the LVM unit, LVM changes, if made through C-SPOC, can be
synchronized automatically (for enhanced concurrent mode volume groups, and then
only the LV information, not the filesystem information).

C-SPOC command line


C-SPOC commands can be executed from the command line (or through SMIT, of
course).
Error messages and warnings returned by the commands are based on the underlying
AIX-related commands.
Appendix C: HACMP for AIX 5L Commands in the HACMP for AIX Administration
Guide provides a list of all C-SPOC commands provided with the HACMP for AIX 5L
software.

Command execution language


C-SPOC commands are written as execution plans in the command execution language (CEL). Each plan contains constructs to handle one or more underlying AIX 5L tasks (a command, executable, or script) with a minimum of user input.
An execution plan becomes a C-SPOC command when the /usr/es/sbin/cluster/utilities/celpp utility converts it into a cluster-aware ksh script, meaning the script uses the C-SPOC distributed mechanism (the C-SPOC Execution Engine) to execute the underlying AIX 5L commands on cluster nodes to complete the defined tasks.
CEL is a programming language that enables you to integrate dsh's distributed functionality into each C-SPOC script the CEL preprocessor (celpp) generates. When you invoke a C-SPOC script from a single cluster node to perform an administrative task, the script is automatically executed on all nodes in the cluster. The language is described further in Appendix B of the HACMP for AIX Troubleshooting Guide.


The top-level C-SPOC menu


System Management (C-SPOC)

Move cursor to desired item and press Enter.

  Manage HACMP Services
  HACMP Communication Interface Management
  HACMP Resource Group and Application Management
  HACMP Log Viewing and Management
  HACMP File Collection Management
  HACMP Security and Users Management
  HACMP Logical Volume Management
  HACMP Concurrent Logical Volume Management
  HACMP Physical Volume Management

  Open a SMIT Session on a Node

F1=Help      F2=Refresh     F3=Cancel     F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-32. The top-level C-SPOC menu

Notes:
Top-level C-SPOC menu
The top-level C-SPOC menu is one of the four top-level HACMP menus.
C-SPOC scripts are used for Users, LVM, CLVM, and Physical Volume Management.
RGmove is used for Resource Group management.
The other functions are included here as a logical place to put these system
management facilities. We will look at Managing Cluster Services and the Logical
Volume Management tasks.
The fast path is smitty cl_admin.


Starting cluster services


# smit clstart
Start Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                    [Entry Fields]
* Start now, on system restart or both              now                  +
  Start Cluster Services on these nodes             [usa,uk]             +
* Manage Resource Groups                            Automatically        +
  BROADCAST message at startup?                     true                 +
  Startup Cluster Information Daemon?               true                 +
  Ignore verification errors?                       false                +
  Automatically correct errors found during         Interactively        +
  cluster start?

F1=Help      F2=Refresh     F3=Cancel     F4=List
F5=Reset     F6=Command     F7=Edit       F8=Image
F9=Shell     F10=Exit       Enter=Do

Figure 7-33. Starting cluster services

Notes:
Briefly, how did we get here?
The first choice in the C-SPOC menu is Manage HACMP Services. This option brings
up another menu containing three choices: Start Cluster Services, Stop Cluster
Services, and Show Cluster Services. This menu displays when we select Start
Cluster Services. Better yet, just use the fast path, smitty clstart.

Starting cluster services


We saw this in the previous unit. Now for the details.
You have the option to start Cluster Services at system boot time (adds entry to
/etc/inittab), only when running through this menu (just invokes cl_rc.cluster), or both.
Think carefully about starting Cluster Services at system boot time because this might
result in Resource Group movement, depending on your Fallback Policies.


You have a choice of any or all nodes in the cluster to start services. Use F4 to get a
pick list. If the field is left blank, services will be started on all nodes.
When Cluster Services is started, it wants to acquire resources in Resource Groups, if
so configured, and make applications available. Beginning with HACMP V5.4, the
function of managing resource groups can be deferred. The option to choose in that
case is Manually. To allow Cluster Services to acquire resources and make
applications available if so configured (pre-HACMP v5.4 behavior), choose the default,
Automatically.
You can broadcast a message that cluster services are being started.
You have the option to start the Client Information Daemon, clinfo, along with the start of
Cluster Services. This is usually a good idea as it allows you to use the clstat cluster
monitor utility.
Finally, there are options regarding verification. Before Cluster Services is started, a
verification is run to ensure that you are not starting a node with an inconsistent
configuration. You can choose to ignore verification errors and start anyway. This is not
something that you would do unless you are very aware of the reason for the
verification error, you understand the ramifications of starting with the error and you
must activate Cluster Services. An alternative that is safer would be to choose to
Interactively correct errors found during verification. Not all errors can be corrected,
but you have a better chance of getting cluster services activated in a clean
configuration with this option.
The options that you choose here are retained in the HACMP ODM and repopulated on
reentry.


Verifying that cluster services has started


You have a few options:

usa # clcheck_server grpsvcs; print $?
1
Note: An rc=1 means cluster services is active

usa # lssrc -ls clstrmgrES
Current state: ST_STABLE

usa # clstat -a
                clstat - HACMP Cluster Status Monitor
                -------------------------------------
Cluster: ibmcluster (1156578448)              Wed Aug 30 11:16:19 2006
        State: UP                             Nodes: 2
        SubState: STABLE

Node: usa                                     State: UP
   Interface: usaboot1 (2)                    Address: 192.168.15.29
                                              State: UP
   Interface: usaboot2 (2)                    Address: 192.168.16.29
                                              State: UP
   Interface: usa_hdisk5_01 (0)               Address: 0.0.0.0
                                              State: UP
   Interface: xweb (2)                        Address: 192.168.5.92
                                              State: UP
   Resource Group: xwebgroup                  State: On line

First three rules:
  1. patience
  2. patience
  3. patience

Also consider using the cldump command. This relies solely on SNMP to get the
current cluster status.
Figure 7-34. Verifying that cluster services has started

Notes:
The three rules
Patience is key with HACMP tasks. There are many things going on under the covers when you ask the Cluster Manager to do something. Getting the OK in SMIT does not mean that the task has been completely performed. It's just the beginning in many cases.
Did I mention patience?
The Cluster Manager daemon queues events. It doesn't forget (usually, anyway). So keep in mind that if you launch a task with the Cluster Manager, don't verify its status closely, and then attempt to give the process a boost by launching another task (such as following an rgmove with an offline), you have just queued the second task. When the Cluster Manager completes the first task, provided that it's in a state where it can continue processing, it will perform the second task. This might not be what you wanted.


It's easy to encourage patience when writing a course. The author is extremely impatient and rarely follows his own advice. That doesn't make it right! I have learned the value of patience the hard way, by not being patient and paying the price.

What to look at, what to look for


Documentation for HACMP V5.3 indicated that the clcheck_server utility was to be
used given that the Cluster Manager daemon was a long running process. This method
still works. Run it with grpsvcs as the only parameter and then look at the return code.
A return code of 1 indicates that the Cluster Manager is a member of a group services
group that implies Cluster Services are active.
Although you might find the output to be unreliable at times, the clstat utility is a good
mechanism to use. If youre not a fan of clstat, consider using cldump, that relies on
SNMP directly.
Another option is to use lssrc. This is to be used with caution. You must understand
what state is expected and then be patient, retrying the command to ensure that the
state changes are no longer occurring. A state of ST_STABLE is a tricky indication. It
might mean that Cluster Services are active or it might mean that Cluster Services was
forced down on this node. Pay close attention to the Forced down nodes list: portion of
the output of the lssrc -ls clstrmgrES. Know what state to expect.
Finally, although not shown (due to lack of space on the visual), another option is to use
WebSMIT. This is the solution for those of you who want to see a graphical
representation of cluster status. You will see more of that later in this unit.
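
If you script your start procedure, the patience rule can be captured in a small polling loop; a minimal ksh sketch (the state string comes from the lssrc output above; the ten-second interval is a judgment call, and remember that ST_STABLE can also be reported by a node whose services were forced down):

usa # until lssrc -ls clstrmgrES | grep -q ST_STABLE; do sleep 10; done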


Checking on what actually happened


Logs containing what was done as Cluster Services start:
  cluster.log
    - Log of events that have been processed; a good starting point
  hacmp.out (popular place to consult, usually a good idea)
    - Events write here (essentially the result of set -x in event scripts)
    - Very detailed, but contains everything event related
  hacmp.out formatting helps with navigation/understanding
    - Tagging of log lines includes resource group, script name, resource,
      invoked function, and in some cases elapsed time
Example:
+rg1:cl_activate_fs[278] ALLFS=All_filesystems
+rg1:cl_activate_fs[279] [[ '' == EMUL ]]
+rg1:cl_activate_fs[284] cl_RMupdate resource_acquiring All_filesystems cl_activate_fs
Reference string: Mon.Sep.17.14:45:54.CDT.2007.cl_activate_fs.All_filesystems.rg1.ref
+rg1:cl_activate_fs(2.980)[287] PS4_TIMER=true
+rg1:cl_activate_fs(2.980)[287] typeset PS4_TIMER
+rg1:cl_activate_fs(2.980):/fs01[290] PS4_LOOP=/fs01
+rg1:cl_activate_fs(2.980):/fs01[291] [[ sequential == parallel ]]
+rg1:cl_activate_fs(2.980):/fs01[305] [[ '' == EMUL ]]
+rg1:cl_activate_fs(2.980):/fs01[310] fs_mount /fs01 fsck rg1_activate_fs.tmp27018
+rg1:cl_activate_fs(2.980):/fs01[fs_mount+5] FS=/fs01
+rg1:cl_activate_fs(2.980):/fs01[fs_mount+5] typeset FS

Figure 7-35. Checking on what actually happened

Notes:
Base cluster logs
The cluster.log file is a good starting point to see what events have been run. You can also see errors and timestamps to help in navigating the hacmp.out log file. It can be said that looking at the hacmp.out file is as much art as it is science. The more comfortable you become with what you expect to see, the easier it will be to navigate. As you see, the format of the entries helps you to understand what is being done, on what resource, and how long it has been running.
More detailed log
You might also want to consult the clstrmgr.debug log. This is the Cluster Manager daemon log. It can be difficult to understand, because it contains very detailed internal processing, but error messages found here might be useful, as well as an indication of whether the Cluster Manager is busy doing something even when no event processing is occurring.
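
A sketch of the usual first moves when checking what actually happened (on a new HACMP 5.4.1 install the logs are in /var/hacmp/log; on older installs they are in /tmp, as noted on the next few pages):

usa # tail -f /var/hacmp/log/hacmp.out            (watch event processing live)
usa # grep -i error /var/hacmp/log/cluster.log    (quick scan for problems)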


Stopping cluster services


# smit clstop
Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                 [Entry Fields]
* Stop now, on system restart or both            now                      +
  Stop Cluster Services on these nodes           [usa]                    +
  BROADCAST cluster shutdown?                    true                     +
* Select an Action on Resource Groups            Bring Resource Groups>   +

  +--------------------------------------------------------------------+
                            Shutdown mode

    Move cursor to desired item and press Enter.

      Bring Resource Groups Offline
      Move Resource Groups
      Unmanage Resource Groups

    F1=Help      F2=Refresh     F3=Cancel
    F8=Image     F10=Exit       Enter=Do
    /=Find       n=Find Next
  +--------------------------------------------------------------------+

Figure 7-36. Stopping cluster services

Notes:
Briefly, how did we get here?
From the Manage HACMP Services C-SPOC menu, this menu displays when we choose Stop Cluster Services. You can use the fast path, smitty clstop.

Stopping cluster services
Remember that this is not stopping the Cluster Manager daemon. It runs all the time. Actually, when you stop Cluster Services, the Cluster Manager daemon dies gracefully and is respawned by the System Resource Controller.
You have the option to stop cluster services when you run through this menu, or to remove the option to start cluster services at system start (removes the entry from /etc/inittab), or both. Note that the system start option is a reversal of the setting made for system start when starting cluster services.
7-48 HACMP Implementation

Copyright IBM Corp. 1998, 2008


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.

V4.0
Student Notebook

Uempty

You have a choice of any or all nodes in the cluster to stop services. Use F4 to get a pick list. If the field is left blank, services will be stopped on all nodes.
You can broadcast a message that cluster services are being stopped.
Finally, the options regarding Resource Group management. Prior to HACMP V5.4, the options were graceful, takeover, and forced. Graceful meant to Bring Resource Groups Offline prior to stopping cluster services. Takeover meant to Move Resource Groups to other available nodes, if applicable, according to the current locations and Fallover policies of the Resource Groups. As you can see, these options map directly to the current options, and their functions are self-explanatory.
But what about forced down, you say? Prior to HACMP V5.4, forcing down Cluster Services was supported sometimes, in some scenarios, and resulted in an environment that was potentially unstable (that is, potentially unavailable). Forcing cluster services down when using Enhanced Concurrent Mode Volume Groups was not supported because Group Services and gsclvmd were brought down as part of the forced down operation. Group Services and gsclvmd are the components that maintain the volume group's VGDA/VGSA integrity across all nodes. With HACMP V5.4 and later, forcing down cluster services is supported by moving the resource groups to an Unmanaged state. In addition, the cluster manager and the RSCT infrastructure remain active, permitting this action with Enhanced Concurrent Mode Volume Groups; thus, the option in the menu shown, Unmanage Resource Groups. While in this state, the cluster manager remains in the ST_STABLE state. It doesn't die gracefully and respawn as stated earlier, and doesn't return to the ST_INIT state. This allows the Cluster Manager to participate in cluster activities and keep track of changes that occur in the cluster.
As with starting cluster services, the options that you choose here are retained in the HACMP ODM and repopulated on reentry.


Verifying that cluster services has stopped (1 of 2)


You have a few options (stopping without Unmanaged RGs):

usa # tail -2 hacmp.out.1
clexit.rc : Normal termination of clstrmgrES. Restart now.
0513-059 The clstrmgrES Subsystem has been started. Subsystem PID is 483466.

usa # lssrc -ls clstrmgrES
Current state: ST_INIT

uk # clstat -a
                clstat - HACMP Cluster Status Monitor
                -------------------------------------
Cluster: ibmcluster (1156578448)              Wed Aug 30 10:44:20 2006
        State: UP                             Nodes: 2
        SubState: STABLE

Node: usa                                     State: DOWN
   Interface: usaboot1 (2)                    Address: 192.168.15.29
                                              State: DOWN
   Interface: usaboot2 (2)                    Address: 192.168.16.29
                                              State: DOWN

usa # tail -1 clstrmgr.debug.1
Wed Aug 30 10:31:54 code is 0 - exhale our dying breath and count on the
good graces of SRC to reincarnate us!

Same three rules:
  1. patience
  2. patience
  3. patience
Figure 7-37. Verifying that cluster services has stopped (1 of 2)

Notes:
Stopping cluster services without going to unmanaged
This means you've chosen to stop cluster services with either the Bring Resource Groups Offline or the Move Resource Groups option. In other words, it's not a forced down.
As with starting cluster services, remember that patience is essential. Many tasks are performed behind the scenes when you ask the Cluster Manager to do something. Getting the OK in SMIT does not mean that the task has been completely performed. It's just the beginning in many cases.
Did I mention patience?


What to look at, what to look for
First, a comment on the log file locations. If this is a new HACMP 5.4.1 or later install, the logs will be in /var/hacmp/log. Otherwise, they will be in /tmp. In addition, it might be necessary to view the cycled log, that is, the one that ends in .1.
As stated previously, stopping Cluster Services results in the Cluster Manager daemon being respawned by the System Resource Controller. The surest way to verify that Cluster Services has stopped completely is the following message in hacmp.out, indicating that Cluster Services has stopped and the Cluster Manager daemon has been respawned:
clexit.rc: Normal termination of clstrmgrES. Restart now.
0513-059 The clstrmgrES Subsystem has been started. Subsystem PID is nnnnnn.
Although you might find the output to be unreliable at times, the clstat utility is a good mechanism to use. Note that it was run on another system, not the one where cluster services was stopped. If you're not a fan of clstat, consider using cldump, which relies on SNMP directly.
Another option is to use lssrc. This is to be used with caution. You must understand what state is expected and then be patient, retrying the command to ensure that state changes are no longer occurring. A state of ST_INIT is the indication that Cluster Services has stopped on this node. This is the resulting state from a respawn of the Cluster Manager daemon. As you will see in the next visual, stopping Cluster Services with Unmanaged Resource Groups leaves the Cluster Manager daemon in ST_STABLE. Know what state to expect.
Finally, although not shown (due to lack of space on the visual), another option is to use WebSMIT. This is the solution for those of you who want to see a graphical representation of cluster status. You will see more of that later in this unit.


Verifying that cluster services has stopped (2 of 2)


You have a few options (stopping with Unmanaged RGs):

usa # clRGinfo
--------------------------------------------------------------
Group Name                   Group State                  Node
--------------------------------------------------------------
xwebgroup                    UNMANAGED                    usa
                             UNMANAGED                    uk

usa # lssrc -ls clstrmgrES
Current state: ST_STABLE
Forced down node list: usa

uk # clstat -a
                clstat - HACMP Cluster Status Monitor
                -------------------------------------
Cluster: ibmcluster (1156578448)              Wed Aug 30 11:16:19 2006
        State: UP                             Nodes: 2
        SubState: STABLE

Node: usa                                     State: UP
   Interface: usaboot1 (2)                    Address: 192.168.15.29
                                              State: UP
   Interface: xweb (2)                        Address: 192.168.5.92
                                              State: UP
   Resource Group: xwebgroup                  State: Unmanaged

Ditto on the rules.
Figure 7-38. Verifying that cluster services has stopped (2 of 2)

Notes:
Stopping cluster services with unmanaged resource groups
This means you've chosen to force down cluster services.
One more time, remember that patience is essential. Did I mention that getting the OK in SMIT does not mean that the task has been completely performed? It's just the beginning in many cases.
Did I mention patience?

What to look at, what to look for


In the case of Unmanaged resource groups, stopping Cluster Services does not result
in the Cluster Manager daemon dying gracefully and being respawned by the System
Resource Controller. The Cluster Manager daemon stays up and should remain in the
ST_STABLE state. But using lssrc -ls clstrmgrES can be useful in determining which
nodes have been forced down, as it provides a list as shown on the visual.

Again, the clstat utility can be a good mechanism to use. Note that it was run on another
system, not the one where cluster services was stopped. Notice that the resource group
shows online. This is valid. It also shows the state as Unmanaged. You only stopped
Cluster Services, not the resources.
The quickest way to see that there are unmanaged resources is to use clRGinfo. Note
that it shows the state of the resource group as Unmanaged on both nodes. In fact, it
will show Unmanaged on any node where that resource group can be acquired, unless
the resource group's startup policy is online on all nodes. If the startup policy is online
on all nodes, it will show Unmanaged only on the node where Cluster Services was
stopped.
As in the previous slides on verifying the state, another option is to use WebSMIT. This
is the solution for those of you who want to see a graphical representation of cluster
status. You will see more of that later in this unit.

How do I get a resource group out of the unmanaged state?

You might be tempted to change the resource group to the offline state and move it to
another node. This doesn't work and is very dangerous, because it leaves the
application running on the original node while showing it as offline to the cluster
manager. Activating it on another node would be very bad, as both nodes would attempt
to access the storage and would have the IP address defined.
You can't move the resource group to another node; it's not online anywhere (according
to the cluster manager).
The best (and really only) option is to restart cluster services on the forced node,
specifying Automatically for the Manage Resource Groups option. Understand that
this will cause the application server start script to be run again, unless an Application
Monitor is configured for the application and it indicates that the application is currently
running. In the case where the Application Monitor detects the running application, the
application server start script is not invoked. A similar option is to start Cluster Services
on the forced node, but specify Manually for the Manage Resource Groups option.
Then use C-SPOC to bring the resource group online at your discretion. The same
warning applies about a respawn of the application server start script in this scenario.
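A minimal sketch of the recovery (node and group names are the course examples; the
fastpath is the usual one for starting cluster services):
   usa # smitty clstart          (set Manage Resource Groups to Automatically)
   usa # clRGinfo xwebgroup      (wait for the group to return to ONLINE)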


Managing shared LVM components

HACMP Logical Volume Management
(make non-Enhanced Concurrent Mode volume groups; manage volume groups in
home node or first available resource groups)

  Move cursor to desired item and press Enter.

    Shared Volume Groups
    Shared Logical Volumes
    Shared File Systems
    Synchronize Shared LVM Mirrors
    Synchronize a Shared Volume Group Definition

HACMP Concurrent Logical Volume Management
(make Enhanced Concurrent Mode volume groups; manage online on all nodes
volume groups)

  Move cursor to desired item and press Enter.

    Concurrent Volume Groups
    Concurrent Logical Volumes
    Synchronize Concurrent LVM Mirrors

Figure 7-39. Managing shared LVM components

Notes:
Introduction
This is the menu for using C-SPOC to perform LVM change management and
synchronization. As was mentioned in the LVM unit, you can make changes in AIX
directly and then synchronize, or you can make the changes using C-SPOC utilities, in
which case the synchronization is automatic.

C-SPOC simplifies the process

When you've configured the cluster's topology and added a resource group, you can
configure your shared disks using this part of the C-SPOC hierarchy (available directly
from the top-level C-SPOC SMIT menu). Generally, shared disk configuration and
maintenance is considerably easier and less prone to errors if you use C-SPOC for
this work.


How it works
When you create a shared volume group, you must rerun the discovery mechanism
(refer to the top-level menu in the extended configuration path) so that HACMP knows
about the volume group. You must then add the volume group to a resource group
before you can use C-SPOC to add shared logical volumes or filesystems.

Synchronization
Note that you only need to add the volume group to a resource group using SMIT from
one of the cluster nodes, and then you can start working with C-SPOC from the same
node. You do not need to synchronize the cluster between adding the volume group to a
resource group and working with it using C-SPOC unless you want to use C-SPOC
from some other node. Remember that the volume group is not really a part of the
resource group until you synchronize the addition of the volume group to the resource
group.

Concurrent versus non-concurrent

The C-SPOC menus shown are the two menus on the main C-SPOC menu for Logical
Volume Management. What's the difference? The Concurrent Logical Volume
Management menus are used for two things: first, to create enhanced concurrent
mode volume groups, and second, most importantly, to manage volume groups that
are in resource groups configured with Online on all nodes for their Startup
Policy. These are sometimes referred to as Concurrent Mode Resource Groups or, if
you've been around HACMP a long time, Mode 3 resource groups. You don't see any
options for adding filesystems to these volume groups. They are expected to be used in
true concurrent mode across all the nodes in the resource group. The HACMP Logical
Volume Management menus are for managing volume groups in the other resource
group types (Startup Policy is Online on home node or Online on first available). It is
supported and generally recommended to use enhanced concurrent mode volume
groups for these types of resource groups as well as for concurrent resource groups.


Creating a shared volume group

                     Create a Concurrent Volume Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Node Names                                        usa,uk
  PVID                                              00055207bbf6edab 0000>
  VOLUME GROUP name                                 [xwebvg]
  Physical partition SIZE in megabytes              64                     +
  Volume group MAJOR NUMBER                         [207]                  #
  Enhanced Concurrent Mode                          true                   +
  Enable Cross-Site LVM Mirroring Verification      false                  +

  Warning:
  Changing the volume group major number may result
  in the command being unable to execute
  successfully on a node that does not have the
  major number currently available. Please check
  for a commonly available major number on all nodes
  before changing this setting.

Figure 7-40. Creating a shared volume group

Notes:
Creating a shared volume group
You can use C-SPOC to create a volume group, but be aware that you must then add
the volume group name to a resource group and synchronize. This is one case of using
C-SPOC where synchronization is not automatic.
Before creating a shared volume group for the cluster using C-SPOC, check that:
- All disk devices are properly attached to the cluster nodes
- All disk devices are properly configured on all cluster nodes and the device is listed
as available on all nodes
- Disks have a PVID
(C-SPOC lists the disks by their PVIDs; this ensures that you are using the same
disk on all nodes, even if the hdisk names are not consistent across the nodes. A
quick way to confirm this is shown in the sketch below.)
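A minimal check sketch (the hdisk name and PVID are the course examples; run it on
every node that will share the volume group):
   usa # lspv | grep 00055207bbf6edab
   hdisk3          00055207bbf6edab          None
   uk # lspv | grep 00055207bbf6edab
   hdisk3          00055207bbf6edab          None
If the PVID does not appear on a node, read the disk's PVID into the ODM on that node
with chdev -l hdisk3 -a pv=yes.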
This menu was reached through the Concurrent Logical Volume Management option
on the main C-SPOC menu.

Discover, add VG to a resource group

                          Extended Configuration

Move cursor to desired item and press Enter.

  Discover HACMP-related Information from Configured Nodes
  Extended Topology Configuration
  Extended Resource Configuration       <-- add the VG to an RG so it can
  Extended Event Configuration              be used in the next steps
  Extended Cluster Service Settings
  Extended Performance Tuning Parameters Configuration
  Security and Users Configuration
  Snapshot Configuration
  Export Definition File for Online Planning Worksheets
  Import Cluster Configuration from Online Planning Worksheets File

  Extended Verification and Synchronization   <-- then verify and sync to
  HACMP Cluster Test Tool                         put it on all cluster nodes

Figure 7-41. Discover, add VG to resource group

Notes:
Discover and add VG to resource group
After creating a volume group, you must discover it so that the new volume group will be
available in pick lists for future actions, such as adding it to a resource group, and so
forth.
You must use the Extended Configuration menu for both of these actions.


Creating a shared file system (1 of 2)

First, create logical volumes for the filesystem and jfs2log. Remember to logform
the jfs2log logical volume. For a mirrored LV, change Number of copies to 2.

                        Add a Shared Logical Volume

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                              [Entry Fields]
  Resource Group Name                               xwebgroup
  VOLUME GROUP name                                 xwebvg
  Reference node                                    usa
* Number of LOGICAL PARTITIONS                      [200]
  PHYSICAL VOLUME names
  Logical volume NAME                               [xweblv]
  Logical volume TYPE                               [jfs2]
  POSITION on physical volume                       middle
  RANGE of physical volumes                         minimum
  MAXIMUM NUMBER of PHYSICAL VOLUMES                []
    to use for allocation
  Number of COPIES of each logical
    partition
[MORE...11]

The volume group must be in a resource group that is online; otherwise, it does not
display in the pop-up list.

Figure 7-42. Creating a shared file system (1 of 2)

Notes:
Creating a shared file system using C-SPOC
It is generally preferable to control the names of all of your logical volumes.
Consequently, it is generally best to explicitly create a logical volume for the file system.
If the volume group does not already have a JFS2 log, you must also explicitly create a
logical volume for the JFS2 log and format it with logform (unless you plan to use inline
logs, in which case the jfs2log won't be needed). The same applies if you are creating a
JFS filesystem.
The volume group to which you want to add the filesystem must be online: either
varyonvg the volume group manually, or bring it online by starting cluster services.
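If you create the log logical volume outside C-SPOC, the AIX-level steps look roughly
like this (a minimal sketch; the name xwebloglv is illustrative, and changes made this
way still need to be synchronized to the other nodes):
   usa # mklv -y xwebloglv -t jfs2log xwebvg 1
   usa # logform /dev/xwebloglv
   logform: destroy /dev/xwebloglv (y)?y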
However, C-SPOC enables you to add a journaled file system to either:
- A shared volume group (no previously defined cluster logical volume)
SMIT checks the list of nodes that can own the resource group that contains the
volume group, creates the logical volume (on an existing log logical volume if
7-58 HACMP Implementation

Copyright IBM Corp. 1998, 2008


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.

V4.0
Student Notebook

Uempty

present; otherwise, it creates a new log logical volume) and adds the file system to
the node where the volume group is varied on (whether it was varied on by the
C-SPOC utility or it was already online). All other nodes in the resource group run an
importvg -L for non-enhanced concurrent mode volume groups, or an imfs for
enhanced concurrent mode volume groups.
- A previously defined cluster logical volume (in a shared volume group)
SMIT checks the list of nodes that can own the resource group that contains the
volume group where the logical volume is located. It adds the file system to the node
where the volume group is varied on (whether it was varied on by the C-SPOC utility
or it was already online). All other nodes in the resource group run an importvg -L
for non-enhanced concurrent mode volume groups, or an imfs for enhanced
concurrent mode volume groups.


Creating a shared file system (2 of 2)

Then create the filesystem on the now "previously defined logical volume":

 Add an Enhanced Journaled File System on a Previously Defined Logical Volume

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Node Names                                        usa,uk
  LOGICAL VOLUME name                               xweblv                 +
* MOUNT POINT                                       [/xwebfs]
  PERMISSIONS                                       read/write             +
  Mount OPTIONS                                     []                     +
  Block Size (bytes)                                4096                   +
  Inline Log?                                       no                     +
  Inline Log size (MBytes)                          []                     #

Figure 7-43. Creating a shared file system (2 of 2)

Notes:
Creating a shared file system, step 2
When you've created the logical volume, create a file system on it.


LVM change management

Historically, lack of LVM change management has been a major cause of cluster
failure during fallover. There are several methods available to ensure LVM changes
are correctly synced across the cluster:
- Manual updates to each node to synchronize the ODM records
- Lazy update
- C-SPOC synchronization of ODM records
- RSCT for Enhanced Concurrent Volume Groups
- C-SPOC LVM operations: cluster-enabled equivalents of the standard SMIT LVM
  functions

The goal in every case: VGDA = ODM.

Figure 7-44. LVM change management

Notes:
The importance of LVM change management
LVM change management is critical for successful takeover in the event of a node
failure.
Information regarding LVM constructs is held in a number of different locations:
- Physical disks: VGDA, LVCB
- AIX files: primarily the ODM, but also /usr/sbin/cluster/etc/vg, files in the /dev
directory and /etc/filesystems
- Physical RAM: kernel memory space
This information must be kept in sync on all nodes that might access the shared volume
group or groups in order for takeover to work.


How to keep LVM synchronized across the cluster


There are several ways to ensure this information is kept in sync:
Manual update
Lazy Update
C-SPOC VG synchronization utility
C-SPOC LVM operations
RSCT (for enhanced concurrent mode volume groups)


LVM changes: Manual

To perform manual changes, the volume group must be active on one of the nodes:
1. Make the necessary changes to the volume group or filesystem
2. Unmount the filesystems and varyoff the VG (or stop cluster services)

   # mklv -y'db10lv' -t'jfs2' sharedvg 10
   # crfs -v jfs2 -d'db10lv' -m'/db10'
   # unmount /sharedfs
   # varyoffvg sharedvg

On all the other nodes that share the volume group:
1. Export the volume group from the ODM
2. Import the information from the VGDA
3. Change the auto vary on flag (if necessary)
4. Correct the permissions and ownerships on the logical volumes as required
5. Repeat on all other nodes
6. Restart Cluster Services to restart the application

   # exportvg sharedvg
   # importvg -V123 -y sharedvg hdisk3
   # chvg -an sharedvg
   # varyoffvg sharedvg

Figure 7-45. LVM changes: Manual

Notes:
After making a change to an LVM component, such as creating a new logical volume and
file system as shown in the figure, you must propagate the change to the other nodes in the
cluster that are sharing the volume group using the steps described. Make sure that the
auto activate is turned off (chvg -an sharedvg) after the importvg command is executed
because the cluster manager will control the use of the varyonvg command on the node
where the volume group should be varied on.
Other than the sheer complexity of this procedure, the real problem with it is that it requires
that the resource group be down while the procedure is being carried out.
Fortunately, there are better ways...


LVM changes: Lazy update

At fallover time, lazy update compares the time stamp value in the VGDA with one
stored in the ODM. If the time stamps are the same, then the varyonvg proceeds.
If the timestamps do not agree, then HACMP does an export/import cycle similar to
a manual update:
- HACMP does change the VG auto vary on flag
- It preserves permissions and ownership of the logical volumes when the VG is a
  Big or Scalable VG
- It will fail if:
  - the necessary PVIDs are not known on all nodes participating in the VG
  - the VG is not known on all nodes in the RG

Figure 7-46. LVM changes: Lazy update

Notes:
HACMP has a facility called Lazy Update that it uses to attempt to synchronize LVM
changes during a fallover.
HACMP uses a copy of the timestamp kept in the ODM and a timestamp from the volume
group's VGDA. AIX updates the ODM timestamp whenever the LVM component is
modified on that system. When a cluster node attempts to vary on the volume group,
HACMP for AIX compares the timestamp from the ODM with the timestamp in the VGDA
on the disk (use /usr/es/sbin/cluster/utilities/clvgdata hdiskn to find the VGDA timestamp for
a volume group). If the values are different, the HACMP for AIX software exports and
re-imports the volume group before activating it. If the timestamps are the same, HACMP
for AIX activates the volume group without exporting and re-importing. The time needed for
takeover expands by a few minutes if a Lazy Update occurs.
This method requires no downtime, although, as indicated, it does increase the fallover
time slightly for the first fallover after the LVM change was made.


Realize, though, that this mechanism will not fix every situation where nodes are out of
sync. Further, having the takeover process fix problems with the LVM metadata at
takeover time is not the preferred method of handling the synchronization.
To preserve permissions and ownership over an import, the volume group must be a Big
or Scalable VG, and the logical volumes must be modified using chlv with the -U (user
ID), -G (group ID), and -P (permissions) flags. The importvg must be done with a -R, as
sketched below.
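A minimal sketch of those flags (the user, group, and mode values are illustrative; the
commands are standard AIX LVM):
   usa # chlv -U dbadmin -G dbgroup -P 660 db10lv
   uk # importvg -R -y sharedvg hdisk3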


LVM changes: C-SPOC synchronization

- Manually make your change to the LVM on one node
- Use C-SPOC to propagate the changes to all nodes in the resource group
  - Filesystem updates (imfs) are not performed by this function if the volume group
    is an enhanced concurrent mode volume group

(The visual shows the flow: you update the VG constructs on one node, then the
C-SPOC synchronization propagates them, updating the ODM and the time stamp file
on the other nodes.)

Figure 7-47. LVM changes: C-SPOC synchronization

Notes:
Using C-SPOC to synchronize manual LVM changes
In this method, you manually make your change to the LVM on one node and then
invoke C-SPOC to propagate the change. Most likely you are using this C-SPOC task
because someone who is unfamiliar with cluster node management made a change to
a shared LVM component without using C-SPOC, creating an out-of-sync condition
between one node in the cluster and the rest of the nodes. This task allows you to use
C-SPOC to clean up after the fact.
Note: If you are using an enhanced concurrent mode volume group and a filesystem
has been added to an existing logical volume without using C-SPOC, the imfs is not
done, which makes this function ineffective. For this reason (among many others), you
are strongly encouraged to use C-SPOC to perform the LVM add/remove/update rather
than using this mechanism to synchronize after the fact.


This facility is accessed by using the following SMIT path in HACMP:
smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume
Management --> Synchronize a Shared Volume Group Definition


Enhanced concurrent mode volume groups

Another synchronization method is the use of ECMVGs (Enhanced Concurrent Mode
Volume Groups):
- RSCT updates LVM information automatically for ECMVGs
  - Happens immediately on all nodes running cluster services
  - Nodes that are not running cluster services will be updated when cluster services
    are started
- Benefits
  - Fast Disk Takeover
  - Can convert existing VGs to ECMVGs via C-SPOC
- Limitations
  - Incomplete: /etc/filesystems is not updated
  - Incompatible: you must be careful using ECMVGs if any product running on the
    system places SCSI reserves on the disks as part of its function

Figure 7-48. Enhanced concurrent mode volume groups

Notes:
RSCT as LVM change management
With enhanced concurrent mode (ECM) volume groups, RSCT automatically updates
the ODM on all the nodes that share the volume group when an LVM change occurs on
one node.
However, because this applies only to ECM volume groups and because
/etc/filesystems is not updated, it's better to explicitly use C-SPOC to make LVM
changes.
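As an aside, the AIX-level command that makes an existing volume group enhanced
concurrent capable is chvg -C (a sketch only; run it on the node where the VG is varied
on, and note that the course recommends doing the conversion through C-SPOC):
   usa # chvg -C sharedvg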


The best method: C-SPOC LVM changes

                      Enhanced Journaled File Systems

Move cursor to desired item and press Enter.

  Add an Enhanced Journaled File System
  Add an Enhanced Journaled File System on a Previously Defined Logical Volume
  List All Shared File Systems
  Change / Show Characteristics of a Shared Enhanced Journaled File System
  Remove a Shared File System

Figure 7-49. The best method: C-SPOC LVM changes

Notes:
You can use C-SPOC both to make the change and to distribute the change.
This approach has two major advantages: no downtime is required, and you can be
confident that the nodes are in sync. It might take a little longer to run than the normal
chfs command, but it is well worth the wait; see the sketch below.
Other C-SPOC screens exist for pretty much any operation that you are likely to want to do
with a shared volume group.
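For those who prefer the command line, the C-SPOC SMIT screens drive wrapper
commands under /usr/es/sbin/cluster/sbin. A minimal sketch of growing the example
filesystem, assuming your HACMP level ships the cl_chfs wrapper and that it accepts
the usual chfs-style size syntax (the size value is illustrative, in 512-byte blocks):
   usa # /usr/es/sbin/cluster/sbin/cl_chfs -a size=+200000 /xwebfs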


LVM changes: Select your filesystem

                      Enhanced Journaled File Systems

Move cursor to desired item and press Enter.

  Add an Enhanced Journaled File System
  Add an Enhanced Journaled File System on a Previously Defined Logical Volume
  List All Shared File Systems
  Change / Show Characteristics of a Shared Enhanced Journaled File System
  Remove a Shared File System

  +----------------------------------------------------------------------+
  |                           File System Name                           |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |  # Resource Group          File System                               |
  |    xwebgroup               /xwebfs                                   |
  +----------------------------------------------------------------------+

Figure 7-50. LVM changes: Select your file system

Notes:
Changing a shared file system using C-SPOC
We have to provide the name of the file system that we want to change. The file system
must be in a volume group that is currently online somewhere in the cluster and is
already configured into a resource group.


Update the size of a filesystem

      Change/Show Characteristics of a Shared File System in the Cluster

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Resource Group Name                               xwebgroup
  File system name                                  /xwebfs
  NEW mount point                                   [/xwebfs]
  SIZE of file system (in 512-byte blocks)          [4000000]
  Mount GROUP                                       []
  PERMISSIONS                                       read/write             +
  Mount OPTIONS                                     []                     +
  Start Disk Accounting?                            no                     +
  Block Size (bytes)                                4096
  Inline Log?                                       no
  Inline Log size (MBytes)                          0

Figure 7-51. Update the size of a file system

Notes:
Changing file system size
Specify a new file system size, in 512-byte blocks, and press Enter. For example, the
[4000000] shown on the visual is 4,000,000 x 512 bytes, or about 2 GB. The file system
is re-sized, and the relevant LVM information is updated on all cluster nodes configured
to use the file system's volume group.


HACMP resource group operations

             HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

  Show the Current State of Applications and Resource Groups
  Bring a Resource Group Online
  Bring a Resource Group Offline
  Move a Resource Group to Another Node / Site
  Suspend/Resume Application Monitoring
  Application Availability Analysis

Figure 7-52. HACMP resource group operations

Notes:
HACMP resource group and application management
This visual shows the selections for managing resource groups.


Priority override location (POL): Old

Old, problem behavior:
- Assigned during a resource group move operation
  - The destination node for a resource group online, offline, or move request becomes
    the resource group's POL
- Represents the location a resource group goes to regardless of cluster events
  - Meant to honor the administrator's desire to have the resource group on a
    specific node
  - Truly an override of the resource group policy setting
- RestoreNodePriority caused resource group movement, regardless of Fallback
  policy
- POL is viewed with the command:
  /usr/es/sbin/cluster/utilities/clRGinfo -p
- Information maintained in a file
  - Manual manipulation possible by changing the file
- Obvious problem is that the behavior of the resource group might be unexpected, in
  that it might contradict the policy in the resource group

Figure 7-53. Priority override location (POL): Old

Notes:
Priority override location (old) problem behavior
Problem behavior is in the following levels:
- Before HACMP V5.3 PTF IY84883 May 2006
- Before HACMP V5.2 PTF IY82989 April 2006
- Before HACMP V5.1 PTF IY84646 May 2006
HACMP 5.x introduced the notion of a priority override location. A priority override
location overrides all other fallover and fallback policies and possible locations for the
resource group.
A resource group does not normally have a priority override location (POL). The
destination node that you specify for a resource group move, online or offline request
(see next couple of visuals) becomes the priority override location for the resource
group. The resource group remains on that node in an online state (if you moved or
on-lined it there) or offline state (if you off-lined it there) until the priority override location
is cancelled.

Persistent and non-persistent POL


Priority override locations can be persistent and non-persistent.
- A persistent priority override location remains in effect until explicitly cancelled.
- A non-persistent priority override location is cancelled either explicitly or implicitly
when the HACMP daemons are shut down on all the nodes in the cluster
simultaneously.

Concurrent access resource groups


The behavior of priority override location varies depending on whether the resource
group is a concurrent access resource group. The discussion here refers to the
behavior of non-concurrent access resource groups. Refer to Chapter 15 of the HACMP
for AIX Administration Guide for information on how priority override locations work for
concurrent access resource groups.


Priority override location (POL): New

Pre-HACMP V5.4:
- RestoreNodePriority resets the POL, then moves the RG back to the highest priority
  node only if the Fallback Policy is fallback to highest priority node

HACMP V5.4 and later:
- Function is strictly internal
  - The resource group is moved only for resource group move operations
  - No RestoreNodePriority SMIT choice
  - Original highest priority node is remembered and flagged in SMIT on later moves
  - Persist across cluster reboot is no longer supported
- Destination node is now the new home node
- Changes to /usr/es/sbin/cluster/utilities/clRGinfo -p
  - Now shows location of temporary highest priority node and timestamp of move

Figure 7-54. Priority override location (POL): New

Notes:
Priority override location: Problems solved
New behavior is in the following levels and later:
- HACMP V5.3 PTF IY84883 May 2006
- HACMP V5.2 PTF IY82989 April 2006
- HACMP V5.1 PTF IY84646 May 2006
Prior to HACMP 5.4 but with the above mentioned PTFs or later, the problem where the
resource group moved on RestoreNodePriority regardless of Fallback Policy settings
was fixed. Now the RestoreNodePriority only resets the POL setting, unless the
Fallback Policy is fallback to highest priority node. In that case, the behavior is the
same as the old way.
For HACMP 5.4 and later, the function is strictly internal and the Resource Group Move
operation is treated as temporary. If more permanent changes are desired, make the
changes in the Resource Group. The original highest priority node is flagged in SMIT
when subsequent resource group moves are initiated.

Moving a resource group (1 of 2)

             HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

  Move Resource Groups to Another Node
  Move Resource Groups to Another Site

  +----------------------------------------------------------------------+
  |                      Select a Destination Node                       |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |  # *Denotes Originally Configured Highest Priority Node              |
  |    *usa                                                              |
  |     uk                                                               |
  |     india                                                            |
  +----------------------------------------------------------------------+

Figure 7-55. Moving a resource group (1 of 2)

Notes:
Moving a resource group
Prior to the SMIT panel shown, the resource group must be chosen from a list of online
resource groups. You can request that a resource group be moved to any node that is in
the resource group's list of nodes (where cluster services are active).
The clRGmove utility program is used, which can also be invoked from the command
line; see the man page for details and the sketch below.
The destination node that you specify becomes the resource group's priority override
location.
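A minimal command-line sketch (the group and node names are the course examples;
the -m, -u, and -d operations move a group, bring it online, and bring it offline,
respectively):
   usa # /usr/es/sbin/cluster/utilities/clRGmove -g xwebgroup -n uk -m
   usa # /usr/es/sbin/cluster/utilities/clRGmove -g xwebgroup -n uk -d
   usa # /usr/es/sbin/cluster/utilities/clRGmove -g xwebgroup -n uk -u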

Working with the POL

For HACMP 5.3 and earlier, a resource group's priority override location can be
cancelled by selecting a destination node of Restore_Node_Priority_Order.


For HACMP 5.3 and earlier, if Persist Across Cluster Reboot is set to true, then the
priority override location will be persistent. Otherwise, it will be non-persistent.


Moving a resource group (2 of 2)

                          Move a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Resource Group to be Moved                        xwebgroup
  Destination Node                                  uk

An option to persist across cluster reboot is available prior to HACMP V5.4.
Monitor for the cluster to stabilize, and verify that the resources are available on the
target.

Figure 7-56. Moving a resource group (2 of 2)

Notes:
This screen follows; press Enter to move the xwebgroup resource group to the uk node.


Bring a resource group offline (1 of 3)

             HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

  Show the Current State of Applications and Resource Groups
  Bring a Resource Group Online
  Bring a Resource Group Offline
  Move a Resource Group to Another Node / Site
  Suspend/Resume Application Monitoring

  +----------------------------------------------------------------------+
  |                       Select a Resource Group                        |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |  # Resource Group          State             Node(s) / Site          |
  |    xwebgroup               ONLINE            uk /                    |
  +----------------------------------------------------------------------+

Figure 7-57. Bring a resource group offline (1 of 3)

Notes:
Bring a resource group offline: Select a resource group
To start, you must select the resource group you wish to take offline. Then you'll select
an online node where you want the resource group brought offline. This is pretty
obvious for a resource group that will only be active on one node at a time (OHNO or
OFAN). For resource groups that can be online on more than one node at once (Online
on All Available), you can choose All or just one of the active nodes.


Bring a resource group offline (2 of 3)

             HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

  Show the Current State of Applications and Resource Groups
  Bring a Resource Group Online
  Bring a Resource Group Offline
  Move a Resource Group to Another Node / Site
  Suspend/Resume Application Monitoring
  Application Availability Analysis

  +----------------------------------------------------------------------+
  |                        Select an Online Node                         |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |    uk                                                                |
  +----------------------------------------------------------------------+

Figure 7-58. Bring a resource group offline (2 of 3)

Notes:
Now choose the node where the resource group will be taken offline.


Bring a resource group offline (3 of 3)

                      Bring a Resource Group Offline

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  Resource Group to Bring Offline                   xwebgroup
  Node On Which to Bring Resource Group Offline     uk

The option to persist across cluster reboot is available prior to HACMP V5.4.

Figure 7-59. Bring a resource group offline (3 of 3)

Notes:
Bring a resource group offline
When a resource group is brought offline on a node, all resources will be deactivated on
that node.


Bring a resource group back online

             HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

  Show the Current State of Applications and Resource Groups
  Bring a Resource Group Online
  Bring a Resource Group Offline
  Move a Resource Group to Another Node / Site
  Suspend/Resume Application Monitoring
  Application Availability Analysis

  +----------------------------------------------------------------------+
  |                      Select a Destination Node                       |
  |                                                                      |
  |  Move cursor to desired item and press Enter.                        |
  |                                                                      |
  |  # *Denotes Originally Configured Highest Priority Node              |
  |     usa                                                              |
  |     uk                                                               |
  +----------------------------------------------------------------------+

Figure 7-60. Bring a resource group back online

Notes:
Bring a resource group online
First, you'll choose an offline resource group. Then the option above will display the
potential nodes on which to bring it online.
Bringing a resource group online will activate the resources in it on the target node.
Again, watch for the cluster to go stable, and verify that the resources are available on the
intended target node.


Log files generated by HACMP

/var/hacmp/adm/cluster.log              "High level view" of cluster activity
/var/hacmp/adm/history/cluster.mmddyyyy Cluster history files, generated daily
/var/hacmp/log/cspoc.log*               Generated by C-SPOC commands
/var/hacmp/clverify/clverify.log        Verbose messages from clverify
                                        (cluster verification utility)
/var/hacmp/log/emuhacmp.out*            Output of emulated events
/var/hacmp/log/hacmp.out*               Output of today's HACMP event scripts
/var/hacmp/log/hacmp.out.<1-7>*         Cycled copies of hacmp.out
AIX error log                           All sorts of stuff!
/var/ha/log/topsvcs                     Tracks execution of topology services
                                        daemon
/var/ha/log/grpsvcs                     Tracks execution of group services
                                        daemon
/var/hacmp/log/clstrmgr.debug*          Tracks internal execution of the
                                        cluster manager
/var/hacmp/clcomd/clcomd.log            Tracks activity of clcomd
/var/hacmp/clcomd/clcomddiag.log        Tracks more detailed activity of
                                        clcomd when tracing is turned on
/var/hacmp/log/clavan.log               Output of application availability
                                        analysis tool
/var/hacmp/log/clconfigassist.log       Two-Node Cluster Configuration
                                        Assistant
/var/hacmp/log/clutils.log              Generated by utilities and file
                                        propagation
/var/hacmp/log/cl_testtool.log          Generated by test tool

* denotes logs that were in /tmp prior to HACMP 5.4.1

Figure 7-61. Log files generated by HACMP - before HACMP 5.4.1

Notes:
Log files
The visual summarizes the HACMP log files.
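To watch event processing live, for example, you can simply tail the main log (the path
shown applies to HACMP 5.4.1 and later; on earlier levels use /tmp/hacmp.out):
   usa # tail -f /var/hacmp/log/hacmp.out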


Log files generated by HACMP: HACMP 5.4.1 and later

- On HACMP 5.4.1 and later cluster configurations, all log files default to /var/hacmp
  - HACMP Log Viewing and Management facility
  - Existing configurations preserve any log file redirections
- New log files:
  - /var/hacmp/log/clstrmgr.debug.long
  - /var/hacmp/log/migration.log
  - /var/hacmp/log/cspoc.log.remote
- Improvements to event script logging in hacmp.out (more in Unit 10)
- Logging improvements for clcomd, clver
- snap command can collect AIX snapshot data from cluster nodes as well as HACMP
  data

Figure 7-62. Log files generated by HACMP - HACMP 5.4.1 and later

Notes:
When installed from scratch, HACMP 5.4.1 will use /var/hacmp/log as the default for all log
files.
You can view the current settings through SMIT using the HACMP Log Viewing and
Management path.
Of course, if you install on top of an existing configuration, or apply a snapshot, your
settings will be preserved; however, if you want to redirect all log files, there is a new SMIT
path that enables you to redirect them all at once.
HACMP uses Korn shell scripts to perform recovery operations. An effort was made in
HACMP 5.4.1 to clean up these scripts and consolidate the use of things like VERBOSE
LOGGING, set -x, and the PS4 settings. This produces more consistent results in
hacmp.out and makes it easier to read and follow.
Similarly, for key components such as clcomd and clver, the logging was made more
consistent.


The clsnap command was also updated to collect everything needed at the same time,
rather than requiring multiple commands and multiple options.


Let's review: Topic 2

1. True or False?
   Using C-SPOC reduces the likelihood of an outage by reducing the
   likelihood that you will make a mistake.

2. True or False?
   C-SPOC reduces the need for a change management process.

3. C-SPOC cannot do which of the following administration tasks?
   a. Add a user to the cluster
   b. Change the size of a filesystem
   c. Add a physical disk to the cluster
   d. Add a shared volume group to the cluster
   e. Synchronize existing passwords
   f. None of the above

4. True or False?
   It does not matter which node in the cluster is used to initiate a C-SPOC
   operation.

5. Which log file provides detailed output on HACMP event script execution?
   a. /tmp/clstrmgr.debug
   b. /tmp/hacmp.out
   c. /var/adm/cluster.log

Figure 7-63. Let's review: Topic 2

Notes:


7.3 Dynamic automatic reconfiguration event facility


Dynamic Automatic Reconfiguration Event facility

After completing this topic, you should be able to:
- Discuss the benefits and capabilities of DARE
- Make changes to cluster topology and resources in an active cluster
- Use the snapshot facility to return to a previous cluster configuration or to roll back
  changes

Figure 7-64. Dynamic Automatic Reconfiguration Event facility

Notes:
Dynamic Automatic Reconfiguration Event
In this topic, we examine HACMP's capability to perform changes to the cluster
configuration while the cluster is running. This capability is known as Dynamic
Automatic Reconfiguration Event, or DARE for short.


Dynamic reconfiguration

HACMP provides a facility that allows changes to cluster topology and resources to be
made while the cluster is active. This facility is known as DARE. DARE requires three
copies of the HACMP ODM, all held in rootvg:

DCD   Default Configuration Directory, which is updated by SMIT/command line:
      /etc/objrepos
SCD   Staging Configuration Directory, which is used during reconfiguration:
      /usr/es/sbin/cluster/etc/objrepos/staging
ACD   Active Configuration Directory, from which clstrmgr reads the cluster
      configuration: /usr/es/sbin/cluster/etc/objrepos/active

Figure 7-65. Dynamic reconfiguration

Notes:
How it works
Dynamic Reconfiguration is made possible by the fact that HACMP holds three copies
of the ODM, known as the Default, Staging, and Active configuration directory. By
holding three copies of the ODM, HACMP can make changes on one node and
propagate them to other nodes in the cluster while an active configuration is currently
being used.


What can DARE do?

DARE allows changes to be made to most cluster topology and nearly all resource
group components without the need to stop Cluster Services, take the application
offline, or reboot a node. All changes must be synchronized in order to take effect.
Here are some examples of the tasks that DARE can complete for topology and
resources without having to bring Cluster Services down.

Topology changes:
- Adding or removing cluster nodes
- Adding or removing networks
- Adding or removing communication interfaces or devices
- Swapping a communication interface's IP address

Resource changes:
- All resources can be changed

Figure 7-66. What can DARE do?

Notes:
What can DARE do?
The visual shows some of the changes that can be made dynamically using DARE.


What limitations does DARE have?

DARE cannot change all cluster topology and resource group components without the
need to stop Cluster Services, take the application offline, or reboot a node. Here are
some examples that require a stop and restart of Cluster Services for the change to be
made.

Topology changes:
- Change the name of the cluster
- Change the name of a cluster node
- Change a communication interface attribute
- Change whether a network uses IPAT via IP aliasing or via IP replacement
- Change the name of a network module
- Add a network interface module
- Remove a network interface module

Resource changes:
- Change the name of a resource group
- Change the name of an application server
- Change the node relationship

DARE cannot run if two nodes are not at the same HACMP level.

Figure 7-67. What limitations does DARE have?

Notes:
Limitations
Some changes require a restart of Cluster Services.
Also, DARE requires that all nodes are at the same HACMP level.


So how does DARE work?

DARE uses the three separate copies of the ODM to allow changes to be propagated
to all nodes while the cluster is active. The flow on the visual is:

1. Change topology or resources in SMIT
2. Synchronize topology or resources in SMIT
3. A snapshot is taken of the ACD
4. The cluster manager reads the SCD and refreshes the current ACD
5. The SCD is deleted

Figure 7-68. So how does DARE work?

Notes:
How it works
DARE uses three copies of the HACMP ODM to propagate live updates to the cluster
topology or resource configuration across the cluster. This is done in five steps detailed
above. Although it is possible to make a nearly arbitrarily large set of changes to the
configuration and then synchronize them all in one operation, it is usually better to make
a modest change, synchronize it, verify that it works, and then move on to more
changes.
Note that many changes are incompatible with the cluster's current AIX configuration.
Such changes are, therefore, not possible to synchronize using DARE. Instead, the
cluster has to be taken down while the appropriate AIX configuration changes are
applied. (It is sometimes possible to remove some resources from a resource group,
synchronize, change the AIX configuration of the resources, add them back into the
resource group, and synchronize again; although, there is likely to be little point in
running the resource group without the resources.)

HACMP 5.x synchronizes both topology changes and resource changes whenever it is
run. This is a change from previous releases of HACMP.


Verifying and synchronizing (standard)

               Initialization and Standard Configuration

Move cursor to desired item and press Enter.

  Configuration Assistants
  Configure an HACMP Cluster and Nodes
  Configure Resources to Make Highly Available
  Configure HACMP Resource Groups
  Verify and Synchronize HACMP Configuration
  Display HACMP Configuration
  HACMP Cluster Test Tool

Figure 7-69. Verifying and synchronizing (standard)

Notes:
Verifying and synchronizing (standard)
This visual highlights the Verify and Synchronize HACMP Configuration menu entry
in the top-level Standard Configuration path's SMIT menu.
Invoking this menu entry initiates an immediate verification and synchronization of the
HACMP configuration from the local node's DCD (there is no opportunity provided to
modify the process in any way).


Verifying and synchronizing (extended)

            HACMP Verification and Synchronization    (when node DOWN)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
* Verify, Synchronize or Both                       [Both]                 +
* Automatically correct errors found during         [No]                   +
    verification?
* Force synchronization if verification fails?      [No]                   +
* Verify changes only?                              [No]                   +
* Logging                                           [Standard]             +

   HACMP Verification and Synchronization (Active Cluster Nodes Exist)
                                                      (when node UP)
* Emulate or Actual                                 [Actual]               +
* Verify changes only?                              [No]                   +
* Logging                                           [Standard]             +

Figure 7-70. Verifying and synchronizing (extended)

Notes:
Verifying and synchronizing (extended)
When the Extended Verification and Synchronization option in the extended
configuration path's top-level menu is selected, the SMIT screen above displays. It
allows the cluster administrator to modify the default verification and synchronization
procedure somewhat.

Emulate or actual
The default of Actual causes the changes being verified and synchronized to take
effect (become the actual cluster configuration) if the verification succeeds. Setting this
field to Emulate causes HACMP to verify and then go through the motions of a
synchronize without actually causing the changes to take effect. This is useful to get a
sense of what side effects the synchronization is likely to result in. For example, if the
proposed change would trigger a fallover or a fallback (because node priorities have

Copyright IBM Corp. 1998, 2008

Unit 7. Basic HACMP administration

Course materials may not be reproduced in whole or in part


without the prior written permission of IBM.

7-95

Student Notebook

changed), then this would be apparent by looking at /<log_dir>/emuhacmp.out or
/var/hacmp/adm/cluster.log.
Note: Because, in this case, no fallover or fallback actually occurs, it is not possible to
determine if the hypothetical fallback would work if actually performed (it might fail for
any number of subtle reasons that simply cannot be discovered by an emulated
synchronization).

Force synchronization if verification fails?


Setting this to True requests that HACMP accept configurations that it does not
consider to be entirely valid. This is potentially a very dangerous request and should not
be made without considerable planning and analysis to ensure that the impact is
acceptable.

Verify changes only?


Setting this to True causes the proposed change to be verified but not synchronized.
This can be used to see if a change is valid without actually putting it into effect.

Logging
This field can be set to Standard to request the default level of logging or to Verbose to
request a more, ummmm, verbose level of logging! If you are having problems getting a
change to verify and do not understand why it will not verify, then setting the logging
level to Verbose might provide additional information.


Discarding unwanted changes

                       Problem Determination Tools

Move cursor to desired item and press Enter.

  HACMP Verification
  View Current State
  HACMP Log Viewing and Management
  Recover From HACMP Script Failure
  Restore HACMP Configuration Database from Active Configuration
  Release Locks Set By Dynamic Reconfiguration
  Clear SSA Disk Fence Registers
  HACMP Cluster Test Tool
  HACMP Trace Facility
  HACMP Event Emulation
  HACMP Error Notification
  Manage RSCT Services
  Open a SMIT Session on a Node

Figure 7-71. Discarding unwanted changes

Notes:
Rolling back an unwanted change that has not yet been synchronized
If you have made changes that you have decided not to synchronize, they can be
discarded using the Restore HACMP Configuration Database from Active
Configuration menu entry shown above. It is located under the Problem
Determination Tools menu (accessible from the top-level HACMP SMIT menu).
Prior to rolling back the DCD on all nodes, the current contents of the DCD on the node
used to initiate the rollback are saved as a snapshot (in case they should prove useful in
the future). The snapshot will have a rather long name similar to:
Restored_From_ACD.Sep.18.19.33.58
This name can be interpreted to indicate that the snapshot was taken at 19:33:58 on
September 18th (the year is not preserved in the name).
Because the change being discarded is sometimes a change that has been emulated,
this operation is sometimes called rolling back an emulated change. This is a misnomer,
as the operation rolls back any change that has not yet been verified and synchronized
by restoring all nodes' DCDs to the contents of the currently active cluster configuration.


Rolling back from a DARE operation


Restore the Cluster Snapshot

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                     [Entry Fields]
  Cluster Snapshot Name                              jami
  Cluster Snapshot Description                       Cuz -- he did the lab>
  Un/Configure Cluster Resources?                    [Yes]           +
  Force apply if verify fails?                       [No]            +

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do

Figure 7-72. Rolling back from a DARE operation

Notes:
Rolling back an unwanted change that has been synchronized
If you find that a DARE change does not give the desired result, then you might want to
roll it back. DARE cuts a snapshot of the active configuration immediately prior to
committing the new HACMP configuration. This snapshot is named active.x.odm (where x is
0...9, 0 being the most recent). It can be used to restore the cluster to an earlier state.

Manual snapshots are useful


If many changes have been made in reasonably rapid succession, then you might lose
track of which active.x snapshot is the one that you want. To defend yourself against
this possibility, it is best to manually take a snapshot before embarking on a series of
changes. This allows you to roll back to a known point rather than having to guess
which active.x snapshot is the right one!


Snapshots are stored in the directory /usr/es/sbin/cluster/snapshots by default (the
default can be overridden by setting the SNAPSHOTPATH environment variable).
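A quick way to see which snapshots exist, and to redirect new snapshots to another
directory, is shown below; the directory in the export is just an example:

  # List the snapshots taken so far (default location)
  ls -l /usr/es/sbin/cluster/snapshots

  # Redirect future snapshots to a directory of your choosing (example path)
  export SNAPSHOTPATH=/hacmp/snapshots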


What if DARE fails?


If a dynamic reconfiguration fails because of an unexpected
cluster event, then the staging configuration directory might still exist.
This prevents further changes being made to the cluster.

(The visual shows the DARE flow: change topology or resources in SMIT; synchronize
topology or resources in SMIT, creating the SCD and taking a snapshot of the ACD; the
cluster manager reads the SCD and refreshes the current ACD; the SCD is deleted. A
failure, the Bang!, partway through the flow leaves the SCD in place on the nodes.)

Figure 7-73. What if DARE fails?

Notes:
What if DARE fails?
If a node failure occurs while a synchronization is taking place, then the Staging
Configuration Directory (SCD) might not be cleared on all nodes. The presence of the SCD
prevents further configuration changes from being performed. If the SCD is not cleared
at the end of a synchronization, then this indicates that the DARE operation did not
complete or was not successful; hence, the SCD acts as a lock against further
changes being made.
Note that the SCD copies are made before the change is copied by each node's cluster
manager into each node's ACD. If there is an SCD when Cluster Services starts up on a
node, it copies it to the ACD, deletes the SCD, and uses the new ACD as its
configuration. Because a node failure at any point after any of the SCDs exists could
result in only some of the nodes having the updated SCD, the SCDs must be removed
before a restart of Cluster Services on any node (or you risk different cluster nodes
running with different configurations, a situation that results in one or more cluster
nodes crashing).
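As a hedged illustration only (the exact ODM directory layout is release dependent, and
the path below is an assumption based on the usual HACMP 5.x conventions), you can
look for a leftover SCD on each node before restarting Cluster Services:

  # Check for a leftover staging configuration directory (path is an assumption)
  ls -d /usr/es/sbin/cluster/etc/objrepos/stage 2>/dev/null && echo "SCD still present"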


Dynamic reconfiguration lock


Problem Determination Tools

Move cursor to desired item and press Enter.

  HACMP Verification
  View Current State
  HACMP Log Viewing and Management
  Recover From HACMP Script Failure
  Restore HACMP Configuration Database from Active Configuration
  Release Locks Set By Dynamic Reconfiguration
  Clear SSA Disk Fence Registers
  HACMP Cluster Test Tool
  HACMP Trace Facility
  HACMP Event Emulation
  HACMP Error Notification
  Manage RSCT Services
  Open a SMIT Session on a Node

F1=Help     F2=Refresh    F3=Cancel    F8=Image
F9=Shell    F10=Exit      Enter=Do

Figure 7-74. Dynamic reconfiguration lock

Notes:
Clearing dynamic reconfiguration locks
The SMIT menu option Release Locks Set By Dynamic Reconfiguration clears out the
SCD and allows further synchronizations to be made to the cluster configuration. If an
SCD exists on any cluster node, then no further synchronizations are permitted until it is
deleted using the above SMIT menu option.


Let's review: Topic 3

1. True or False?
   DARE operations can be performed while the cluster is running.
2. Which operations can DARE not perform (select all that apply)?
   a. Changing the name of the cluster
   b. Removing a node from the cluster
   c. Changing a resource in a resource group
   d. Changing whether a network uses IPAT via IP aliasing or via IP replacement
3. True or False?
   It is possible to roll back from a successful DARE operation using an
   automatically generated snapshot.
4. True or False?
   Running a DARE operation requires three separate copies of the HACMP ODM.
5. True or False?
   Cluster snapshots can be applied while the cluster is running.
6. What is the purpose of the dynamic reconfiguration lock?
   a. To prevent unauthorized access to DARE functions
   b. To prevent further changes being made until a DARE operation has completed
   c. To keep a copy of the previous configuration for easy rollback
Figure 7-75. Let's review: Topic 3

Notes:


7.4 WebSMIT


Implementing WebSMIT
After completing this topic, you should be able to:
Configure and use WebSMIT

Figure 7-76. Implementing WebSMIT

Notes:


Web-enabled SMIT

HACMP 5.2 and up includes a Web-enabled user interface that provides easy access to:
- HACMP configuration and management functions
- Interactive cluster status display and manipulation
- HACMP online documentation

The Web-enabled SMIT (WebSMIT) interface is similar to the ASCII SMIT interface.
You do not need to learn a new user interface or terminology, and you can easily
switch between ASCII SMIT and WebSMIT.

To use the WebSMIT interface, you must configure and run a Web server process on
the cluster nodes to be administered.
- The configuration has been made simpler with HACMP 5.4 and later
- Use the websmit_config utility

Figure 7-77. Web-enabled SMIT (WebSMIT)

Notes:
Introduction
WebSMIT combines the advantages of SMIT with the ease of access from any system
that runs a browser.
For those looking for a graphical interface for managing and monitoring HACMP,
WebSMIT provides those capabilities via a Web browser. It provides real-time graphical
status of the cluster components, similar to clstat.cgi. It also provides context-menu
access to control those components, launching a WebSMIT menu containing the
action or actions to take. There are multiple views: Node-by-node, Resource Group,
Associations, component Details, and so on.

Configuration
This utility uses SNMP, so it is imperative that your SNMP interface to the
cluster manager is functioning. To test that, attempt a cldump command on the system
where you will be running the WebSMIT utility. A configuration utility is provided
(websmit_config) requiring that only a supported HTTP server be installed to configure
the system for use as a WebSMIT server. A robust control tool, websmitctl, is provided
as well to control the HTTP server. Check it out in lab.
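A minimal sequence for checking the SNMP interface and then configuring the node,
using only the utilities named above (the cldump path assumes the standard HACMP
utilities directory):

  # Confirm that snmp can reach the cluster manager; cldump should print cluster state
  /usr/es/sbin/cluster/utilities/cldump

  # Configure this node as a WebSMIT server
  cd /usr/es/sbin/cluster/wsm
  ./websmit_config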

Features
- Off-line/Unavailable status is displayed as grayed out.
- Most WebSMIT items can be assigned a custom color set.
- Auto-configuration improvements.
- Language support is more sophisticated.
- An instant help system.
- Resource-type awareness in the display.


WebSMIT main page

(Screen shot of the WebSMIT main page; a callout marks HACMP SMIT access.)

Figure 7-78. WebSMIT main page

Notes:
Introduction
To connect to WebSMIT, point your browser to the cluster node that you have
configured for WebSMIT.
WebSMIT uses port 42267 by default.
After authentication, this will be the first screen that you see. Note the Navigation Frame
(left side) and the Activity Frame (right side). Also, note that we're looking at
configuration options only. Each pane is tabulated to provide access to different status,
functions, or controls.
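For example, assuming a cluster node named halifax has been configured as the
WebSMIT server (the hostname is hypothetical) and the default port is in use, you would
point the browser at:

  https://halifax:42267/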
Navigation Frame tabs:
- SMIT - access to HACMP SMIT
- N&N - a Node-by-node relationship and status view of the cluster (if snmp can get
  cluster information)
- RGs - a Resource Group relationship and status view of the cluster
Use the Expand All or Collapse All links to get the full view or clean up the view.
Activity Frame tabs:
- Configuration - permanent access to HACMP SMIT from Activity Frame
- Details - comes to top when a component is selected in Navigation Frame, and
displays configuration information about the component
- Associations - shows component relationship to other HACMP components for
component that is selected in the Navigation Frame
- Doc - If the HACMP pubs were installed (html or pdf version), this tab will display
links to access them
Don't attempt to navigate using the browser's Back or Forward buttons. Note the
FastPath box at the bottom of the Configuration tab. This allows you to go directly to
any (that is, any) SMIT panel if you know the fastpath. What's the fastpath to the SMIT
top menu?


WebSMIT context menu controls

(Screen shot: a right mouse click on app_server in the Navigation Frame opens a
context menu; choosing an item from the context menu changes the Activity Frame.)

Figure 7-79. WebSMIT context menu controls

Notes:
Using the context menus
Right-click the object in the Navigation Frame. Choose the item you want to control from
the context menu and watch the Activity Frame change to the task you're trying to
perform. Remember this is still SMIT, so you'll get HACMP SMIT menus as a result of
the context menu selections.

Status
Notice that the icons (on the screen anyway) indicate online (not grayed out) or offline
(grayed out). This is real-time status. More to come on the next visual, regarding the
associations.


WebSMIT associations

Figure 7-80. WebSMIT associations

Notes:
Associations
If you don't click fast enough (or just pause long enough) between selecting the
Resource Group and clicking the Associations tab, you'll see the Details tab come to
the top of the Activity Frame with the configuration details of the Resource Group.

Enhancements with HACMP 5.4.1


Some of the changes made for HACMP 5.4.1 are:
- Prior releases used a red square for off-line/unavailable status. Industry
  convention is that the color red is used to indicate a problem situation.
- Off-line/unavailable status is now indicated by graying out the affected item or
  items. This is the more common, industry-standard approach.
- Most WebSMIT items can now be assigned custom colors.
  - This improves accessibility for customers with visual deficits.
  - Color customizations can be assigned both globally and at the browser level.
    Global customizations must be made manually in the wsm_custom.css file on
    the WebSMIT server. Local, per-browser customizations can be made through
    the new Customize WebSMIT panel, which can be accessed via the
    Configuration tab under Extended Options.

Note: Local customizations are stored as a cookie in the browser; so if the customers
change browsers, they must recreate their customizations in their new browsers.


WebSMIT online documentation

Figure 7-81. WebSMIT online documentation

Notes:
Online documentation
This screen enables you to view the HACMP manuals in either HTML or PDF format.
You must install the HACMP documentation file sets.


WebSMIT configuration

Base Directory is /usr/es/sbin/cluster/wsm

Consult the documentation
- Readme located at /usr/es/sbin/cluster/wsm/README
- Manuals installed from cluster.doc.en_US.es.html and cluster.doc.en_US.es.pdf

Configure and run a Web server on cluster nodes
- ./websmit_config takes it from there

Optionally, implement stricter security
- Customize ./wsm_smit.conf
- Setuid program ./cgi-bin/wsm_cmd_exec permissions must be set correctly

Consult log files for progress status
- ./logs/wsm_smit.log
- ./logs/wsm_smit.script

Optionally, control the SMIT panels that can be accessed
- ./wsm_smit.allow
- ./wsm_smit.deny
- ./wsm_smit.redirect

Figure 7-82. WebSMIT configuration

Notes:
Documentation
The primary source for information on configuring WebSMIT is the WebSMIT README
file as shown in the visual. The HACMP Planning and Installation Guide provides some
additional information on installation and the HACMP Administration Guide provides
information on using WebSMIT.

Web server
To use WebSMIT, you must configure one (or more) of your cluster nodes as a Web
server. You must use either IBM HTTP Server (IBMIHS) V6.0 (or later) or Apache 1.3
(or later). Refer to the specific documentation for the Web server you choose.
This configuration is done using the websmit_config utility, located in
/usr/es/sbin/cluster/wsm. See the README file for details.


WebSMIT security
Because WebSMIT gives you root access to all the nodes in your cluster, you must
carefully consider the security implications.
WebSMIT uses a configuration file, wsm_smit.conf, that contains settings for
WebSMIT's security related features. This file is installed as
/usr/es/sbin/cluster/wsm/wsm_smit.conf, and it may not be moved to another location.
The default settings used provide the highest level of security in the default AIX/Apache
environment. However, you should carefully consider the security characteristics of your
system before putting WebSMIT to use. You might be able to use different combinations
of security settings for AIX, Apache, and WebSMIT to improve the security of the
application in your environment.
WebSMIT uses the following configurable mechanisms to implement a secure
environment:
- Non-standard port
- Secure http (https)
- User authentication
- Session time-out
- wsm_cmd_exec setuid program

Use non-standard port


WebSMIT can be configured to allow access only over a specified port using the
wsm_smit.conf AUTHORIZED_PORT setting. If you do not specify an AUTHORIZED_PORT,
or specify a port of 0, then any connections via any port will be accepted. It is strongly
recommended that you explicitly specify the AUTHORIZED_PORT, and that you use a
non-standard port. The default setting for this configuration variable is 42267.
Allow only secure http
If your HTTP server supports secure HTTP, it is strongly recommended that you require
all WebSMIT connections to be established via HTTPS. This will ensure that you are
not transmitting sensitive information about your cluster over the Internet in plain text.
WebSMIT can be configured to require secure http access using the wsm_smit.conf
REDIRECT_TO_HTTPS setting. If the value for this setting is 1, then users connecting to
WebSMIT via an insecure connection will be redirected to a secure http connection.
The default value for REDIRECT_TO_HTTPS is 1.
Note: Regarding the REDIRECT_TO_HTTPS variable, the README file states:
This variable will only function correctly if the AUTHORIZED_PORT feature is disabled.
This did not appear to be true in our testing.
Require user authentication
If Apache's built-in authentication is not being used, WebSMIT can be configured to use
AIX authentication using the wsm_smit.conf file REQUIRE_AUTHENTICATION setting. If
the value for this setting is 1 and there is no .htaccess file controlling access to
WebSMIT, the user will be required to provide AIX authentication information before
gaining access.
(Refer to the documentation included with Apache for more details about Apache's
built-in authentication.)
The default value for REQUIRE_AUTHENTICATION is 1. If REQUIRE_AUTHENTICATION is
set, then the HACMP administrator must specify one or more users who are allowed to
access the system. This can be done using the wsm_smit.conf ACCEPTED_USERS
setting. Only users whose names are specified will be allowed access to WebSMIT, and
all ACCEPTED_USERS will be provided with root access to the system. By default, only the
root user is allowed access via the ACCEPTED_USERS setting.
Warning
Because AIX authentication mechanisms are in use, login failures can cause an account to
be locked. It is recommended that a separate user be created for the sole purpose of
accessing WebSMIT. If the root user has a login failure limit, failed WebSMIT login attempts
could quickly lock the root account.

Session time-out
Continued access to WebSMIT is controlled through the use of a non-persistent session
cookie. Cookies must be enabled in the client browser in order to use AIX
authentication for access control. If the session is used continuously, then the cookie
will not expire. However, the cookie is designed to time out after an extended period of
inactivity. WebSMIT allows the user to adjust the time-out period using the
wsm_smit.conf SESSION_TIMEOUT setting. This configuration setting must have a value
expressed in minutes. The default value for SESSION_TIMEOUT is 20 (minutes).
Controlling access to wsm_cmd_exec (setuid)
A setuid program is supplied with WebSMIT that allows non-root users to execute
commands with root permissions (wsm_cmd_exec). The setuid bit for this program must
be turned on in order for the WebSMIT system to function.
For security reasons, wsm_cmd_exec must not have read permission for non-root users.
Do not allow a non-root user to copy the executable to another location or to
decompile the program.
Thus the utility wsm_cmd_exec (located in /usr/es/sbin/cluster/wsm/cgi-bin/) must be
set with 4511 permissions.
See the README for details.
Care must be taken to limit access to this executable. WebSMIT allows the user to
dictate the list of users who are allowed to use the wsm_cmd_exec program using the
wsm_smit.conf REQUIRED_WEBSERVER_UID setting. The real user ID of the process
must match the UID of one of the users listed in wsm_smit.conf for the program to
carry out any of its functionality. The default value for REQUIRED_WEBSERVER_UID is
nobody.
By default, a Web server CGI process runs as user nobody, and by default, non-root
users cannot execute programs as user nobody. If your HTTP server configuration
executes CGI programs as a different user, it is important to ensure that the
REQUIRED_WEBSERVER_UID value matches the configuration of your Web server. It is
strongly recommended that the HTTP server be configured to run CGI programs as a
user who is not authorized to open a login shell (as with user nobody).
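Pulling these settings together, a wsm_smit.conf using the documented defaults might
contain entries like the following. The exact file syntax is an assumption (shown here as
shell-style assignments), the webadmin user is an example, and the README remains the
authoritative reference:

  AUTHORIZED_PORT=42267            # accept connections only on this port
  REDIRECT_TO_HTTPS=1              # push insecure connections over to https
  REQUIRE_AUTHENTICATION=1         # require AIX authentication
  ACCEPTED_USERS="root webadmin"   # webadmin is an example dedicated user
  SESSION_TIMEOUT=20               # minutes of inactivity before time-out
  REQUIRED_WEBSERVER_UID="nobody"  # must match the web server CGI user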

Log files
All operations of the WebSMIT interface are logged to the wsm_smit.log file and are
equivalent to the logging done with smitty -v. Script commands are also captured in
the wsm_smit.script log file.
WebSMIT log files are created by the CGI scripts using a relative path of <../logs>. If
you copy the CGI scripts to the default location for the IBM HTTP Server, the final path
to the logs is /usr/HTTPServer/logs.
The WebSMIT logs are not subject to manipulation by the HACMP Log Viewing and
Management SMIT panel. Also, just like smit.log and smit.script, the files grow
indefinitely.
The snap -e utility captures the WebSMIT log files if you leave them in the default
location (/usr/es/sbin/cluster/wsm/logs); but if you install WebSMIT somewhere else,
snap -e will not find them.

Customizing the WebSMIT status panel


wsm_clstat.cgi displays cluster information in the WebSMIT status panel. You can
customize wsm_clstat.cgi by changing the
/usr/es/sbin/cluster/wsm/cgi-bin/wsm_smit.conf file. This file allows you to configure
logging and the menus for the WebSMIT status panel.

Controlling which SMIT screens can be used


As mentioned earlier, WebSMIT will process just about any valid SMIT panel. You can
limit the set of panels that WebSMIT will process by configuring one or more of these
files.
- wsm_smit.allow
If this file exists on the server, it will be checked before any SMIT panel is
processed. If the SMIT panel ID (fast path) is not contained in the file, the http
request will be rejected. Use this file to limit WebSMIT to a specific set of SMIT
panels. A sample file is provided, which contains all the SMIT panel IDs for HACMP.
Simply rename this file to wsm_smit.allow if you want to limit access to just the
HACMP SMIT panels.

- wsm_smit.deny
Entering a SMIT panel ID in this file will cause WebSMIT to deny access to that
panel. If the same SMIT panel ID is stored in both the .allow and .deny files, .deny
processing takes precedence.
- wsm_smit.redirect
Instead of simply rejecting access to a specific page, you can redirect the user to a
different page. The default .redirect file has entries to redirect the user from specific
HACMP SMIT panels that are not supported by WebSMIT.
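As an illustration, each line in these files is simply a SMIT panel ID (fast path). The
IDs below are examples only, not a recommended configuration:

  # wsm_smit.deny -- reject requests for these SMIT panels (example IDs)
  crjfs
  chgsys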

Using the online documentation feature


To use the online documentation feature, you must install the file sets shown in the
visual.
See the README file for details.


Checkpoint

1. True or False?
   A star configuration is a good choice for your non-IP networks.
2. True or False?
   Using DARE, you can change from IPAT via aliasing to IPAT via replacement
   without stopping the cluster.
3. True or False?
   RSCT will automatically update /etc/filesystems when using enhanced
   concurrent mode volume groups.
4. True or False?
   With HACMP V5.4, a resource group's priority override location can be
   cancelled by selecting a destination node of Restore_Node_Priority_Order.
5. You want to create an Enhanced Concurrent Mode Volume Group that will be
   used in a Resource Group that will have an Online on Home Node Startup
   policy. Which C-SPOC menu should you use?
   a. HACMP Logical Volume Management
   b. HACMP Concurrent Logical Volume Management
6. You want to add a logical volume to the volume group you created in the
   question above. Which C-SPOC menu should you use?
   a. HACMP Logical Volume Management
   b. HACMP Concurrent Logical Volume Management
Figure 7-83. Checkpoint

Notes:


Unit summary

Key points from this unit:
- Implementing procedures for change management is a critical part of
  administering an HACMP cluster
- C-SPOC provides facilities for performing common cluster-wide
  administration tasks from any node within the cluster:
  - Perform routine administrative changes
  - Start and stop cluster services
  - Perform resource group move operations
- The SMIT Standard and Extended menus are used to make topology
  and resource group changes
- The Dynamic Automatic Reconfiguration Event facility (DARE)
  provides the mechanism to make changes to cluster topology and
  resources without stopping the cluster
- The Cluster Snapshot facility allows the user to save and restore a
  cluster configuration
- WebSMIT provides access to HACMP SMIT menus from any system
  with a Web browser
Figure 7-84. Unit summary


Unit 8. Events
What this unit is about
This unit describes the event process in HACMP.

What you should be able to do


After completing this unit, you should be able to:
Describe what is meant by the term event
Describe the sequence of events when:
- The first node starts in a cluster
- A new node joins an existing cluster
- A node leaves a cluster voluntarily
Explain what happens when HACMP processes an event
Describe how to customize the event flow
State how to monitor other devices

How you will check your progress


Accountability:
Checkpoint
Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1:
Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html
HACMP manuals


Unit objectives

After completing this unit, you should be able to:
- Describe what an HACMP event is
- Describe the sequence of events when:
  - The first node starts in a cluster
  - A new node joins an existing cluster
  - A node leaves a cluster voluntarily
- Explain what happens when HACMP processes an event
- Describe how to customize the event flow
- State how to monitor other devices
Figure 8-1. Unit objectives

Notes:


8.1 HACMP events


Topic 1 objectives: HACMP events

After completing this topic, you should be able to:
- Describe what an HACMP event is
- Explain what happens when HACMP processes an event
- Describe the sequence of events when:
  - The first node starts in a cluster
  - A new node joins an existing cluster
  - A node leaves a cluster voluntarily

Figure 8-2. Topic 1 objectives: HACMP events

Notes:


What is an HACMP event?

An HACMP event is an incident of interest to HACMP:
- A node joins the cluster
- A node crashes
- A NIC fails
- A NIC recovers
- Cluster administrator requests a resource group move
- Cluster administrator requests a configuration change (synchronization)

An HACMP event script is a script invoked by a recovery program to perform the
recovery function required:
- node_up
- node_down
- fail_interface
- join_interface
- rg_move
- reconfig_topology_start
Figure 8-3. What is an HACMP event?

Notes:
What the term HACMP event means
The term HACMP event has two contexts:
- An incident that is of interest to the cluster, such as the failure of a node or the
recovery of a NIC
- A script that is used by HACMP to actually deal with one of these incidents
Unfortunately, it is not all that uncommon for the word script to be left off in a discussion
of event scripts. Fortunately, which meaning is appropriate is almost certainly obvious
from the context of the discussion.


HACMP basic event flow


(The visual shows the flow: RSCT Topology Services/ES and Group Services/ES feed the
HACMP Cluster Manager, which uses the HACMP Rules ODM to select recovery programs;
a recovery program then runs one or more recovery commands, that is, the event scripts.)

Excerpt from the HACMP Rules ODM entry for node up:

  # ## Beginning of Event Definition Node Up ###
  TE_JOIN_NODE
  0
  /usr/sbin/cluster/events/node_up.rp
  2
  0
  # 6) Resource variable only used for event manager events
  # 7) Instance vector, only used for event manager events
Figure 8-4. HACMP basic event flow

Notes:
How an event script is triggered
Most HACMP events result from the detection and diagnostic capabilities of RSCT's
Topology Services component. They arrive at the Cluster Manager, which then uses
recovery programs to determine which event scripts to call to actually deal with the
event. The coordination and sequencing of the recovery programs is handled
by the Cluster Manager working with RSCT Group Services. The rules for how these
recovery programs should be coordinated and sequenced are described in the HACMP
Rules ODM file.
The RMC subsystem is used for implementing User-defined Events, Application
Monitoring, Dynamic Node Priority, and DLPAR. Dynamic Node Priority is one of the
fallover policies and DLPAR refers to the Dynamic LPAR capability of HACMP.


Recovery programs

cluster_notify.rp
external_resource_state_change.rp
external_resource_state_change_complete.rp
fail_interface.rp
fail_standby.rp
join_interface.rp
join_standby.rp
migrate.rp
network_down.rp
network_up.rp
node_down.rp
node_down_dependency.rp
node_down_dependency_complete.rp
node_up.rp
node_up_dependency.rp
node_up_dependency_complete.rp
reconfig_configuration.rp
reconfig_configuration_dependency_acquire.rp
reconfig_configuration_dependency_complete.rp
reconfig_configuration_dependency_release.rp
reconfig_resource.rp
reconfig_topology.rp
resource_state_change.rp
resource_state_change_complete.rp
rg_move.rp
rg_offline.rp
rg_online.rp
server_down.rp
server_restart.rp
site_down.rp
site_isolation.rp
site_merge.rp
site_up.rp
swap_adapter.rp

Figure 8-5. Recovery programs

Notes:
Recovery programs
This visual lists the recovery programs that are used by the resource manager
component of the Cluster Manager Services to determine what event scripts to invoke.
These form the first step in processing an event.


Recovery program example


site_up.rp

  # This file contains the HACMP/ES recovery program for
  # site_up events
  #
  # format:
  # relationship   command to run   expected status   NULL
  #
  other "site_up" 0 NULL
  #
  barrier
  #
  event "site_up" 0 NULL
  #
  barrier
  #
  all "site_up_complete" 0 NULL

Figure 8-6. Recovery program example

Notes:
Format of a recovery program
The first type of line specifies where the event script should run (the relationship:
event means the node where the event occurred, other means all nodes other than the
event node, and all means every node), the name of the script to run, and the expected
exit status.
The second type of line is the word barrier. This is a synchronization point, handled by
Group Services, so that all nodes can complete their processing before the next step of
this recovery program.


Event scripts

Called by the cluster manager:
  site_up, site_up_complete, site_down, site_down_complete
  site_merge, site_merge_complete
  node_up, node_up_complete, node_down, node_down_complete
  network_up, network_up_complete, network_down, network_down_complete
  swap_adapter, swap_adapter_complete
  swap_address, swap_address_complete
  fail_standby, join_standby
  fail_interface, join_interface
  rg_move, rg_move_complete
  rg_online, rg_offline
  event_error
  config_too_long
  reconfig_topology_start, reconfig_topology_complete
  reconfig_resource_release, reconfig_resource_acquire, reconfig_resource_complete
  reconfig_configuration_dependency_acquire
  reconfig_configuration_dependency_complete
  reconfig_configuration_dependency_release
  node_up_dependency, node_up_dependency_complete
  node_down_dependency, node_down_dependency_complete
  migrate, migrate_complete
  external_resource_state_change
  server_down, server_restart

Called by other events:
  node_up_local, node_up_remote
  node_down_local, node_down_remote
  node_up_local_complete, node_up_remote_complete
  node_down_local_complete, node_down_remote_complete
  acquire_aconn_service
  acquire_service_addr
  acquire_takeover_addr
  start_server, stop_server
  get_disk_vg_fs
  get_aconn_rs
  release_service_addr, release_takeover_addr
  release_vg_fs, release_aconn_rs
  swap_aconn_protocols
  releasing, acquiring
  rg_up, rg_down, rg_error
  rg_temp_error_state
  rg_acquiring_secondary
  rg_up_secondary
  rg_error_secondary
  resume_appmon
  suspend_appmon
Figure 8-7. Event scripts

Notes:
Event scripts
This is the list of HACMP events that are managed by HACMP.
The events in the first group are directly called by the cluster manager or
process_resources in response to unexpected happenings. The events in the second
group are invoked by primary or other secondary events on an as-needed basis.
Each of these events can have an optional notify command, one or more pre-event
scripts, one or more post-event scripts and an optional recovery command associated
with it.


process_resources
(The visual shows the loop: the Cluster Manager starts process_resources, which calls
clrgpa to ask the RGPA for the next task; for each task it updates the Resource Manager
via cl_RMupdate, then asks for the next task, exiting when none remain.)
Figure 8-8. process_resources

Notes:
Script process_resources
The script process_resources handles the calls from event scripts to the Resource
Group Policy Administrator (RGPA):
- It loops through each returned task (JOB_TYPE):
  - Calls cl_RMupdate as required to update the Cluster Manager with the status
    change
  - Processes the next JOB_TYPE that the RGPA passes (via clrgpa) until all tasks
    in the list are completed
- There is one JOB_TYPE for each resource type. Some can be run once per event,
  which is useful for parallel processing of resources.
This is meant to show you that the process_resources script is responsible for
interacting with the event scripts. You will see the JOB_TYPE in the /tmp/hacmp.out log
file.

First node starts cluster services

(The visual: starting cluster services on the first node causes clstrmgrES, via the
event manager, to run node_up and then node_up_complete.)

1) node_up
   process_resources (NONE)
   for each RG:
     process_resources (ACQUIRE)
     process_resources (SERVICE_LABELS)
       acquire_service_addr
         acquire_aconn_service en0 net_ether_01
     process_resources (DISKS)
     process_resources (VGS)
     process_resources (LOGREDO)
     process_resources (FILESYSTEMS)
     process_resources (SYNC_VGS)
     process_resources (TELINIT)
   process_resources (NONE)
   < Event Summary >

2) node_up_complete
   for each RG:
     process_resources (APPLICATIONS)
       start_server app01
     process_resources (ONLINE)
   process_resources (NONE)
   < Event Summary >
Figure 8-9. First node starts cluster services

Notes:
Startup processing
Implicit in this example is the assumption that there is actually a resource group to start
on the node. If there are no resource groups to start on the node, then node_up_local
and node_up_local_complete do very little processing at all.


Another node joins the cluster

(The visual: clstrmgrES is already running on node 1 when cluster services are started
on node 2; the two cluster managers exchange messages as the events run.)

1) node_up (existing node)
   process_resources (NONE)
   or
   process_resources (release)

2) node_up (joining node)
   Same sequence as node 1 up (previous visual)

3) node_up_complete (existing node)
   for each RG:
     process_resources (SYNC_VGS)
   process_resources (NONE)
   < Event Summary >

4) node_up_complete (joining node)
   for each RG:
     process_resources (APPLICATIONS)
       start_server app02
     process_resources (ONLINE)
   process_resources (NONE)
   < Event Summary >

Figure 8-10. Another node joins the cluster


Notes:
Another node joins the cluster
When another node starts up, it must first join the cluster. After that, the determination is
made whether to move an already active resource group to the new node (this is the
assumption in this visual). If so, node_up processing on the existing node (step 1) must
inactivate the resource group before node_up processing on the new node (step 2) can
acquire and activate it.


Node leaves the cluster (stopped)

(The visual: cluster services are stopped on one node; its resource group is taken over
by the surviving node, whose cluster manager is informed via messages.)

On the stopping node:
1) node_down takeover
   for each RG:
     process_resources (RELEASE)
     process_resources (APPLICATIONS)
       stop_server app02
     process_resources (FILESYSTEMS)
     process_resources (VGS)
     process_resources (SERVICE_LABELS)
       release_service_addr
   < Event Summary >
2) node_down_complete
   process_resources (OFFLINE)
   process_resources (SYNC_VGS)
   < Event Summary >

On the surviving node:
3) node_down takeover
   Same sequence as node up
4) node_down_complete
   for each RG:
     process_resources (APPLICATIONS)
       start_server app02
     process_resources (ONLINE)
   < Event Summary >

Figure 8-11. Node leaves the cluster (stopped)

Notes:
Node down processing normal with takeover
Implicit in this example is the assumption that there is actually a resource group on the
departing node which must be moved to one of the remaining nodes.

Node failure
The situation is only slightly different if the departing node fails suddenly. Because it
is not in a position to run any events, the calls to process_resources listed for the
departing node do not get run.


Let's review

1. Which of the following are examples of primary HACMP events (select all that
   apply)?
   a. node_up
   b. node_up_local
   c. node_up_complete
   d. start_server
   e. rg_up
2. When a node joins an existing cluster, what is the correct sequence for these
   events?
   a. node_up on new node, node_up on existing node, node_up_complete on new
      node, node_up_complete on existing node
   b. node_up on existing node, node_up on new node, node_up_complete on new
      node, node_up_complete on existing node
   c. node_up on new node, node_up on existing node, node_up_complete on
      existing node, node_up_complete on new node
   d. node_up on existing node, node_up on new node, node_up_complete on
      existing node, node_up_complete on new node

Figure 8-12. Let's review

Notes:


8.2 Cluster customization


Topic 2 objectives: Event customization

After completing this topic, you should be able to:
- Describe how to customize the event flow
- State how to handle devices outside the control of HACMP

Figure 8-13. Topic 2 objectives: Event customization

Notes:
In this topic, we examine how to customize events in HACMP.


Event processing customization


(The visual is a flowchart of event processing by the event manager, clcallev, using the
HACMP ODM classes: the Notify Command runs first, then each Pre-Event Script (1..n),
then the HACMP event script itself. If the event script returns RC=0, processing moves
on; if not, and the recovery counter is greater than 0, the Recovery Command runs and
the event script is retried; if the counter is exhausted, an event error is declared.
Finally, each Post-Event Script (1..n) runs, followed by the Notify Command again.)
Figure 8-14. Event processing customization

Notes:
Event processing without customization
When a decision is made to run a particular HACMP event script on a particular node,
the above event processing logic takes control. If no event-related cluster customization
has been done on the cluster, then the HACMP Event itself (in other words, the HACMP
Event Script), is run and whether it works is noted. (If it worked, then everyone is happy;
if not, then you had better go look at the Problem Determination unit, which is coming up
later in the week.)
Events are logged in the /var/hacmp/adm/cluster.log file and the
/<log_dir>/hacmp.out file.

Event processing with customization


The rather simple procedure described in the last paragraph can be modified by the
cluster configurator or administrator to deal with cluster requirements, environmental
Copyright IBM Corp. 1998, 2008
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.

Unit 8. Events

8-17

Student Notebook

issues, or both, beyond the normal scope of HACMP. These customization


opportunities are as follows:
- Each HACMP event can have a single optional Notify Command associated with it.
This command is run once at the very start of processing the event and once again
right as the last step in processing the event. This is the oldest form of HACMP
event-related customization. It is not used all that often anymore because better
mechanisms now exist. It is still supported in order to avoid breaking long existing
clusters that rely upon it.
- Each HACMP event can have zero or more pre-event scripts associated with it.
Each of these pre-event scripts are run after the optional notify command (if it has
been configured). When all of the pre-event scripts have been executed, the
HACMP event script itself is executed.
- A recovery command can be specified for each HACMP event. This recovery
command is run if the HACMP event script fails. When the recovery command
completes, the HACMP event script is run again. Associated with each recovery
command is a count of the maximum number of times that the HACMP event script
might fail in a single overall attempt to run the event before HACMP should declare
the failure as not fixable by the recovery command.
- Each HACMP event can have zero or more post-event scripts associated with it.
Each of these are run after the HACMP event script itself completes and before the
optional notify command.

Location of event processing scripts


The HACMP event scripts are stored in /usr/es/sbin/cluster/events. Note that there are
.rp files (recovery programs) that call the event scripts. The event scripts might then
call other event scripts.

What does Error mean in the visual?


The cluster manager expects the event scripts to complete successfully. If not, it retries
the command. If the number of retries expires, the cluster manager waits. It won't go on
until told to go on. But how do you know that Error has occurred? We'll talk more about
that in later units and much more in the HACMP System Administration II class. The
simplest way you'll know that Error has occurred is an error message in the
/tmp/hacmp.out log file that indicates that the cluster has been in ...reconfig too long....
The message is given that way because it can vary depending on the error. A timer is set
prior to the start of event processing. If the timer pops, event processing didn't finish
successfully and the error message is placed in the /tmp/hacmp.out log file. When you
see that, you must begin troubleshooting, fixing the problem and instructing
the cluster manager to continue (Recover From Script Failure option in the Problem
Determination SMIT panel). But this is a little ahead of the course. You'll see more in the
following units.
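One simple way to spot this condition, assuming the default log location referred to
above:

  # Look for the "cluster has been in reconfig too long" style messages
  grep -i "too long" /tmp/hacmp.out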

Adding/changing cluster events (1 of 3)


Extended Event Configuration

Move cursor to desired item and press Enter.

  Configure Pre/Post-Event Commands
  Change/Show Pre-Defined HACMP Events
  Configure User-Defined Events
  Configure Pager Notification Methods
  Change/Show Time Until Warning

F1=Help     F2=Refresh    F3=Cancel    F8=Image
F9=Shell    F10=Exit      Enter=Do

Figure 8-15. Adding/changing cluster events (1 of 3)

Notes:
Path to smit menu
smitty hacmp -> Extended Configuration -> Extended Event Configuration

Pre- and post-event scripts
To customize the event processing with pre- and post-event scripts, you must first create
a custom event object that points to your script. We start here with the SMIT menu that
manages custom cluster events.


Adding/changing cluster events (2 of 3)


Add a Custom Cluster Event

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                            [Entry Fields]
* Cluster Event Name                        [stop_printq]
* Cluster Event Description                 [stop the print queues]
* Cluster Event Script Filename             [/usr/local/cluster/events/stop_printq]

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do
Figure 8-16. Adding/changing cluster events (2 of 3)

Notes:
Path to smit menu
smitty hacmp -> Extended Configuration -> Extended Event Configuration ->
Configure Pre/Post-Event Commands -> Add a Custom Cluster Event

Example of creating pre- and post-custom cluster event object


In this example, we add a new custom cluster event called stop_printq. This event runs
a script of our own creation, which in this case resides in /usr/local/cluster/events
(a directory created for this purpose by the HACMP administrator). The custom event
has a description that allows us to identify what the script does when, six months down
the line, we have forgotten why we wrote it or the system administrator for the
cluster has changed.
Custom events are given a name, rather than referenced directly by the script path. This
makes it easy to reuse the same custom event script for multiple HACMP events.

Script considerations
HACMP does not develop the script content for you, nor does it synchronize the
script content between cluster nodes (indeed, the content can be different on each
node). The only requirements that HACMP imposes are that the script must exist on
each node in a local (non-shared) location, be executable, and have the same path and
name on every node.
Of course, an additional requirement is that the script perform as required under all
circumstances!
In HACMP 5.2 and later there is a file collections feature if you wish to have your
changes kept in sync.


Adding/changing cluster events (3 of 3)


Change/Show Cluster Events

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                            [Entry Fields]
  Event Name                                node_down
  Description                               Script run after the >
* Event Command                             [/usr/es/sbin/cluster/>
  Notify Command                            []
  Pre-event Command                         []                     +
  Post-event Command                        [stop_printq]          +
  Recovery Command                          []
* Recovery Counter                          [0]                    #

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do
Figure 8-17. Adding/changing cluster events (3 of 3)

Notes:
The path to the menu
smitty hacmp -> Extended Configuration -> Extended Event Configuration ->
Change/Show Pre-Defined HACMP Events -> node_down

Associating a custom cluster event with the node_down event


Notice that in the menu path you choose Pre-Defined to see the list of standard
HACMP events. On this visual, we see our new custom event object, stop_printq,
being added as a post event to the HACMP event script node_down. Because we are
simply referencing the script by its name, we can run more than one pre- and
post-event script by stringing their names together in the pre- or post-event script field.
Note that for the commands (other than pre and post) on this menu you need not create
a custom object first--you would come directly to this menu.


Recovery commands
If an event script fails to exit 0, Recovery Commands can be executed.

(The visual repeats the recovery portion of the flowchart: if the HACMP event script
does not return RC=0 and the recovery counter is greater than 0, the Recovery Command
runs and the event script is retried; once the counter reaches 0, the cluster manager
waits, logging the error in /<log_dir>/hacmp.out.)
Figure 8-18. Recovery commands

Notes:
Recovery command event customization
Recovery commands are another customization that can be made to recover from the
failure of an HACMP event script.


Adding/changing recovery commands


Change/Show Cluster Events

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                            [Entry Fields]
  Event Name                                start_server
  Description                               Script run to start a>
* Event Command                             [/usr/es/sbin/cluster/>
  Notify Command                            []
  Pre-event Command                         []                     +
  Post-event Command                        []                     +
  Recovery Command                          [/usr/local/bin/recover]
  Recovery Counter                          [3]                    #

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Reset    F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do
Figure 8-19. Adding/changing recovery commands

Notes:
Recovery command menu
Here we see an example of a recovery command being added to the start_server event
script. This can handle an incorrect application start up.
Recovery commands do not execute unless the recovery counter is > 0.
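A hedged sketch of what a recovery command such as /usr/local/bin/recover might do
for a failed application start (everything below apart from the script name shown in
the visual is an assumption):

  #!/bin/ksh
  # /usr/local/bin/recover -- recovery command for the start_server event
  # Clean up whatever made the application start fail, then exit 0 so
  # that HACMP retries the event script (up to the Recovery Counter limit).

  rm -f /app01/app.pid    # remove a stale pid file (example path)
  exit 0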


Points to note
The execute bit must be set on all pre-, post-, notify, and
recovery scripts.
Synchronization does not copy pre- and post-event script
content from one node to another.
You need to copy all your pre- and post-event scripts to all
nodes.
Your pre- and post-event scripts must handle non-zero exit
codes.
All scripts must declare the shell they will run in, such as:
#!/bin/ksh
Test your changes very carefully because a mistake is likely to
cause a fallover to abort.

Figure 8-20. Points to note

Notes:
Test your changes
Without a doubt, the most important point to note is the last one: test your changes
very carefully. An error in a pre-, post-, or recovery script/command generally becomes
apparent during a fallover; in other words, at a point in time when you can least afford
it to happen!

Use the C-SPOC file collection facility

In HACMP 5.2 and later, you can implement the file collections feature to synchronize
your scripts across the cluster. This facility is covered in more depth in the course
HACMP System Administration II: Administration and Problem Determination (AU61).
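Until file collections are in place, a hedged way to push the scripts by hand is a simple loop; the node names, paths, and use of rcp/rsh are assumptions (substitute your site's remote copy and execution tools):

#!/bin/ksh
# Copy an event script to the cluster nodes and set the execute bit
for node in uk usa; do
    rcp /usr/local/bin/stop_printq ${node}:/usr/local/bin/stop_printq
    rsh ${node} chmod 755 /usr/local/bin/stop_printq
done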


RG_move event and selective fallover

- Selective fallover allows fallover for a resource group
- The Cluster Manager uses the rg_move event for selective fallover
- C-SPOC can also be used to cause an rg_move event
- Selective fallover can happen for the following failures:
  - NIC failures
  - Applications
  - Communication links
  - Volume groups
- Selective fallover can be customized by resource group
Figure 8-21. RG_Move event and selective fallover

Notes:
Selective fallover logic
In general, the following scenarios and utilities can lead HACMP to selectively move an
affected resource group, using the selective fallover logic:
- In cases of service IP label failures, Topology Services, which monitors the health of
  the service IP labels, starts a network_down event. This causes the selective fallover
  of the affected resource group.
- In cases of application failures, the application monitor informs the Cluster Manager
  about the failure of the application, which causes the selective fallover of the
  affected resource group.
- In cases of WAN connection failures, the Cluster Manager monitors the status of the
  SNA links and captures some of the types of SNA link failures. If an SNA link failure
  is detected, the selective fallover utility moves the affected resource group.


- In cases of volume group failures, the occurrence of the AIX error label
  LVM_SA_QUORCLOSE indicates that a volume group went offline on a node in the
  cluster. This causes the selective fallover of the affected resource group.
Remember that in each case where HACMP uses selective fallover, an rg_move event is
launched in response to the resource failure; seeing an rg_move event run in the
cluster is therefore how you recognize that selective fallover has been used.


Customizing event flow for other devices

HACMP provides smit screens for managing the AIX error logging facility's
error notification mechanism, covering devices such as:
- Disks
- Disk adapters
- Disk subsystems
- CPU
- Other shared devices

Figure 8-22. Customizing event flow for other devices

Notes:
Dealing with other failures detected by AIX
Remember that, by default, HACMP natively monitors only nodes, networks, and network
adapters. If you wish to monitor other devices, you can use error notification methods.
Error notification is a facility of AIX that allows the administrator to map an entry in
the AIX error log to a command to execute.
HACMP provides a smit menu to simplify the process.
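For reference, the same mapping can also be created outside smit with the standard AIX ODM tools. This is a minimal sketch; the notification name, error label, and method path are illustrative only:

# Create a stanza file and add it to the errnotify ODM class
cat > /tmp/notify.add <<'EOF'
errnotify:
        en_name = "my_disk_notify"
        en_persistenceflg = 1
        en_label = "DISK_ERR1"
        en_method = "/usr/local/bin/disk_alert $1"
EOF
odmadd /tmp/notify.add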


Error notification within smit

                        HACMP Error Notification

  Move cursor to desired item and press Enter.

    Configure Automatic Error Notification
    Add a Notify Method
    Change/Show a Notify Method
    Remove a Notify Method
    Emulate Error Log Entry

  F1=Help     F2=Refresh    F3=Cancel    F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 8-23. Error notification within smit

Notes:
Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification

What HACMP provides


This is the smit menu that HACMP provides for managing error notification methods.
- HACMP provides error notification methods that you can add by selecting the option
Configure Automatic Error Notification above. However, in HACMP 5.3 and later,
these Automatic Error Notification methods are automatically added during
verification and synchronization.
- HACMP provides Add a Notify Method to handle any AIX error label that might not
be detected by HACMP.
- Finally, HACMP provides a tool to Emulate an Error Log Entry.
We will look at these options in this and the subsequent visuals.

Configuring automatic error notification

                  Configure Automatic Error Notification

  Move cursor to desired item and press Enter.

    List Error Notify Methods for Cluster Resources
    Add Error Notify Methods for Cluster Resources
    Remove Error Notify Methods for Cluster Resources

  F1=Help     F2=Refresh    F3=Cancel    F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 8-24. Configuring automatic error notification

Notes:
Removing automatic error notify methods
In HACMP 5.3 and later, the automatic error notify methods are added automatically;
therefore, you can come here to remove them, but doing so is not recommended. Because
they are re-added during verification and synchronization, you would only have to come
back here and remove them again.

For HACMP nodes with only virtualized I/O resources

The output you will receive when running Automatic Error Notification is as follows:

  rt1s1vlp5: Disk type vscsi is unknown to HACMP.
  rt1s1vlp5: Disk type vscsi is unknown to HACMP.
  rt1s1vlp6: Disk type vscsi is unknown to HACMP.
  rt1s1vlp6: Disk type vscsi is unknown to HACMP.

This output highlights the fact that the virtual SCSI adapters are not recognized.

Listing automatic error notification (non-virtual HACMP nodes)

                              COMMAND STATUS

  Command: OK            stdout: yes            stderr: no

  Before command completion, additional instructions may appear below.

  [TOP]
  bondar:
  bondar: HACMP Resource    Error Notify Method
  bondar: hdisk0            /usr/es/sbin/cluster/diag/cl_failover
  bondar: scsi0             /usr/es/sbin/cluster/diag/cl_failover
  bondar: hdisk11           /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk5            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk9            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk7            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: ssa0              /usr/es/sbin/cluster/diag/cl_logerror
  hudson:
  hudson: HACMP Resource    Error Notify Method
  [MORE...9]

  F1=Help      F2=Refresh    F3=Cancel    F6=Command
  F8=Image     F9=Shell      F10=Exit     /=Find
  n=Find Next

Figure 8-25. Listing automatic error notification (non-virtual HACMP nodes)

Notes:
Listing the automatic event notification methods
Here's the full output from this screen for a sample cluster:

  bondar:
  bondar: HACMP Resource    Error Notify Method
  bondar: hdisk0            /usr/es/sbin/cluster/diag/cl_failover
  bondar: scsi0             /usr/es/sbin/cluster/diag/cl_failover
  bondar: hdisk11           /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk5            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk9            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: hdisk7            /usr/es/sbin/cluster/diag/cl_logerror
  bondar: ssa0              /usr/es/sbin/cluster/diag/cl_logerror
  hudson:
  hudson: HACMP Resource    Error Notify Method
  hudson: hdisk0            /usr/es/sbin/cluster/diag/cl_failover
  hudson: scsi0             /usr/es/sbin/cluster/diag/cl_failover
  hudson: hdisk10           /usr/es/sbin/cluster/diag/cl_logerror
  hudson: hdisk4            /usr/es/sbin/cluster/diag/cl_logerror
  hudson: hdisk8            /usr/es/sbin/cluster/diag/cl_logerror
  hudson: hdisk6            /usr/es/sbin/cluster/diag/cl_logerror
  hudson: ssa0              /usr/es/sbin/cluster/diag/cl_logerror


Listing automatic error notification (virtual HACMP nodes)

                              COMMAND STATUS

  Command: OK            stdout: yes            stderr: no

  Before command completion, additional instructions may appear below.

  rt1s1vlp5:
  rt1s1vlp5: HACMP Resource    Error Notify Method
  rt1s1vlp5: hdisk0            /usr/es/sbin/cluster/diag/cl_failover
  rt1s1vlp5: hdisk1            /usr/es/sbin/cluster/diag/cl_logerror
  rt1s1vlp6:
  rt1s1vlp6: HACMP Resource    Error Notify Method
  rt1s1vlp6: hdisk0            /usr/es/sbin/cluster/diag/cl_failover
  rt1s1vlp6: hdisk1            /usr/es/sbin/cluster/diag/cl_logerror

  F1=Help      F2=Refresh    F3=Cancel    F6=Command
  F8=Image     F9=Shell      F10=Exit     /=Find
  n=Find Next

Figure 8-26. Listing automatic error notification (virtual HACMP nodes)

Notes:
We already saw that there were errors when running the automatic error notification setup
on HACMP nodes that have only virtual I/O resources. Here we see that it will cover the
disks, but the adapters are not protected. Should you cover them? Probably not, because
they're virtual.


Adding error notification methods

                            Add a Notify Method

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                            [Entry Fields]
  * Notification Object Name                []
  * Persist across system restart?          No           +
    Process ID for use by Notify Method     []           +#
    Select Error Class                      None         +
    Select Error Type                       None         +
    Match Alertable errors?                 None         +
    Select Error Label                      []           +
    Resource Name                           [All]        +
    Resource Class                          [All]        +
    Resource Type                           [All]        +
  * Notify Method                           []

  F1=Help     F2=Refresh    F3=Cancel    F4=List
  F5=Reset    F6=Command    F7=Edit      F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 8-27. Adding error notification methods

Notes:
Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification ->
Add a Notify Method

The error notify stanza

This is an example of a stanza from /etc/objrepos/errnotify. Notice that the screen
above is designed to create a stanza like this; the last line (en_method) is the
command to execute.

errnotify:
        en_pid = 0
        en_name = ""
        en_persistenceflg = 1
        en_label = ""
        en_crcid = 849857919
        en_class = ""
        en_type = ""
        en_alertflg = ""
        en_resource = ""
        en_rtype = ""
        en_rclass = ""
        en_symptom = ""
        en_err64 = ""
        en_dup = ""
        en_method = "/usr/lib/ras/notifymeth -l $1 -t CHECKSTOP"

Parameters passed to the error notify method

One or more error notification methods can be added for every error that can appear in
the AIX error log.
The $ parameters that can be used with the en_method are:

  $1  Sequence Number
  $2  Error ID
  $3  Error Class
  $4  Error Type
  $5  Alert Flag
  $6  Resource Name
  $7  Resource Type
  $8  Resource Class
  $9  Error Label
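As a sketch, a notify method that uses these parameters might look like the following; the script path, log file, and mail recipient are assumptions:

#!/bin/ksh
# /usr/local/bin/disk_alert -- hypothetical error notification method.
# Invoked by the error daemon with the $1..$9 parameters listed above.
SEQ=$1       # sequence number of the error log entry
LABEL=$9     # error label that matched this notification object

print "$(date) error $LABEL (seq $SEQ) on resource $6" >> /tmp/errnotify.log

# Mail the full error report for this entry to the administrator
errpt -a -l $SEQ | mail -s "Error $LABEL on $(hostname)" root
exit 0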


Emulating errors (1 of 2)

                        HACMP Error Notification
  +------------------------------------------------------------------+
  |                      Error Label to Emulate                       |
  |                                                                   |
  |  Move cursor to desired item and press Enter.                     |
  |                                                                   |
  |  [TOP]                                                            |
  |    LVM_SA_QUORCLOSE      rootvg                                   |
  |    LVM_SA_QUORCLOSE      xwebvg                                   |
  |    FIRMWARE_EVENT        diagela_FIRM                             |
  |    PLAT_DUMP_ERR         diagela_PDE                              |
  |    SERVICE_EVENT         diagela_SE                               |
  |    INTRPPC_ERR           diagela_SPUR                             |
  |    FCP_ARRAY_ERR6        fcparray_err                             |
  |    FCS_ERR10             fcs_err10                                |
  |    DISK_ARRAY_ERR2       ha_hdisk0_0                              |
  |    DISK_ARRAY_ERR3       ha_hdisk0_1                              |
  |    DISK_ARRAY_ERR5       ha_hdisk0_2                              |
  |  [MORE...12]                                                      |
  |                                                                   |
  |  F1=Help     F2=Refresh     F3=Cancel                             |
  |  F8=Image    F10=Exit       Enter=Do                              |
  |  /=Find      n=Find Next                                          |
  +------------------------------------------------------------------+

Note that LVM_SA_QUORCLOSE entries exist only for mirrored volume groups.

Figure 8-28. Emulating errors (1 of 2)

Notes:
Menu path
smitty hacmp -> Problem Determination Tools -> HACMP Error Notification ->
Emulate Error Log Entry

Emulating an error log entry

HACMP provides a menu that allows you to emulate an error log entry. This screen shows
part of the list of error labels that is provided when Emulate Error Log Entry is
selected in the HACMP Error Notification menu (the menu shown a few foils back).
We are going to generate an emulated loss of quorum on the xwebvg volume group. This
will generate an example of the error LVM_SA_QUORCLOSE in the AIX error log and run the
script associated with the error notification method quorum_lost.
This mechanism for emulating errors allows you to do basic testing of an error
notification method. If it is at all possible without actually damaging the equipment,
it would be best to cause the actual hardware error that is of concern, to verify that
the error notification method has been associated with the correct AIX error label.
Note that the emulated error does not have the same resource name as an actual record,
but otherwise passes the same arguments to the method as the actual one.


Emulating errors (2 of 2)

                        Emulate Error Log Entry

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                      [Entry Fields]
    Error Label Name                  LVM_SA_QUORCLOSE
    Notification Object Name          xwebvg
    Notify Method                     /usr/es/sbin/cluster/>

  F1=Help     F2=Refresh    F3=Cancel    F4=List
  F5=Reset    F6=Command    F7=Edit      F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 8-29. Emulating errors (2 of 2)

Notes:
Kicking off the emulation
Use this screen to start the emulation process.


What will this cause?

# errpt -a
---------------------------------------------------------------------------
LABEL:            LVM_SA_QUORCLOSE
IDENTIFIER:       CAD234BE

Date/Time:        Fri Sep 19 13:58:05 MDT
Sequence Number:  469
Machine Id:       000841564C00
Node Id:          bondar
Class:            H
Type:             UNKN
Resource Name:    LVDD
Resource Class:   NONE
Resource Type:    NONE
Location:

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
00C9 0000
QUORUM COUNT
0
ACTIVE COUNT
0
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------

... and a fallover of the xwebgroup resource group to uk.


Figure 8-30. What will this cause?

Notes:
Example emulated error record
Here is an example of the output produced by running such an emulated event. The top of
the screen is the truncated output of the error template associated with the
LVM_SA_QUORCLOSE error, which gives a brief indication of the nature of the error.
The output of an emulation will have the value Resource Name: EMULATE. If your notify
method depends on this field, that is a problem for testing; you might have to change
the command to execute while testing via emulation.
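One hedged way to keep such a method testable is to treat the EMULATE value specially. This fragment follows the $1..$9 parameter layout shown earlier, and the substitute resource name is purely illustrative:

#!/bin/ksh
# Fragment of a notify method that tolerates emulated error records
RESOURCE=$6
if [[ "$RESOURCE" = "EMULATE" ]]; then
    # Emulated entry: substitute the resource we intend to test against
    RESOURCE=xwebvg
fi
# ... continue, using $RESOURCE rather than $6 directly ...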


Checkpoint

1. Which of the following runs if an HACMP event script fails?
   (select all that apply)
   a. Pre-event scripts
   b. Post-event scripts
   c. Error notification methods
   d. Recovery commands
   e. Notify methods

2. How does an event script get started?
   a. Manually by an administrator
   b. Called by the SNMP SMUX (clsmuxpd)
   c. Called by the cluster manager using a recovery program
   d. Called by the topology services daemon

3. True or False?
   Pre-event scripts are automatically synchronized.

4. True or False?
   Writing error notification methods is a normal part of configuring a cluster.
Figure 8-31. Checkpoint

Notes:


Unit summary

Having completed this unit, you should be able to:
- Describe what an HACMP event is
- Describe the sequence of events when:
  - The first node starts in a cluster
  - A new node joins an existing cluster
  - A node leaves a cluster voluntarily
- Explain what happens when HACMP processes an event
- Describe how to customize the event flow
- State how to monitor other devices

Figure 8-32. Unit summary

Notes:


Unit 9. Integrating NFS into HACMP


What this unit is about
This unit covers the concepts of using Sun's Network File System
(NFS) in a highly available cluster. You learn how to configure NFS in
an HACMP environment for maximum availability.

What you should be able to do


After completing this unit, you should be able to:
Explain the concepts of NFS
Configure HACMP to support NFS
Discuss why Volume Group major numbers must be unique when
using NFS with HACMP
Outline the NFS configuration parameters for HACMP

How you will check your progress


Accountability:
Checkpoint
Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)


Unit objectives
After completing this unit, you should be able to:
Explain the concepts of NFS
Configure HACMP to support NFS
Discuss why Volume Group major numbers must be unique
when using NFS with HACMP
Outline the NFS configuration parameters for HACMP

Figure 9-1. Unit objectives

Notes:
Objectives
In this unit, we examine how NFS can be integrated into HACMP to provide a highly
available Network File System.


So, what is NFS?

The Network File System is a client/server application that lets a computer user
view, and optionally store and update, files on a remote computer as though they
were on the user's own computer.

(Diagram: an NFS server JFS-mounts a filesystem from shared_vg and exports it;
NFS clients mount it over the network, read-write or read-only; a system can act
as NFS client and server at the same time.)
Figure 9-2. So, what is NFS?

Notes:
NFS
NFS is a suite of protocols that allow file sharing across an IP network. An NFS server
is a provider of file service (that is, a file, a directory or a file system). An NFS client is a
recipient of a remote file service. A system can be both an NFS client and server at the
same time.


NFS background processes

- NFS uses TCP/IP and a number of background processes to allow clients to access
  disk resource on a remote server
- Configuration files are used on the client and server to specify export and
  mount options

(Diagram: the NFS server runs n x nfsd and mountd and reads /etc/exports; the NFS
client runs n x biod and reads /etc/filesystems; a combined NFS client and server
runs both sets of daemons.)
Figure 9-3. NFS background processes

Notes:
NFS processes
The NFS server uses a process called mountd to allow remote clients to mount a local
disk or CD resource across the network. One or more nfsd processes handle I/O on the
server side of the relationship.
The NFS client uses the mount command to establish a mount to a remote storage
resource which is offered for export by the NFS server. One or more block I/O
daemons, biod, run on the client to handle I/O on the client side.
The server maintains details of data resources offered to clients in the /etc/exports file.
Clients can automatically mount network file systems using the /etc/filesystems file.
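As a plain-AIX illustration (independent of HACMP), an export and the matching client mount might look like this; the path, options, and client names are placeholders:

# On the NFS server: an /etc/exports entry, activated with exportfs -a
/fsa -rw,root=clienta:clientb

# On an NFS client: mount the exported filesystem by hand
mount aservice:/fsa /a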


Combining NFS with HACMP

NFS exports can be made highly available by using the HACMP resource group to
specify NFS exports and mounts.

The A resource group specifies:
- aservice as a service IP label resource
- /fsa as a filesystem resource (by default as part of a volume group)
- /fsa as an NFS filesystem to export

(Diagram: node uk holds resource group A, mounts /fsa locally (# mount /fsa) and
exports it via the aservice label; node usa stands by. The client system runs
# mount aservice:/fsa /a and so sees /fsa as /a.)
Figure 9-4. Combining NFS with HACMP

Notes:
Combining NFS with HACMP
We can combine NFS with HACMP to achieve a highly available Network File System.
One node in the cluster mounts the disk resource locally and offers that disk resource
for export across the IP network. Clients optionally mount the disk resource. A second
node is configured to take over the NFS export in the event of node failure.
There is one unusual aspect to the above configuration, which should be discussed.
The HACMP cluster is exporting the /fsa file system via the aservice service IP label.
The client is mounting the aservice:/fsa file system on the local mount point /a. This
is somewhat unusual in the sense that client systems usually use a local mount point
which is the same as the NFS file system's name on the server.
In the configuration shown above, there is no particularly good reason why the client is
using a different mount point than /fsa and, in fact, the client is free to use whatever
mount point it wishes to use including, of course, /fsa. Why this example is using a
local mount point of /a will become clear shortly.

NFS fallover with HACMP

In this scenario, the resource group moves to the surviving node in the cluster,
which exports /fsa. Clients see "NFS server not responding" during fallover.

The A resource group specifies:
- aservice as a service IP label resource
- /fsa as a filesystem resource (by default as part of a volume group)
- /fsa as an NFS filesystem to export

(Diagram: node uk has failed; node usa now holds the aservice label, mounts /fsa
(# mount /fsa), and exports it. The client system still sees /fsa as /a via
# mount aservice:/fsa /a.)
Figure 9-5. NFS fallover with HACMP

Notes:
Fallover
If the node offering the NFS export should fail, a standby node takes over the shared
disk resource, locally mounts the file system, and exports the file system or directory for
remote mount.
If the client was not accessing the disk resource during the period of the fallover, then it
is not aware of the change in which node is serving the NFS export.
Note that the aservice service IP label is in the resource group, which is exporting
/fsa. The HACMP NFS server support requires that resource groups that export NFS
filesystems be configured to use IPAT because the client system is not capable of
dealing with two different IP addresses for its NFS server, depending on which node the
NFS server service happens to be running on.


Configuring NFS for high availability

        Change/Show All Resources and Attributes for a Resource Group

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

  [MORE...10]                                        [Entry Fields]
    Volume Groups                                     [aaavg]       +
    Use forced varyon of volume groups, if necessary  false         +
    Automatically Import Volume Groups                false         +
    Filesystems (empty is ALL for VGs specified)      []            +
    Filesystems Consistency Check                     fsck          +
    Filesystems Recovery Method                       sequential    +
    Filesystems mounted before IP configured          true          +
    Filesystems/Directories to Export (NFSv2/3)       [/fsa]        +
    Filesystems/Directories to Export (NFSv4)         []            +
    Stable Storage Path (NFSv4)                       []            +
    Filesystems/Directories to NFS Mount              []
  [MORE...13]

  F1=Help     F2=Refresh    F3=Cancel    F4=List
  F5=Reset    F6=Command    F7=Edit      F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 9-6. Configuring NFS for high availability

Notes:
Configuring NFS for high availability
The visual shows the resource group attributes that are important for configuring an
NFS file system.
- Filesystems/Directories to Export
Specifies the filesystems to be NFS exported.
- Filesystems mounted before IP configured
When implementing NFS support in HACMP, you should also set this option. This
prevents access from a client before the filesystems are ready.
- Filesystem (empty is ALL for VGs specified)
This particular example also explicitly lists the /fsa filesystem as a resource to be
included in the resource group (see the Filesystem (empty is ALL for VGs specified)
field). This is not necessary because this field could have been left blank to indicate


that all the filesystems in the aaavg volume group should be treated as resources
within the resource group.

Only non-concurrent access resource groups


The resource group policy cannot be concurrent (On Line On All Available Nodes).


Cross-mounting NFS filesystems (1 of 3)

A filesystem configured in a resource group can be made available to all the
nodes in the resource group:
- One node has the resource group and acts as an NFS server:
  - Mounts the filesystem (/fsa)
  - Exports the filesystem (/fsa)
- All nodes act as NFS clients:
  - Mount the NFS filesystem (aservice:/fsa) onto a local mount point (/a)

(Diagram: the node holding aservice exports /fsa and, like every other node in
the resource group, runs # mount aservice:/fsa /a.)
Figure 9-7. Cross-mounting NFS filesystems (1 of 3)

Notes:
Cross-mounting
We can use HACMP to mount an NFS exported filesystem locally on all the nodes
within the cluster. This allows two or more nodes to have access to the same disk
resource in parallel. An example of such a configuration might be a shared repository
for the product manuals (read only) or a shared /home filesystem (read-write). One
node mounts the filesystem locally, then exports the filesystem. All nodes within the
resource group then NFS mount the filesystem.
By having all nodes in the resource group act as an NFS client, including the node that
holds the resource group, it is not necessary for the takeover node to unmount the
filesystem before becoming the NFS server.


Concurrent access limitations


Although the NFS file system can be mounted read-write by multiple nodes, all of the
NFS caching issues that exist with a regular NFS configuration (one not involving
HACMP in any way) still exist. Parallel or concurrent writes are not supported. For
example, applications running on the two cluster nodes should not attempt to update
the same NFS served file because only one of them is likely to succeed with the other
getting either stale NFS file handle problems or mysterious loss of changes made to the
file. This is a fundamental issue with NFS.

True concurrent access


Clusters wanting to have true concurrent access to the same filesystem for reading and
writing purposes should use the IBM GPFS (General Parallel File System) product
instead of NFS to share the filesystem across the cluster nodes.


Cross-mounting NFS filesystems (2 of 3)

- When a fallover occurs, the role of NFS server moves with the resource group
- All (surviving) nodes continue to be NFS clients

(Diagram: the takeover node now holds aservice, exports /fsa, and keeps its NFS
mount of aservice:/fsa on /a; the NFS client side simply retries until the JFS
filesystem returns on the other node.)
Figure 9-8. Cross-mounting NFS filesystems (2 of 3)

Notes:
Fallover with a cross-mounted file system
If the left-hand node fails, then HACMP on the right-hand node initiates a fallover of
the resource group. This primarily consists of:
- Assigning or aliasing (depending on which flavor of IPAT is being used) the aservice
  service IP label to a NIC
- Varying on the shared volume group and mounting the /fsa journaled filesystem
- NFS exporting the /fsa filesystem
Note that the right-hand node already has the aservice:/fsa filesystem NFS mounted
on /a.


Cross-mounting NFS filesystems (3 of 3)

Here is a more detailed look at what is happening.

The A resource group specifies:
- aservice as a service IP label resource
- /fsa as a filesystem resource
- /fsa as an NFS filesystem to export
- /fsa as an NFS filesystem to mount on /a

(Diagram: node usa holds the resource group: it runs # mount /fsa, exports /fsa,
and runs # mount aservice:/fsa /a. Node uk runs # mount aservice:/fsa /a as a
client only. The external client system also runs # mount aservice:/fsa /a and
sees /fsa as /a.)

Figure 9-9. Cross-mounting NFS filesystems (3 of 3)

Notes:
Cross-mounting details
The key change, compared to the configuration that did not use cross-mounting, is that
this configuration's resource group lists /fsa as an NFS filesystem and specifies that
it is to be mounted on /a. This causes every node in the resource group to act as an
NFS client with aservice:/fsa mounted at /a. Only the node that actually has the
resource group is acting as an NFS server for the /fsa filesystem.


Choosing the network for cross-mounts

In a cluster with multiple IP networks, it might be useful to specify which
network should be used by HACMP for cross-mounts. This is usually done as a
performance enhancement.

The A resource group specifies:
- aservice as a service IP label resource
- /fsa as a filesystem resource
- /fsa as an NFS filesystem to export
- /fsa as an NFS filesystem to mount on /a
- net_ether_01 is the network for NFS mounts

(Diagram: two IP networks, net_ether_01 and net_ether_02, connect nodes usa and
uk; aservice is the service label on net_ether_01, and a second service label,
aGservice, also appears. Node usa holds the resource group, runs # mount /fsa,
and exports /fsa; both nodes run # mount aservice:/fsa /a over net_ether_01.)

Figure 9-10. Choosing the network for cross-mounts


Notes:
Network for NFS mount
HACMP allows you to specify which network should be used for the NFS cross-mounts from
this resource group.
In this scenario, we have an NFS cross-mount within a cluster that has two IP networks.
For some reason (probably because the net_ether_01 network is either a faster networking
technology or under a lighter load), the cluster administrator has decided to force the
cross-mount traffic to flow over the net_ether_01 network.
This field is relevant only if you have filled in the Filesystems/Directories to NFS
Mount field. The Service IP Labels/IP Addresses field should contain a service label
which is on the network you select.
If the network you have specified is unavailable when the node is attempting to NFS
mount, it will seek other defined, available IP networks in the cluster on which to
establish the NFS mount.

Configuring HACMP for cross-mounting

        Change/Show All Resources and Attributes for a Resource Group

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

  [MORE...11]                                        [Entry Fields]
    Volume Groups                                     [aaavg]         +
    Use forced varyon of volume groups, if necessary  false           +
    Automatically Import Volume Groups                false           +
    Filesystems (empty is ALL for VGs specified)      []              +
    Filesystems Consistency Check                     fsck            +
    Filesystems Recovery Method                       sequential      +
    Filesystems mounted before IP configured          true            +
    Filesystems/Directories to Export (NFSv2/3)       [/fsa]          +
    Filesystems/Directories to Export (NFSv4)         []              +
    Stable Storage Path (NFSv4)                       []              +
    Filesystems/Directories to NFS Mount              [/a;/fsa]       +
    Network For NFS Mount                             [net_ether_01]  +
  [MORE...12]

  F1=Help     F2=Refresh    F3=Cancel    F4=List
  F5=Reset    F6=Command    F7=Edit      F8=Image
  F9=Shell    F10=Exit      Enter=Do

Figure 9-11. Configuring HACMP for cross-mounting

Notes:
Configuring HACMP for cross-mounting
The directory or directories to be cross-mounted are specified in the
Filesystems/Directories to NFS Mount field. The network to be used for NFS cross-mounts
is optionally specified in the Network For NFS Mount field.

Cross-mount syntax
Note the rather strange /a;/fsa syntax for specifying the directory to be cross-mounted.
This unusual syntax is explained on the next foil.
Note that the resource group must include a service IP label which is on the
net_ether_01 network (aservice in the previous foil).


Syntax for specifying cross-mounts

        /a;/fsa
         |   |
         |   +-- what the filesystem is exported as
         +------ where the filesystem should be mounted over

What HACMP does (on each node in the resource group):
# mount aservice:/fsa /a
Figure 9-12. Syntax for specifying cross-mounts

Notes:
Syntax for specifying cross-mounts
The inclusion of a semi-colon in the Filesystems/Directories to NFS Mount field
indicates that the newer (and easier to work with) approach to NFS cross-mounting
described in this unit is in effect. The local mount point to be used by all the nodes in the
resource group when they act as NFS clients is specified before the semi-colon. The
NFS filesystem which they are to NFS mount is specified after the semi-colon.
Because the configuration specified in the last HACMP smit screen uses net_ether_01
for cross-mounts and the service IP label on the net_ether_01 network is aservice
(see the diagram a couple of foils back showing the two IP networks), each node in the
resource group will mount aservice:/fsa on their local /a mount point directory.
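To confirm the result on any node in the resource group, you could simply inspect the mount table. The output below is an abbreviated sketch; the exact columns and options will vary with your configuration:

# mount | grep ' /a '
  aservice   /fsa   /a   nfs3   Sep 05 10:12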


Ensuring the VG major number is unique

Any volume group that contains a filesystem that is offered for NFS export to
clients or other cluster nodes must use the same VG major number on every node
in the cluster.

To display the current VG major numbers, use:
# ls -l /dev/*webvg
crw-rw----   1 root  system  201, 0 Sep 04 23:23 /dev/xwebvg
crw-rw----   1 root  system  203, 0 Sep 05 18:27 /dev/ywebvg
crw-rw----   1 root  system  205, 0 Sep 05 23:31 /dev/zwebvg

The command lvlstmajor will list the available major numbers for each node in
the cluster. For example:
# lvlstmajor
43...200,202,206...

The VG major number may be set at the time of creating the VG using SMIT mkvg,
or by using the -V flag on the importvg command, for example:
# importvg -V100 -y shared_vg_a hdisk2

C-SPOC will "suggest" a VG major number which is unique across the nodes when it
is used to create a shared volume group.

Figure 9-13. Ensuring the VG major number is unique

Notes:
VG major numbers
Volume group major numbers must be the same for any given volume group across all nodes
in the cluster. This is a requirement for any volume group that has filesystems which
are NFS exported to clients (either inside or outside the cluster).
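A quick hedged check across a two-node cluster might be the loop below; the node and volume group names follow the course examples, and rsh is just one possible remote execution tool:

#!/bin/ksh
# Compare the major number of shared_vg_a on every node; the first of the
# two device numbers shown by ls -l is the major number and must match.
for node in uk usa; do
    print -n "$node: "
    rsh $node ls -l /dev/shared_vg_a
done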


NFS with HACMP considerations

Some points to note:
- Resource groups which export NFS filesystems must implement IPAT.
- The "filesystems mounted before IP configured" resource group attribute must be
  set to true.
- HACMP does not use /etc/exports, and the default is to export filesystems rw to
  the world. Specify NFS export options in /usr/es/sbin/cluster/etc/exports if you
  want better control.
- HACMP only preserves NFS locks if the NFS exporting resource group has no more
  than two nodes.

Figure 9-14. NFS with HACMP considerations

Notes:
HACMP exports file
As mentioned in the visual, if you need to specify NFS options, you must use the HACMP
exports file, not the standard AIX exports file. You can use AIX smit mknfsexp to build
the HACMP exports file:

        Add a Directory to Exports List

  * Pathname of directory to export                 []
    Anonymous UID                                   [-2]
    Public filesystem?                              no
  * Export directory now, system restart or both    both
    Pathname of alternate exports file              [/usr/es/sbin/cluster/etc/exports]
    .....
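For example, a hedged entry in the HACMP exports file restricting the export from the running example might look like this (the client and node names are placeholders):

# /usr/es/sbin/cluster/etc/exports
/fsa -ro,root=uk:usa,access=clienta:clientb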


Checkpoint

1. True or False?
   HACMP supports all NFS export configuration options.

2. Which of the following is a special consideration when using HACMP to NFS
   export filesystems? (select all that apply)
   a. NFS exports must be read-write.
   b. Secure RPC must be used at all times.
   c. A cluster may not use NFS cross-mounts if there are client systems
      accessing the NFS exported filesystems.
   d. A volume group that contains filesystems that are NFS exported must have
      the same major device number on all cluster nodes in the resource group.

3. What does [/abc;/xyz] mean when specifying a directory to cross-mount?
   a. /abc is the name of the filesystem that is exported and /xyz is where it
      should be mounted
   b. /abc is where the filesystem should be mounted, and /xyz is the name of
      the filesystem that is exported

4. True or False?
   HACMP's NFS exporting feature supports only clusters of two nodes.

5. True or False?
   IPAT is required in resource groups that export NFS filesystems.
Figure 9-15. Checkpoint

Notes:


Unit summary

Key points from this unit:
- HACMP provides a means to make the Network File System (NFS) highly available:
  - Configure Filesystems/Directories to Export and Filesystems mounted before
    IP configured in the resource group
  - The VG major number must be the same on all nodes
  - Clients NFS mount using the service address
  - In case of node failure, the takeover node acquires the service address,
    acquires the disk resource, mounts the file system, and NFS exports the
    file system
  - Clients see "NFS server not responding" during the fallover
- NFS file systems can be cross-mounted across all nodes:
  - Faster takeover: the takeover node does not have to unmount the file system
  - A preferred network can be selected
  - Really only for read-only file systems: NFS cross-mounted file systems can
    be mounted read-write, but concurrent write attempts will produce
    inconsistent results
  - Use GPFS for true concurrent access
- Non-default export options can be specified in /usr/es/sbin/cluster/etc/exports
Figure 9-16. Unit summary

Notes:


Unit 10. Problem determination and recovery


What this unit is about
This unit describes the problem determination and recovery tools and
techniques for diagnosing problems that might occur in your cluster.

What you should be able to do


After completing this unit, you should be able to:

List reasons why HACMP can fail


Identify configuration and administration errors
Explain why the Dead Man's Switch invokes
Explain when the System Resource Controller kills a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

How you will check your progress


Accountability:
Checkpoint
Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)


Unit objectives
After completing this unit, you should be able to:
List reasons why HACMP can fail
Identify configuration and administration errors
List the problem determination tools available in smit
Explain why the Dead Man's Switch invokes
Explain when the System Resource Controller kills a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

Figure 10-1. Unit objectives

Notes:
In this unit, we examine some of the reasons why HACMP might fail and how to perform
basic problem determination to recover from failure.


Why do good clusters turn bad?

Common reasons why HACMP fails:
- A poor cluster design and lack of thorough planning
- Basic TCP/IP and LVM configuration problems
- HACMP cluster topology and resource configuration problems
- Absence of change management discipline in a running cluster
- Lack of training for staff administering the cluster
- Performance or capacity problems

(Diagram: a two-node cluster, uk and usa, with resource group A and a failure
marked on one node.)

Figure 10-2. Why do good clusters turn bad?

Notes:
Root causes
Often the root cause of problems with HACMP is the absence of design and planning at
the outset, or poor design and planning. As you will have figured out by now, a couple
of hours spent in planning HACMP reaps rewards later on in terms of how easy it is to
configure, administer, and diagnose problems with the cluster.
HACMP verifies all topology and resource configuration parameters and most IP
configuration parameters before synchronization takes place. This means that, provided
the cluster synchronizes and starts successfully, the cluster should remain stable.
The prime reason for cluster failure when the environment is in production is
administrative mistakes and an absence of change control.
Typically, HACMP clusters are very stable. During the writing of this course, a customer
complained to IBM that his HACMP cluster had failed on him because a node had failed
and his workload did not get taken over by the standby node. Upon investigation, it was
proven that in fact an earlier (undetected) failure had resulted in the standby node
taking over the workload, and a subsequent component failure resulted in a second
point of failure. How many points of failure does HACMP handle?


Test your cluster before going live!

Careful testing of your production cluster before going live reduces the risk of
problems later. An example test plan might include:

  Test Item                          How to test    Checked
  ---------------------------------  -------------  -------
  Node Fallover
  Network Adapter Swap
  IP Network Failure
  Storage Adapter Failure
  Disk Failure
  clstrmgr daemon Killed
  Serial Network Failure
  Disk Adapter for rootvg Failure
  Application Failure
  Node re-integration
  Partitioned Cluster
Figure 10-3. Test your cluster before going live!

Notes:
Importance of testing
Every cluster should be thoroughly tested before going live. It is important that you
develop and document a cluster test plan for your environment. Start by taking your
cluster diagram and highlighting all the things that could go wrong, then write down
what you expect the cluster to do in response to each failure. Periodically test your
cluster to ensure that fallover works correctly, and correct your test plan if your
assumptions about what will happen differ from what HACMP actually does (for example,
shutdown -F does not cause fallover). HACMP 5.2 and later provides a test tool, which
is discussed later in this unit.
Although it is recommended that testing of the cluster services be performed using
Move Resource Groups, it is especially important to conduct this testing if HACMP is
to be used to reduce planned downtime (for upgrades and maintenance), because that is
the cluster function that will be used. This method of testing, however, should not
replace the testing of a node failure due to a crash (for example, halt -q, or just
stopping the LPAR at the HMC).
All efforts should be made to verify application functions (user-level testing) as the
cluster function tests are being performed. Verifying that the cluster functions
correctly without verifying that the application functions correctly as part of the
cluster function test is not recommended. Getting the end-user commitment is sometimes
the hardest part of this process.

Use of emulation
You can emulate some common cluster status change events. Remember that whenever you
make a change to cluster configuration, test the change before putting the cluster back
into production if at all possible.
You should always emulate a DARE change before actually doing it. If a DARE change does
not succeed during emulation, then it will definitely not succeed when you actually
do it.


Tools to help you diagnose a problem

Most problems relate to IP, LVM, and cluster configuration errors.

Tools:
- Automatic Cluster Configuration Monitoring
- Automatic Error Correction during verify
- HACMP Cluster Test Tool
- Emulation Tools
- HACMP Troubleshooting manual

Log files: hacmp.out, cluster.log, clverify.log, clstrmgr.debug

Simple AIX and HACMP commands:
  df -k       mount      lsfs        netstat -i       no -a
  lsdev       lsvg       lsvg -o     lslv             lspv
  ifconfig    clRGinfo   cltopinfo   clcheck_server   clstat

Figure 10-4. Tools to help you diagnose a problem

Notes:
Some key tools
Some of the key tools to aid you in diagnosing a problem in the cluster are detailed
above. Most problems are simple configuration issues, and hence the commands used
to diagnose them are also straightforward. Also, especially useful are the
/<log_dir>/hacmp.out and /var/hacmp/adm/cluster.log files, which document all of
the output that the HACMP event scripts generate.

Remember the documentation


Useful help on errors generated by HACMP and diagnosing problems with the cluster
can be found in the HACMP for AIX Administration Guide and the HACMP for AIX
Troubleshooting Guide.
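As a sketch, a small script that gathers this first-pass data in one go can save time when a cluster misbehaves; the output file name and the choice of commands are just one reasonable selection:

#!/bin/ksh
# Collect first-pass cluster diagnostics into a single file.
# Assumes the HACMP utilities (/usr/es/sbin/cluster/utilities) are in the PATH.
OUT=/tmp/cluster_triage.$(date +%Y%m%d%H%M)
{
    print "=== Resource groups ==="; clRGinfo
    print "=== Topology ===";        cltopinfo
    print "=== Active VGs ===";      lsvg -o
    print "=== Filesystems ===";     df -k
    print "=== Interfaces ===";      netstat -i
    print "=== Recent events ===";   tail -200 /var/hacmp/adm/cluster.log
} > $OUT 2>&1
print "Diagnostics written to $OUT"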


Tools available from smit menu

                      Problem Determination Tools

  Move cursor to desired item and press Enter.

    HACMP Verification
    View Current State
    HACMP Log Viewing and Management
    Recover From HACMP Script Failure
    Restore HACMP Configuration Database from Active Configuration
    Release Locks Set By Dynamic Reconfiguration
    Clear SSA Disk Fence Registers
    HACMP Cluster Test Tool
    HACMP Trace Facility
    HACMP Event Emulation
    HACMP Error Notification
    Manage RSCT Services
    Open a SMIT Session on a Node

  F1=Help        F2=Refresh     F3=Cancel    Esc+8=Image
  Esc+9=Shell    Esc+0=Exit     Enter=Do

Figure 10-5. Tools available from smit menu

Notes:
Tools available from the problem determination tools smit menu
We will be looking at some of these tools on the following pages. Not covered are:
- View Current State. This tool executes the /usr/es/sbin/cluster/utilities/cldump
command, which gives the state of the cluster as long as at least one node has
cluster manager services running.
- HACMP Log Viewing and Management. This tool allows you to watch as well as
scan the HACMP log files as well as set options on the /<log_dir>/hacmp.out file
to see event summaries or to see the file in searchable HTML format. watch is
basically a tail -f operation, while scan is to view the entire file.
- Restore HACMP Configuration Database from Active Configuration.
- Release Locks Set By Dynamic Reconfiguration. This was covered in Unit 7.
- Clear SSA Disk Fence Registers.
- HACMP Trace Facility.
- HACMP Error Notification. This was covered in Unit 8.

Automatic cluster configuration monitoring

                        HACMP Verification

  Move cursor to desired item and press Enter.

    Verify HACMP Configuration
    Configure Custom Verification Method
    Automatic Cluster Configuration Monitoring

            Automatic Cluster Configuration Monitoring

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                    [Entry Fields]
  * Automatic cluster configuration verification    Enabled    +
    Node name                                       Default    +
  * HOUR (00 - 23)                                  [00]       +#

Figure 10-6. Automatic cluster configuration monitoring

Notes:
How it works
The clverify utility runs on one user-selectable HACMP cluster node once every 24 hours.
By default, the first node in alphabetical order runs the verification at midnight.
When automatic cluster configuration monitoring detects errors in the cluster
configuration, clverify triggers a general_notification event. The output of this event
is logged in hacmp.out throughout the cluster, on each node that is running cluster
services.
clverify maintains the log file /var/hacmp/log/clverify/clverify.log.


Automatic correction

              HACMP Verification and Synchronization

  Type or select values in entry fields.
  Press Enter AFTER making all desired changes.

                                                    [Entry Fields]
  * Verify, Synchronize or Both                     [Both]      +
  * Automatically correct errors found during       [No]        +
    verification?
  * Force synchronization if verification fails?    [No]        +
  * Verify changes only?                            [No]        +
  * Logging                                         [Standard]  +

  F1=Help        F2=Refresh       F3=Cancel     F4=List
  Esc+5=Reset    Esc+6=Command    Esc+7=Edit

Also automatic synchronization during cluster start (HACMP 5.3+)


Figure 10-7. Automatic correction

Notes:
Autocorrection of some verification errors during verify
You can run automatic corrective actions during cluster verification on an inactive
cluster. Automatic correction of clverify errors is not enabled by default. You can choose
to run this useful utility in one of two modes. If you select Interactively, when clverify
detects a correctable condition related to importing a volume group or to exporting and
re-importing mount points and filesystems, you are prompted to authorize a corrective
action before clverify continues error checking. If you select Yes, when clverify detects
that any of the conditions listed as follows exists, it takes the corrective action
automatically without a prompt.
The following errors are detected and fixed:
- Required /etc/services entries are missing on a node.
- HACMP shared volume group time stamps are not up to date on a node.
- The /etc/hosts file on a node does not contain all HACMP-managed IP addresses.
- SSA concurrent volume groups need unique SSA node numbers.

- A filesystem is not created on a node, although disks are available.


- Disks are available, but the volume group has not been imported to a node.
- Required HACMP snmpd entries are missing on a node.
Note that the autocorrection selection will not appear if cluster services are running.
Instead, the top line of the menu will look like:
HACMP Verification and Synchronization (Active Cluster Nodes Exist)

Additional autocorrection in HACMP 5.3


The enhancements made to autocorrection in HACMP 5.3 are:
RSCT instance number synchronized properly across all nodes.
Ensure boot-time IP-Addresses are configured on the network interfaces that
RSCT expects.
Ensure active shared volume groups are not set to auto-varyon.
Ensure filesystems are not set to auto-mount.

Additional verification in HACMP 5.3

In HACMP 5.3, the following checks are added to verification:
- Incompatibilities between network and network adapter types.
- Shared volume groups defined as auto-varyon.
- Certain network options (no command settings) are different on cluster nodes or will be modified by RSCT during cluster startup.
- MTU sizes are different on cluster nodes.
- RSCT software levels are different for the same AIX levels.
- HACMP WAN support configured and WAN software is missing.
- Certain volume group settings are different.
- Disks are not accessible before the cluster startup.
- There are resource groups with site policies defined, but no XD software is installed.
- There are resource groups with site policies defined, but no sites configured.
- Issue an error instead of a warning when a volume group that is set up for cross-site mirroring does not have copies of the logical volumes at both sites.
- A resource group contains a volume group set up for cross-site mirroring, and forced varyon is not set.


Automatic verification during cluster start


There is an additional automatic verification and correction done during cluster start:
If a user attempts to start cluster services on a node on which the HACMP topology has
not yet been synchronized, a synchronization will be done.
The assumption is there is a valid cluster configuration on the local node the user is
attempting to start, but the user has not synchronized, which leaves the HACMP cluster
node handle field blank. If cluster services are not running on any node in the cluster
(known to the local node), then the local cluster configuration will be synchronized to all
nodes attempting to start cluster services after successfully verifying the local DCD
configuration. If, however, cluster services are running on a node in the cluster, then the
local DCD will be compared against an ACD of a running cluster node where the local
node participates in the ACD's configuration. If the DCD and ACD match, then
verification is run. If the DCD and ACD do not match, then a snapshot is made of the
DCD, and the active node's ACD will be copied to the DCD on the local node and
verification will be run prior to starting cluster services.
This feature can be disabled so that verification and synchronization do not occur
during cluster startup. The smit path to disable it is:
smitty hacmp -> Extended Configuration -> Extended Cluster Service Settings


HACMP cluster test tool

HACMP Cluster Test Tool

Move cursor to desired item and press Enter.

  Execute Automated Test Procedure
  Execute Custom Test Procedure

F1=Help        F2=Refresh     F3=Cancel      Esc+8=Image
Esc+9=Shell    Esc+0=Exit     Enter=Do

Warning: These tests are disruptive.


Figure 10-8. HACMP cluster test tool

Notes:
Test tool description
The Cluster Test Tool utility lets you test an HACMP cluster configuration to evaluate
how a cluster operates under a set of specified circumstances, such as when cluster
services on a node fail or when a node loses connectivity to a cluster network. You can
start a test, let it run unattended, and return later to evaluate the results of your testing.
You should run the tool under both low load and high load conditions to observe how
system load affects your HACMP cluster.
The Cluster Test Tool discovers information about the cluster configuration, and
randomly selects cluster components, such as nodes and networks, to be used in the
testing.


How to run the test tool

You run the Cluster Test Tool from SMIT on one node in an HACMP cluster. For testing
purposes, this node is referred to as the control node. From the control node, the tool
runs a series of specified tests (some on other cluster nodes), gathers information
about the success or failure of the tests processed, and stores this information in the
Cluster Test Tool log file for evaluation or future reference.
These tests are disruptive. They should not be run on a production cluster. The tests include:
- General topology tests
- Resource group tests on non-concurrent resource groups
- Resource group tests on concurrent resource groups
- Catastrophic failure test


Checking cluster subsystems (1 of 2)

# /usr/es/sbin/cluster/utilities/clcheck_server clstrmgrES ; echo $?
# lssrc -ls clstrmgrES | grep state
# lssrc -g cluster     (Daemon only - NOT Services Check)
Subsystem         Group            PID          Status
 clstrmgrES       cluster          21032        active
 clinfoES         cluster          21676        active

Cluster components:
  clstrmgrES - Mandatory
  clinfoES   - Optional
Figure 10-9. Checking cluster processes (1 of 2)

Notes:
clstart subsystems
Listed here are the processes that are listed in the startup smit menu for HACMP. It's
interesting to note that these cluster processes are not displayed by the lssrc command
when they are inactive. This was a display option (or, probably better, a non-display
option) that HACMP chose to use when the subsystems were defined during the install
process. This option can be changed (one subsystem at a time) using the chssys -s
subsystem_name -a -D command.

Checking for cluster services up


Starting in HACMP 5.3, you must make a distinction between the clstrmgrES
subsystem and cluster services. The clstrmgrES subsystem is always running, even if
cluster services are not. To check whether cluster services are running, the supported
command is /usr/es/sbin/cluster/utilities/clcheck_server grpsvcs. This command
returns 0 (for down) or 1 (for up), so you will need to check the return code. An
alternative command that works in HACMP 5.3 but is not guaranteed for the future is
easier. It is lssrc -ls clstrmgrES | grep state. Look for ST_STABLE for a prolonged
period of time as an indication that cluster services has started successfully. Another
command that will give you state information in HACMP 5.3 is the command
/usr/es/sbin/cluster/utilities/cldump. Finally, you can use the smit path: Problem
Determination Tools -> View Current State.
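A minimal ksh sketch combining the supported check with the unsupported shortcut (the path and the 0/1 return convention are those described above; treat this as illustrative, not a supported tool):

# Supported check: clcheck_server returns 1 when cluster services are up, 0 when down
/usr/es/sbin/cluster/utilities/clcheck_server grpsvcs
if [ $? -eq 1 ] ; then
    echo "Cluster services are running"
else
    echo "Cluster services are not running"
fi

# Unsupported shortcut for HACMP 5.3: look for ST_STABLE
lssrc -ls clstrmgrES | grep state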


Checking cluster subsystems (2 of 2)

Check rsct, clcomd, ctrmc subsystems

# lssrc -a | grep svc
 topsvcs          topsvcs          258248       active
 grpsvcs          grpsvcs          434360       active
 emsvcs           emsvcs           335994       active
 emaixos          emsvcs           307322       active

# lssrc -s clcomdES
Subsystem         Group            PID          Status
 clcomdES         clcomdES         13420        active

# lssrc -s ctrmc
Subsystem         Group            PID          Status
 ctrmc            rsct             2954         active

Figure 10-10. Checking cluster processes (2 of 2)

Notes:
Supporting subsystems
Listed here are the additional processes we would expect to find running on an HACMP
cluster node.
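A small loop (a sketch; the subsystem names are the ones shown in the two visuals above) confirms at a glance that everything expected is active:

for s in clstrmgrES topsvcs grpsvcs emsvcs clcomdES ctrmc ; do
    lssrc -s $s | grep -q active || echo "WARNING: $s is not active"
done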


Testing your network connections

To test your IP network:
  ping          (interfaces)
  netstat -rn   (routing)
  host          (name resolution)
  netstat -i and ifconfig (addresses, subnet mask)

To test your non-IP networks:

  Heartbeat over disk:
    /usr/sbin/rsct/bin/dhb_read -p hdiskx -r   (receive is done first)
    /usr/sbin/rsct/bin/dhb_read -p hdiskx -t

  RS232:
    stty < /dev/tty#   (on 2 connected nodes)

  Target mode SSA network:
    cat < /dev/tmssa#.tm, echo test > /dev/tmssa#.im

Do not perform these tests while HACMP is running

Figure 10-11. Testing your network connections

Notes:
Testing your IP network
- Ping between all pairs of interfaces on the same subnet.
- Check the entries in the routing table on each node (netstat -rn).
- Check that names are resolvable (host). For example, host node1boot1.
- Check addresses and subnet masks (netstat -i, ifconfig).

Testing your non-IP networks
- For heartbeat over disk:
  On one node, execute the command /usr/sbin/rsct/bin/dhb_read -p hdiskx -r.
  This causes the message "waiting for response" to display.
  On the other connected node, execute the command /usr/sbin/rsct/bin/dhb_read -p hdiskx -t.
  This causes the message "Link operating normally" to display on both nodes.

- For RS232:
  On one node, execute the command stty < /dev/tty#. This will hang at the command line.
  On the other connected node, execute the command stty < /dev/tty#.
  This causes the tty settings to be displayed on both nodes.
- For target mode SSA:
  On one node, execute the command cat < /dev/tmssa#.tm, where # is the node ID of the target SSA router.
  On the other connected node, execute the command echo test > /dev/tmssa#.im, where # is the node ID of the source SSA router.
  This causes the word test to display on the first node.
These tests can be used to validate that network communications are functioning
between cluster nodes over the defined cluster networks.
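For example (a sketch; hdisk3 is a hypothetical shared disk, and the receive side must be started first):

node1 # /usr/sbin/rsct/bin/dhb_read -p hdisk3 -r
node2 # /usr/sbin/rsct/bin/dhb_read -p hdisk3 -t

Both nodes should then report Link operating normally.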


Dead man's switch timeout


888 LED code -> possible DMS timeout.
Why?
Clstrmgr starved of CPU
Excessive I/O traffic
Excessive TCP/IP traffic over an interface

Was it DMS?

Copy the system dump to a file


kdb on the dump file
stat subcommand
Look for 'HACMP dms timeout halting...'

Figure 10-12. Dead man's switch timeout

Notes:
Dead man's switch
The dead man's switch (DMS) is the AIX kernel extension that halts a node when it
enters a hung state that extends beyond a certain time limit. This enables another node
in the cluster to acquire the hung node's resources in an orderly fashion, avoiding
possible contention problems. If the dead man's switch is not reset in time, it can cause a
system panic and dump under certain cluster conditions.
The dead man's switch should not invoke if your cluster is not overloaded with I/O
traffic. There are steps that can be taken to mitigate the chances of the DMS invoking,
but often a DMS timeout is the result of the machine being fundamentally overloaded.
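To confirm after the fact that a crash really was a DMS timeout, the check described in the visual can be done with kdb (a hedged sketch; the dump file path is an example for your system):

# kdb /var/adm/ras/vmcore.0 /unix
(0)> stat

Look in the stat output for the panic string 'HACMP dms timeout halting...'.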


Avoiding dead man's switch timeouts


Steps to avoid DMS timeout problems:
1. Isolate the cause of excessive I/O or TCP/IP traffic and fix it,
and if that does not work...
2. Reduce the failure detection rate for the slowest network
3. Increase the frequency of the syncd, and if that does not
work...
4. Tune I/O pacing, and if that does not work...
5. Buy a bigger machine

Figure 10-13. Avoiding dead man's switch timeouts

Notes:
Causes of DMS timeouts
Most dead man's switch problems are the result of either an extremely overloaded
cluster node or a sequence of truly bizarre cluster configuration misadventures (for
example, DMS timeouts have been known to occur when the disk subsystem is
sufficiently screwed up that AIX encounters difficulties accessing any disks at all).
Large amounts of TCP traffic over an HACMP-controlled service interface might cause
AIX to experience problems when queuing and later releasing this traffic. When traffic is
released, it generates a large CPU load on the system and prevents timing-critical
threads from running, thus causing the Cluster Manager to issue a DMS timeout.
HACMP, via Topology Services, produces an AIX error if the time gets close. The error
label is TS_DMS_WARNING_ST, and you can set an error notify method to notify you
when this occurs.
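One way to act on that warning is an AIX error notification object (a hedged sketch; the object name and mail action are hypothetical examples, but errnotify and odmadd are standard AIX facilities):

# cat > /tmp/dms_warning.add <<'EOF'
errnotify:
        en_name = "dms_warning"
        en_persistenceflg = 1
        en_label = "TS_DMS_WARNING_ST"
        en_method = "echo DMS warning logged | mail -s 'DMS warning' root"
EOF
# odmadd /tmp/dms_warning.add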


The command /usr/sbin/rsct/bin/hatsdmsinfo can be used to see how often the DMS
timer is being reset.
Although we don't recommend changing the DMS timeout value, we are sometimes
asked how to increase the timeout period on the dead man's switch to make it
less likely that the DMS will pop and crash the node. There is no strict timeout setting;
it is monitored by RSCT and is calculated as twice the value of the longest failure
detection rate of all configured HA networks in the cluster. If, for example, you have two
networks, an Ethernet and a disk heartbeat network, the Ethernet has the longer failure
detection rate, 10 seconds versus 8 for the diskhb network; so the DMS timeout is set
to 2 * 10, or 20 seconds. If the failure detection rate is being modified to extend the DMS
timeout, it is best to ensure that all networks have the same failure detection period. To
set the DMS timeout to roughly 30 seconds while making the failure detection the same
for both networks, the custom NIM settings would be:

  Ethernet:  Failure Cycle = 16   Interval between Heartbeats (seconds)
  diskhb:    Failure Cycle = 8    Interval between Heartbeats (seconds)

This would increase the DMS timeout from 20 seconds to 32. It would also increase the
amount of time necessary to detect a network failure by the same amount. Note that
because the DMS timeout period is directly tied to failure detection rates, increasing
the DMS timeout period will necessarily increase the delay before the secondary node
starts to acquire resources in the event of a node failure, node hang, or the loss of all
network connectivity.


Setting performance tuning parameters

Extended Performance Tuning Parameters Configuration

Move cursor to desired item and press Enter.

  Change/Show I/O pacing
  Change/Show syncd frequency

F1=Help     F2=Refresh     F3=Cancel     F8=Image
F9=Shell    F10=Exit       Enter=Do

Figure 10-14. Setting performance tuning parameters

Notes:
Extended performance tuning parameter configuration
This is the menu for changing the I/O pacing and syncd frequency.


Changing the frequency of syncd

The documentation recommends a value of 10. Start with 15.

Change/Show syncd frequency

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                              [Entry Fields]
  syncd frequency (in seconds)                 [15]

F1=Help     F2=Refresh     F3=Cancel     F4=List
F5=Reset    F6=Command     F7=Edit       F8=Image
F9=Shell    F10=Exit       Enter=Do

Figure 10-15. Changing the frequency of syncd

Notes:
Setting the syncd frequency
The syncd setting determines the frequency with which the I/O disk-write buffers are
flushed.
Frequent flushing of these buffers reduces the chance of dead man switch time-outs.
The AIX default value for syncd as set in /sbin/rc.boot is 60. It is recommended to
change this value to 15.
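You can confirm the boot-time setting directly (a quick check; the default of 60 appears on the syncd line in /sbin/rc.boot, as noted above):

# grep syncd /sbin/rc.boot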


Enabling I/O pacing

The HACMP documentation recommends a high water mark of 33 and a
low water mark of 24, but consider:

  For <= AIX 5.3: leave as 0; set only as a last resort
  For >= AIX 6.1: leave at the defaults (hi=8193, lo=4096)
  See the AIX 6.1 Differences Guide for more details

Change/Show I/O pacing

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                       [Entry Fields]
  HIGH water mark for pending write I/Os per file       [33]    +#
  LOW water mark for pending write I/Os per file        [24]    +#

F1=Help     F2=Refresh     F3=Cancel     F4=List
F5=Reset    F6=Command     F7=Edit       F8=Image
F9=Shell    F10=Exit       Enter=Do

Figure 10-16. Enabling I/O pacing

Notes:
Setting the I/O pacing values
Remember, I/O pacing and other tuning parameters should be set to values other than
the defaults only after a system performance analysis indicates that doing so will lead to
the desired effects with acceptable side effects. This should be the option of last resort.
Consider changing the sensitivity of the network components in HACMP before making
this system-wide change.
Although the most efficient high and low water marks vary from system to system, an
initial high water mark of 33 and a low water mark of 24 provides a good starting point.
These settings only slightly reduce write times and consistently generate correct
fallover behavior from the HACMP software.
See the AIX 5L Performance Monitoring & Tuning Guide for more information on I/O
pacing.
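Outside SMIT, the same system-wide water marks live on the sys0 device; a hedged command-line equivalent (using the starting values above):

# lsattr -El sys0 -a maxpout -a minpout        (show the current high/low water marks)
# chdev -l sys0 -a maxpout=33 -a minpout=24    (set the suggested starting point)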


SRC halts a node


Under what circumstances does the SRC halt a node?
The cluster manager was killed or has crashed

Proving that SRC halted a node:


Check the AIX error log
Look for abnormal termination of clstrmgr daemon

To avoid SRC halts in the first place:


Do not give untrained staff access to the root password
Consider modifying /etc/cluster/hacmp.term

Figure 10-17. SRC halts a node

Notes:
How SRC halt works
The SRC looks for an entry in the /etc/objrepos/SRCnotify ODM file if a subsystem is
killed or crashes. HACMP provides an entry for the clstrmgr. This entry causes clexit.rc
to run, which does a halt -q by default.

Avoiding SRC halts

The most likely cause is an untrained administrator with root privilege. Another possibility is to
modify the /etc/cluster/hacmp.term file. The clexit.rc script will call this script, which
allows you to do something other than halt -q.
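To inspect the notify entry HACMP registered (a sketch; odmget and grep -p are standard AIX commands, and the output format may vary by release):

# odmget SRCnotify | grep -p clstrmgr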


Partitioned clusters and node isolation


When:
Heartbeats are received from a node that was marked as failed
HACMP ODM configuration is not the same on a joining node as
nodes already active in the cluster
Two clusters with the same ID appear in the same logical network
The rogue recovering or joining node is halted

What happens:
Group Services and clstrmgr exit on some node(s)

Proving that Node Isolation caused the problem:


/tmp/clstrmgr.debug file
AIX error log entry GS_DOM_MERGE_ER

Figure 10-18. Partitioned clusters and node isolation

Notes:
Node isolation
When you have a partitioned cluster, the node or nodes on each side of the partition
detect this and run a node_down for the node or nodes on the opposite side of the
partition. If, while running this or after communication is restored, the two sides of the
partition do not agree on which nodes are still members of the cluster, a decision is
made as to which partition should remain up, and the other partition is shut down by a
Group Services (GS) merge from nodes in the other partition or by a node sending a GS
merge to itself.
In clusters consisting of more than two nodes, the decision is based on which partition
has the most nodes left in it, and that partition stays up. With an equal number of nodes
in each partition (as is always the case in a two-node cluster), the node or nodes that
remain up are determined by node number (the lowest node number in the cluster
remains), which is also generally the first in alphabetical order.


Role of group services


Group Services domain merge messages indicate that a node isolation problem was
handled to keep the resources as highly available as possible, giving you time to later
investigate the problem and its cause. When a domain merge occurs, Group Services
and the Cluster Manager exit. The clstrmgr.debug file will contain the following error:
"announcementCb: GRPSVCS announcement code=n; exiting"
"CHECK FOR FAILURE OF RSCT SUBSYSTEMS (topsvcs or grpsvcs)"
There is also an entry in the AIX error log GS_DOM_MERGE_ER.


Avoiding partitioned clusters


Have a non-IP (serial) network
Have a second non-IP network
Check your non-IP networks before going live
Watch for non-IP network failures in HACMP log files
Do not segment your cluster's IP networks
Avoid multiple switches
Except in carefully designed highly available network configurations

Avoid bridges

Figure 10-19. Avoiding partitioned clusters

Notes:
What can go wrong?
A partitioned cluster can result in data divergence (two cluster nodes each gain access
to half of the disk mirrors and proceed to perform updates on their halves). This is a
scenario that can be extremely difficult to recover from completely, because the changes
made by the two nodes might be fundamentally incompatible and impossible to
reconcile.

Avoiding the problem


The best way to avoid a partitioned cluster is to install and configure one or more non-IP
networks.
Test by disabling each non-IP network and making sure the failure is detected by HACMP,
then re-enabling each non-IP network and ensuring that the recovery is also detected.


Automatic failure data capture

With HACMP 5.4.1 and later, IBM Support data is gathered:
  FFDC (First Failure Data Capture) feature automatically captures diagnostic data
  snap data collected after recovery from software or node failure
  Can be disabled via an environment variable

FFDC data saved in /tmp/ibmsupt/hacmp/ffdc.<DateTimeStamp> directory
  Message logged in hacmp.out file
  Max of five incidents retained
  An FFDC message is displayed on screen at next Cluster Services start

Also implemented for Event Failures and CONFIG_TOO_LONG error
  hacmp.out files from all nodes collected and saved in /tmp/ibmsupt/hacmp
Figure 10-20. Automatic failure data capture

Notes:
The FFDC feature uses the clsnap command under the covers (local collection only). The
clsnap utility runs with the report option first to verify that there is enough space.
You can disable these specific FFDC actions by setting the environment variable
FFDC_COLLECTION to disable before starting cluster services.
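For example (the variable name and value are the ones given above):

# export FFDC_COLLECTION=disable
(then start cluster services as usual)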


Check event status message

Config too long message:
  Cluster <clustername> has been running event <eventname>
  for # seconds. Please check event status.

It means that an event script has failed, is hung, or is taking too long.

HACMP stops processing events until you resolve this issue.

Figure 10-21. Check event status message

Notes:
The config_too_long event
For each cluster event that does not complete within the specified event duration time,
config_too_long messages are logged in the hacmp.out file and sent to the console
according to the following pattern:
- The first five config_too_long messages appear in the hacmp.out file at 30-second intervals.
- The next set of five messages appears at an interval that is double the previous interval,
  until the interval reaches one hour.
- These messages are logged every hour until the event is complete or is terminated
  on that node.
This error can occur if an event script fails or does not complete within a customizable
time period, which by default is 360 seconds.


Why does it happen?

There are two major reasons this might happen:
1. The event script fails to complete, in which case the message is sent forever.
2. An event just takes a lot more time, such as varying on a lot of disks
   or processing dependent resource groups, in which case this error
   message eventually stops being generated when the HACMP
   event script that was running finally completes.


Changing the timeouts

Change/Show Time Until Warning

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                       [Entry Fields]
  Max. Event-only Duration (in seconds)                 [180]                 #
  Max. Resource Group Processing Time (in seconds)      [180]                 #
  Total time to process a Resource Group event           6 minutes and 0 secon>
  before a warning is displayed

  NOTE: Changes made to this panel must be
        propagated to the other nodes by
        Verifying and Synchronizing the cluster

F1=Help     F2=Refresh     F3=Cancel     F4=List
F5=Reset    F6=Command     F7=Edit       F8=Image
F9=Shell    F10=Exit       Enter=Do

Figure 10-22. Changing the timeouts

Notes:
smit menu
smit hacmp -> Extended Configuration -> Extended Event Configuration ->
Change/Show Time Until Warning

How to set the values

Note that the timeouts are specified as two values: one for fast events that do not
involve resource group movements, and a second value for slow events:
- Max. Event-only Duration (in seconds): the amount of time that a fast event is
  allowed to take.
- Max. Resource Group Processing Time (in seconds): the additional amount
  of time allowed for slow events. The total time allowed for resource group processing
  is therefore the sum of the Max. Event-only Duration and the Max. Resource Group
  Processing Time. With the defaults shown (180 + 180), a warning is issued after
  360 seconds, the 6 minutes displayed on the panel.

Recovering from an event script failure

1. In the /<log_dir>/hacmp.out file, go to the time of the first "too long" message
   Use /var/hacmp/adm/cluster.log to find the time of the first message
2. Go backwards to find the AIX error messages
3. Manually correct the problem and complete the failed event
4. Perform "Recover From Script Failure"
5. Verify that the config too long message stops
6. Verify that the cluster is now working properly
Figure 10-23. Recovering from an event script failure

Notes:
Why recovery from script failure is necessary
If an event script fails or takes too long, the Please check event status message starts
to display as described on the previous visual. HACMP stops processing cluster events
until the situation is resolved. If the problem is that an event took too long, then the
problem might soon solve itself. If an HACMP event script has actually failed, then
manual intervention is required.

The procedure
The procedure is outlined in the visual above. Using the /var/hacmp/adm/cluster.log
file with the command grep EVENT /var/hacmp/adm/cluster.log | more makes it
easier to find when the config_too_long event first occurred. Be sure to find the earliest
AIX error message, not just the first AIX error message. You must manually complete
what the event would have done before doing recover from script failure, which is


described on the next visual. You can also use the cluster.log in combination with
hacmp.out.
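A hedged sketch of the log hunt (the paths are those given above; substitute your cluster's <log_dir> for hacmp.out):

# grep EVENT /var/hacmp/adm/cluster.log | more            (find the failing event and its time)
# grep -n config_too_long /<log_dir>/hacmp.out | head -1  (locate the first warning in hacmp.out)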


Recovering from an event failure

Problem Determination Tools

Move cursor to desired item and press Enter.

  HACMP Verification
  View Current State
  HACMP Log Viewing and Management
  Recover From HACMP Script Failure
  Restore HACMP Configuration Database from Active Configuration
  Release Locks Set By Dynamic Reconfiguration
  Clear SSA Disk Fence Registers
  HACMP Trace Facility

  +--------------------------------------------------------------------------+
  |                              Select a Node                               |
  |                                                                          |
  |  Move cursor to desired item and press Enter.                            |
  |                                                                          |
  |    usa                                                                   |
  |    uk                                                                    |
  |                                                                          |
  |  F1=Help        F2=Refresh       F3=Cancel                               |
  |  F8=Image       F10=Exit         Enter=Do                                |
  |  /=Find         n=Find Next                                              |
  +--------------------------------------------------------------------------+

Figure 10-24. Recovering from an event failure

Notes:
What this procedure does
This SMIT menu entry can be used to recover from a script failure. This does not mean
that HACMP fixes problems in event scripts, but this menu is used to allow the cluster
manager to continue to the next event following an event script failure that you have
identified and manually corrected. Select the node experiencing the problem and press
Enter.


A troubleshooting methodology
Save the log files from all available nodes as soon as possible
Attempt to duplicate the problem
Approach the problem methodically
Distinguish between what you know and what you assume
Keep an open mind
Isolate the problem
Go from the simple to the complex
Make one change at a time
Stick to a few simple troubleshooting tools
Do not neglect the obvious
Watch for what the cluster is not doing
Keep a record of the tests you have completed

Figure 10-25. A troubleshooting methodology

Notes:
Troubleshooting suggestions
Save the log files from every available cluster node while they are still available
Things might get much worse than they already are. Having access to all relevant
cluster log files and application log files could prove very important. These log files
might be overwritten while you are investigating the problem or they might be lost
entirely if more hardware failures occur. Save copies of them very early in the
troubleshooting exercise to ensure that they are not lost.
Attempt to duplicate the problem
While keeping in mind the importance of not making a bad situation worse by causing
even more problems, it is often useful to try to duplicate the circumstances that are
believed to have been in effect when the problem occurred; this can lead to a greater
understanding of exactly what went wrong.


Approach the problem methodically


Jumping around from idea to idea and just trying whatever comes to mind might be an
entertaining use of your time but it is unlikely to yield a fast solution to the problem at
hand.
Distinguish between what you know and what you assume
It is far too easy to spend quite a while chasing down a path of inquiry that is based on
a faulty assumption. It is frequently necessary to proceed on the basis of an assumption
but be sure that you understand when you are working based on an assumption. When
you have spent twenty minutes to half an hour working on the basis of an assumption
with no apparent progress, it is probably time to start to wonder about the validity of the
assumption. If you spend more than about three quarters of an hour based on an
assumption with still little or no apparent progress, then it is probably time to figure out a
way to determine if the assumption is true or not (devise a test that will indicate if the
assumption is valid and then perform the test).
Keep an open mind
Although related to the issue of knowing if you are working on the basis of an
assumption or a fact, keeping an open mind is much more than that. It means being
careful to not make assumptions which are based on flimsy or non-existent evidence
and it means to be on the lookout for clues that are not compatible with your current
assumptions so that you are able to drop faulty assumptions more rapidly.
Isolate the problem
Consider temporarily simplifying the cluster in order to remove elements which may be
confusing the issue at hand. Keep in mind that your simplifications may change the
situation enough that the problem vanishes. This does not necessarily mean that the
elements which you removed were part of the problem's cause, as their removal may
simply have changed the relative timing of key events such that the bad sequence of
events no longer occurs.
Go from the simple to the complex
Most problems are actually simple problems. Do not start to develop elaborate theories
of what went wrong until you have demonstrated that the simpler possibilities did not
cause the problem to occur.
Make one change at a time
When you believe that you understand the problem, make small changes to the cluster,
which are each intended to eliminate some aspect of the problem, and then verify that
they had the intended effect. If the small changes are not having the intended effect,
then your diagnosis of what is at fault might be wrong. Also, it is far easier to back out a
few simple changes than to back out a long series of changes if it should turn out that
your diagnosis is wrong.


Stick to a few simple troubleshooting tools


Although sophisticated tools are often useful and sometimes even essential, trying to
use tools which you are not extremely comfortable with is likely to increase the time that
it takes to resolve the problem. Stick to the tools that you are comfortable with but be
prepared to learn new tools if it should become necessary to do so (just make sure that
it is truly necessary and not just a chance to try out a new toy).
Do not neglect the obvious
Pay attention to the most obvious indications that you have a problem and, at least
initially, focus on what they seem to suggest as obvious places to start. For example, an
error message about a disk I/O problem or the inability to access a data file is unlikely to
have anything to do with a networking problem. On the other hand, it is possible that
disk I/O problems have caused your non-IP target mode SSA network to fail (in other
words, the problem is usually obvious but not necessarily obvious).
Watch for what the cluster is not doing
Also known as watching out for the dog that didn't bark (a reference to Arthur Conan
Doyle's Sherlock Holmes story Silver Blaze, in which a key clue involves a dog that did
not bark during the commission of the crime but would normally have been expected to
do so in the situation at hand). Watch for messages that should appear given your
current assumptions. If they do not appear (in other words, if the dog does not bark),
then your assumptions may be faulty.
Keep a record of the tests you have completed
If the problem is truly simple, then you might be able to find it within a few minutes. If the
search takes longer than about fifteen minutes, then it is probably time to start taking
notes of what you are doing (also include a list of your assumptions so that you can
review them later to see which ones are starting to look doubtful). If finding and fixing
the problem should happen to turn into a major adventure then the ability to look back
on what you did (as opposed to what you vaguely remember doing) could prove
extremely useful.
Important
Finally, remember that many cluster problems are the result of poor cluster design,
untrained cluster administrators or the lack of a proper change control methodology.
Without a doubt, the easiest and fastest way to deal with a problem is to ensure that it
cannot happen in the first place.


Contacting IBM for support

Before contacting IBM about a support issue, collect the following information:

  Item                                                                Checked
  EXACT error messages that appear in HACMP logs such as
    hacmp.out or on the console
  Your cluster diagram or Planning Worksheets (updated)
  A snapshot of your current cluster configuration (not a photo)
  Details of any customization performed to HACMP events
  Details of current AIX, HACMP and application software levels
  Details of any PTFs applied to HACMP or AIX on the cluster
  The adapter microcode levels (especially for storage adapters)
  Cluster planning worksheets, with all components clearly labeled
  A network topology diagram for the network as far as the users
  Copies of all HACMP log files (snap -e command)
Figure 10-26. Contacting IBM for support

Notes:
What to do when contacting IBM
The visual above summarizes the steps. It is a very good idea to collate as much of this
information as possible in advance of having a problem, especially snapshots and the
cluster diagram. If you have not already assembled this information at your office for
your existing clusters, you are strongly recommended to do so as soon as you get back.

Updating your planning worksheets

To update your planning worksheets, if you are using the Online Planning Worksheets,
you can export the HACMP ODM (or a snapshot, with HACMP 5.3) to the planning
worksheets using the smit path Extended Configuration -> Export Definition File for Online
Planning Worksheets (or the path Extended Configuration -> Snapshot
Configuration -> Convert Existing Snapshot For Online Planning Worksheets).
The file should have a name of the form name.haw. The default location is
/var/hacmp/log.
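To gather the HACMP log files themselves (the snap -e command named in the checklist; by default snap writes its output under /tmp/ibmsupt):

# snap -e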

Checkpoint

1. What is the most common cause of cluster failure? (Select all that apply.)
   a. Bugs in AIX or HACMP
   b. Cluster administrator error
   c. Marauding space aliens from another galaxy
   d. Cosmic rays
   e. Poor/inadequate cluster design

2. True or False?
   Event emulation can emulate all cluster events.

3. If the cluster manager process dies, what will happen to the cluster node?
   a. It continues running but without HACMP to monitor and protect it.
   b. It continues running AIX but any resource groups will fallover.
   c. Nobody knows because this has never happened before.
   d. The System Resource Controller sends an e-mail to root and issues a halt -q.
   e. The System Resource Controller sends an e-mail to root and issues a shutdown -F.

4. True or False?
   A non-IP network is strongly recommended. Failure to include a non-IP network can
   cause the cluster to fail or malfunction in rather ugly ways.
Figure 10-27. Checkpoint

Notes:


Unit summary
Having completed this unit, you should be able to:
List reasons why HACMP can fail
Identify configuration and administration errors
Explain why the Dead Man's Switch invokes
Explain when the System Resource Controller will kill a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

Figure 10-28. Unit summary

Notes:


Appendix A. Checkpoint solutions


Unit 1 - Introduction to HACMP for AIX

Let's review solutions


1. Which of the following items are examples of topology components in HACMP? (Select
   all that apply.)
   a. Node
   b. Network
   c. Service IP label
   d. Hard disk drive

2. True or False?
   All nodes in an HACMP cluster must have roughly equivalent performance
   characteristics.

3. Which of the following is a characteristic of high availability?
   a. High availability always requires specially designed hardware components.
   b. High availability solutions always require manual intervention to ensure
      recovery following fallover.
   c. High availability solutions never require customization.
   d. High availability solutions use redundant standard equipment (no specialized
      hardware).

4. True or False?
   A thorough design and detailed planning is required for all high availability solutions.


Unit 1 - Introduction to HACMP for AIX

Checkpoint solutions
1. True or False?
Resource Groups can be moved from node to node.
2. True or False?
HACMP/XD is a complete solution for building
geographically distributed clusters.
3. Which of the following capabilities does HACMP not
provide? (Select all that apply.):
a. Time synchronization
b. Automatic recovery from node and network adapter failure
c. System Administration tasks unique to each node; back-up
and restoration
d. Fallover of just a single resource group

4. True or False?
All nodes in a resource group must have equivalent
performance characteristics.

Unit 2 - Network considerations for high availability

Let's review: Topic 1 solutions


1. How does HACMP use networks? (Select all that apply.)
   a. Provide client systems with highly available access to the cluster's applications
   b. Detect failures
   c. Diagnose failures
   d. Communicate between cluster nodes
   e. Monitor network performance
2. Using information from RSCT, HACMP directly handles only three types of failures:
   network interface card (NIC) failures, node failures, and network failures.
3. True or False?
   Heartbeat packets must be acknowledged or a failure is assumed to have occurred.
4. True or False?
   Clusters should include a non-IP network.
5. True or False?
   Each NIC on each physical IP network on each node is required to have an IP
   address on a different logical subnet.


Unit 2 - Network considerations for high availability

Let's review: Topic 2 solutions


1. True or False?
   Clusters must always be configured with a private IP network for HACMP
   communication.

2. Which of the following options are true statements about communication interfaces?
   (Select all that apply.)
   a. Has an IP address assigned to it using the AIX TCP/IP SMIT screens
   b. Might have more than one IP address associated with it
   c. Sometimes but not always used to communicate with clients
   d. Always used to communicate with clients

3. True or False?
   Persistent node IP labels are not supported for IPAT via IP replacement.

4. True or False?
   There are no exceptions to the rule that, on each node, each NIC on the same LAN
   must have an IP address in a different subnet.
   (The HACMP 5.1 heartbeat over IP aliases feature is the exception to this rule.)

Unit 2 - Network considerations for high availability

Let's review: Topic 3 solutions


1. True or False?
   A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.

2. True or False?
   All networking technologies supported by HACMP support IPAT via IP aliasing.

3. True or False?
   All networking technologies supported by HACMP support IPAT via IP replacement.

4. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and the
   right hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2, then
   which of the following options are valid service IP addresses if IPAT via IP aliasing is
   being used? (Select all that apply.)
   a. (192.168.20.3 and 192.168.20.4) or (192.168.21.3 and 192.168.21.4)
   b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
   c. 192.168.22.3 and 192.168.22.4
   d. 192.168.23.3 and 192.168.24.3

5. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1 and the
   right hand node has NICs with the IP addresses 192.168.20.2 and 192.168.21.2, then
   which of the following options are valid service IP addresses if IPAT via IP replacement is
   being used? (Select all that apply.)
   a. (192.168.20.3 and 192.168.20.4) or (192.168.21.3 and 192.168.21.4)
   b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
   c. 192.168.22.3 and 192.168.22.4
   d. 192.168.23.3 and 192.168.24.3

Unit 2 - Network considerations for high availability

Checkpoint solutions
1. True or False?
Clients are required to exit and restart their application after a
fallover.

2. True or False?
All client systems are potentially directly affected by the ARP cache
issue.

3. True or False?
clinfo must not be run both on the cluster nodes and on the
client systems.

4. If clinfo is run by cluster nodes to address ARP cache issues, you must add the list
   of clients to ping to either the /etc/cluster/ping_client_list file or the
   /usr/es/sbin/cluster/etc/clinfo.rc file.


Unit 3 - Shared storage considerations for high availability

Let's review: Topic 1 solutions


1. Which of the following statements is true (select all that apply)?
   a. Static application data should always reside on private storage.
   b. Dynamic application data should always reside on shared storage.
   c. Shared storage must always be simultaneously accessible in read-write mode to
      all cluster nodes.
   d. Application binaries should only be placed on shared storage.

2. True or False?
   Using RSCT-based shared disk protection results in slower fallovers.

3. True or False?
   Ghost disks must be checked for and eliminated immediately after every cluster
   fallover or fallback.


Unit 3 - Shared storage considerations for high availability

Let's review: Topic 2 solutions


1. Which of the following disk technologies are supported by HACMP?
   a. SCSI
   b. SSA
   c. FC
   d. All of the above

2. True or False?
   SSA disk subsystems can support RAID5 (cache-enabled) with HACMP.

3. True or False?
   Compatibility must be checked when using different SSA adapters in the same loop.

4. True or False?
   No special considerations are required when using SAN based storage units
   (DS8000, ESS, EMC HDS, and so forth).

5. True or False?
   hdisk numbers must map to the same PVIDs across an entire HACMP cluster.

Unit 3 - Shared storage considerations for high availability

Checkpoint solutions
1. True or False?
   Lazy update attempts to keep VGDA constructs in sync between cluster nodes
   (reserve/release-based shared storage protection).

2. Which of the following commands will bring a volume group online?
   a. getvtg <vgname>
   b. mountvg <vgname>
   c. attachvg <vgname>
   d. varyonvg <vgname>

3. True or False?
   Quorum should always be disabled on shared volume groups.

4. True or False?
   Filesystem and logical volume attributes cannot be changed while the cluster is
   operational.

5. True or False?
   An enhanced concurrent volume group is required for the heartbeat over disk feature.

Unit 4 - Planning for applications and resource groups

Checkpoint solutions
1. True or False?
   Applications are defined to HACMP in a configuration file that lists what binary to use.

2. What policies would be the best to use for a 2-node active-active cluster using IPAT
   to minimize both applications running on the same node?
   a. home, next, never
   b. first, next, higher
   c. distribution, next, never
   d. all, error, never
   e. home, next, higher

3. Which type of data should not be placed in private data storage?
   a. Application log data
   b. License file
   c. Configuration files
   d. Application binaries

4. Which policy is not a Run-time policy?
   a. Settling
   b. Delayed Fallback Timer
   c. Dynamic Node Priority


Unit 5 - HACMP installation

Let's review solutions


1. What is the first step in implementing a cluster?
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap
2. True or False?
HACMP 5.4.1 is compatible with any version of AIX V5.x.
3. True or False?
Each cluster node must be rebooted after the HACMP software is
installed.
4. True or False?
You should take careful notes while you install and configure
HACMP so that you know what to test when you are done.
*There is some dispute about whether the correct answer is b or e although a
disconcerting number of clusters are implemented in the order a, b, c, d, e (how can you
possibly order the hardware if you do not yet know what you are going to build?) or even
just a, c, d (cluster implementers who skip step b rarely have time for long naps).

Unit 5 - HACMP installation

Checkpoint solutions
1. Which component detects an adapter failure?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

2. Which component provides SNMP information?
   a. Cluster Manager
   b. RSCT
   c. clsmuxpd
   d. clinfo

3. Which component is required for clstat to work?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

4. Which component removes the requirement for the /.rhosts file?
   a. Cluster Manager
   b. RSCT
   c. clcomd
   d. clinfo

Unit 6 - Initial cluster configuration

Checkpoint solutions
1. True or False?
   It is possible to configure a recommended simple two-node cluster environment
   using just the standard configuration path.
   (You can't create the non-IP network from the standard path.)

2. In which of the top-level HACMP menu choices is the menu for starting and
   stopping cluster nodes?
   a. Initialization and Standard Configuration
   b. Extended Configuration
   c. System Management (C-SPOC)
   d. Problem Determination Tools

3. In which of the top-level HACMP menu choices is the menu for defining a non-IP
   heartbeat network?
   a. Initialization and Standard Configuration
   b. Extended Configuration
   c. System Management (C-SPOC)
   d. Problem Determination Tools

4. True or False?
   It is possible to configure HACMP faster by having someone help you on the other
   node.

5. True or False?
   You must specify exactly which filesystems you want mounted when you put
   resources into a resource group.

Unit 7 - Basic HACMP administration

Let's review: Topic 1 solutions


1. True or False?
   You cannot add a node while HACMP is running.

2. You have decided to add a third node to your existing two-node HACMP cluster.
   What very important step follows adding the node definition to the cluster configuration
   (whether through Standard or Extended Path)?
   a. Take a well deserved break, bragging to co-workers about your success.
   b. Install HACMP software.
   c. Configure a non-IP network.
   d. Start Cluster Services on the new node.
   e. Add a resource group for the new node.

3. Why would you choose to use the Extended Path to add resources to a resource
   group versus the Standard Path?
   If you need access to the fields that are not shown in the Standard Path (like for
   NFS or to set Filesystems mounted before IP configured).

Unit 7 - Basic HACMP administration

Let's review: Topic 2 solutions


1. True or False?
   Using C-SPOC reduces the likelihood of an outage by reducing the likelihood that
   you will make a mistake.

2. True or False?
   C-SPOC reduces the need for a change management process.

3. C-SPOC cannot do which of the following administration tasks?
   a. Add a user to the cluster
   b. Change the size of a filesystem
   c. Add a physical disk to the cluster
   d. Add a shared volume group to the cluster
   e. Synchronize existing passwords
   f. None of the above

4. True or False?
   It does not matter which node in the cluster is used to initiate a C-SPOC operation.

5. Which log file provides detailed output on HACMP event script execution?
   a. /tmp/clstrmgr.debug
   b. /tmp/hacmp.out
   c. /var/adm/cluster.log

Unit 7 - Basic HACMP administration

Let's review: Topic 3 solutions


1. True or False?
   DARE operations can be performed while the cluster is running.

2. Which operations can DARE not perform (select all that apply)?
   a. Changing the name of the cluster
   b. Removing a node from the cluster
   c. Changing a resource in a resource group
   d. Change whether a network uses IPAT via IP aliasing or via IP replacement

3. True or False?
   It is possible to roll back from a successful DARE operation using an automatically
   generated snapshot.

4. True or False?
   Running a DARE operation requires three separate copies of the HACMP ODM.

5. True or False?
   Cluster snapshots can be applied while the cluster is running.

6. What is the purpose of the dynamic reconfiguration lock?
   a. To prevent unauthorized access to DARE functions
   b. To prevent further changes being made until a DARE operation has completed
   c. To keep a copy of the previous configuration for easy rollback
Unit 7 - Basic HACMP administration

Checkpoint solutions

1. True or False?
   A star configuration is a good choice for your non-IP networks.
   Answer: False.

2. True or False?
   Using DARE, you can change from IPAT via aliasing to IPAT via replacement
   without stopping the cluster.
   Answer: False.

3. True or False?
   RSCT will automatically update /etc/filesystems when using enhanced
   concurrent mode volume groups.
   Answer: False.

4. True or False?
   With HACMP V5.4, a resource group's priority override location can be
   cancelled by selecting a destination node of Restore_Node_Priority_Order.
   Answer: False. (With HACMP 5.4, moving a resource group no longer sets a
   priority override location.)

5. You want to create an Enhanced Concurrent Mode Volume Group that will be
   used in a Resource Group that will have an Online on Home Node Startup
   policy. Which C-SPOC menu should you use?
   a. HACMP Logical Volume Management
   b. HACMP Concurrent Logical Volume Management
   Answer: b. HACMP Concurrent Logical Volume Management

6. You want to add a logical volume to the volume group you created in the
   question above. Which C-SPOC menu should you use?
   a. HACMP Logical Volume Management
   b. HACMP Concurrent Logical Volume Management
   Answer: a. HACMP Logical Volume Management

Unit 8 - Events

Let's review solutions

1. Which of the following are examples of primary HACMP events (select all that
   apply)?
   a. node_up
   b. node_up_local
   c. node_up_complete
   d. start_server
   e. rg_up
   Answer: a and c.

2. When a node joins an existing cluster, what is the correct sequence for these
   events?
   a. node_up on new node, node_up on existing node, node_up_complete on new
      node, node_up_complete on existing node
   b. node_up on existing node, node_up on new node, node_up_complete on new
      node, node_up_complete on existing node
   c. node_up on new node, node_up on existing node, node_up_complete on
      existing node, node_up_complete on new node
   d. node_up on existing node, node_up on new node, node_up_complete on
      existing node, node_up_complete on new node
   Answer: b.

Unit 8 - Events

Checkpoint solutions

1. Which of the following runs if an HACMP event script fails?
   (select all that apply)
   a. Pre-event scripts
   b. Post-event scripts
   c. Error notification methods
   d. Recovery commands
   e. Notify methods
   Answer: d and e.

2. How does an event script get started?
   a. Manually by an administrator
   b. Called by the SNMP SMUX (clsmuxpd)
   c. Called by the cluster manager using a recovery program
   d. Called by the topology services daemon
   Answer: c.

3. True or False?
   Pre-event scripts are automatically synchronized.
   Answer: False.

4. True or False?
   Writing error notification methods is a normal part of configuring a cluster.
   Answer: True.
Unit 9 - Integrating NFS into HACMP

Checkpoint solutions

1. True or False? *
   HACMP supports all NFS export configuration options.
   Answer: False.*

2. Which of the following is a special consideration when using HACMP to NFS
   export filesystems? (select all that apply)
   a. NFS exports must be read-write.
   b. Secure RPC must be used at all times.
   c. A cluster may not use NFS cross-mounts if there are client systems accessing
      the NFS exported filesystems.
   d. A volume group that contains filesystems that are NFS exported must have
      the same major device number on all cluster nodes in the resource group.
   Answer: d.

3. What does [/abc;/xyz] mean when specifying a directory to cross-mount?
   a. /abc is the name of the filesystem that is exported and /xyz is where it should
      be mounted
   b. /abc is where the filesystem should be mounted, and /xyz is the name of the
      filesystem that is exported
   Answer: b.

4. True or False? **
   HACMP's NFS exporting feature supports only clusters of two nodes.
   Answer: False.**

5. True or False?
   IPAT is required in resource groups that export NFS filesystems.
   Answer: True.

* /usr/es/sbin/cluster/exports must be used to specify NFS export options if the
  default of "read write to the world" is not acceptable.
** Resource groups larger than two nodes that export NFS filesystems do not
   provide full NFS functionality (for example, NFS file locks are not preserved
   across a fallover).
Unit 10 - Problem determination and recovery

Checkpoint solutions

1. What is the most common cause of cluster failure? (Select all that apply.)
   a. Bugs in AIX or HACMP
   b. Cluster administrator error
   c. Marauding space aliens from another galaxy
   d. Cosmic rays
   e. Poor/inadequate cluster design
   Answer: b (see the note below).*

2. True or False?
   Event emulation can emulate all cluster events.
   Answer: False.

3. If the cluster manager process dies, what will happen to the cluster node?
   a. It continues running but without HACMP to monitor and protect it.
   b. It continues running AIX but any resource groups will fallover.
   c. Nobody knows because this has never happened before.
   d. The System Resource Controller sends an e-mail to root and issues a halt -q.
   e. The System Resource Controller sends an e-mail to root and issues a
      shutdown -F.
   Answer: d.

4. True or False?
   A non-IP network is strongly recommended. Failure to include a non-IP network
   can cause the cluster to fail or malfunction in rather ugly ways.
   Answer: True.

* The correct answer is almost certainly "cluster administrator error", although
  "poor/inadequate cluster design" would be a very close second.
Appendix C - IPAT via IP replacement

Checkpoint solutions

1. For IPAT via replacement (select all that apply)
   a. Each service IP address must be in the same subnet as one of the
      non-service addresses
   b. Each service IP address must be in the same subnet
   c. Each service IP address cannot be in any non-service address subnet
   Answer: a and b.

2. True or False?
   If the takeover node is not the home node for the resource group and the
   resource group does not have a Startup policy of Online Using Distribution
   Policy, the service IP address replaces the IP address of a NIC with an IP
   address in the same subnet as the subnet of the service IP address.
   Answer: False. In that case the service IP address replaces the address of a
   NIC in a different subnet.

3. True or False?
   In order to use HWAT, you must enable and complete the ALTERNATE
   ETHERNET address field in the SMIT devices menu.
   Answer: False. The alternate hardware address is configured in the HACMP
   SMIT screens; the adapter keeps its factory address at AIX boot time.

4. True or False?
   You must stop the cluster in order to change from IPAT via aliasing to IPAT via
   replacement.
   Answer: True.
Appendix B. Release Notes for HACMP 5.4.1


====================================================================
Release Notes for IBM High Availability Cluster Multi-Processing
(HACMP) for AIX 5L, Release 5.4.1, November, 2007
Last updated, 09/20/2007
====================================================================
These Release Notes contain the latest information about the HACMP software. The
following topics are discussed:

Enhancements of the HACMP software
Installation and Migration Notes
HACMP Configuration Restrictions
Notes on Functionality
Required Release of AIX 5L for HACMP 5.4.1
HACMP 5.4.1 Documentation
Product Directories Loaded
Product Man Pages
Accessing IBM on the Web
Feedback

==========================================
Enhancements of the HACMP Software
==========================================

------------------------------------
5.4.1 Enhancements
------------------------------------

Integrated support for utilizing AIX Workload Partitions (WPARs) to maintain high
availability for your applications by configuring them as a resource group and assigning
the resource group to an AIX WPAR. By using HACMP in combination with AIX WPAR,
you can leverage the advantages of application environment isolation and resource
control provided by AIX WPAR along with the high availability features of HACMP
V5.4.1.

HACMP/XD support of PPRC Consistency Groups to maintain data consistency for
application-dependent writes on the same logical subsystem (LSS) pair or across
multiple LSS pairs. HACMP/XD responds to PPRC consistency group failures by
automatically freezing the pairs and managing the data mirroring.

A new Geographical Logical Volume Manager (GLVM) Status Monitor that provides the
ability to monitor GLVM status and state. These monitors enable you to keep better
track of the status of your application data when using the HACMP/XD GLVM option for
data replication.

Improved support for NFS V4, which includes additional configuration options, as well
as improved recovery time. HACMP can support both NFS V4 and V2/V3 within the
same high availability environment.

Usability improvements for the WebSMIT Graphical User Interface, which include the
ability to customize the color and appearance of the display. Improvements to First
Failure Data Capture and additional standardized logging increase the reliability and
serviceability of HACMP 5.4.1.

New options for detecting and responding to a partitioned cluster. Certain failures or
combinations of failures can lead to a partitioned cluster, which, in the worst case, can
lead to data divergence (out-of-sync data between the primary and backup nodes in a
cluster). HACMP V5.4.1 introduces new features for detecting a partitioned cluster and
avoiding data divergence through earlier detection and reporting.

Serviceability improvements for HACMP. New log files have been added. The default
locations of all managed log files have been moved to a subdirectory of /var/hacmp.

------------------------------------
5.4.0 Enhancements
------------------------------------

HACMP for Linux 5.4


For more information, see the HACMP for Linux 5.4 Installation and Administration
Guide or the HACMP for Linux 5.4 release notes.
HACMP Smart Assist Programs now support Automatic Discovery.
For more information, see the Smart Assist guides or the Release Notes for HACMP for
AIX 5L version 5.4 Smart Assists.
Improved management of Stopping and Starting HACMP Cluster Services:
- Start cluster services without stopping applications. This is also referred to as
nondisruptive startup.
- Start and restart cluster services automatically according to how you define the
resources.
- Stop cluster services and also bring the resources and applications offline, move
them to other nodes, or keep them running on the same nodes (but stop managing
them for high availability).
- Terminology that describes stopping cluster services has changed:
Instead of stopping cluster services gracefully, this option is known as stopping
cluster services and bringing resource groups offline. The cluster services are
stopped.
Instead of stopping cluster services gracefully with takeover, this option is known
as stopping cluster services and moving the resource groups to other nodes.
Instead of a forced down, this option is known as stopping cluster services
immediately and placing resource groups in an unmanaged state. This option
leaves resource groups on the local node active.
Resource Group Management (clRGmove) improvements
- Improved SMIT interface.
- Easier to move the resource groups for cluster management.
- When you move a resource group, you can move it without setting the Priority
Override Location (POL) for the node to which it was moved. POL is a setting you
had to specify for manually moved resource groups in releases prior to HACMP 5.4.
- Improved handling of non-concurrent resource groups with No Fallback resource
group and site policies.
- Clear method to maintain the previously configured behavior for a resource group.
- Improved status and troubleshooting with WebSMIT and clRGinfo.
Verification enhancements
- The final verification report lists any nodes, networks and/or network interfaces that
are in the 'failed' state at the time that cluster verification is run. The final verification
report also lists other 'failed' components, if accessible from the Cluster Manager,
such as applications, resource groups, sites, and application monitors that are in the
suspended state.
- Volume group verification checks have been restructured for faster processing.
- Messages have been reformatted for consistency and to remove repetitious entries.
- New Verification checks:
Can each node reach each other node in the cluster through non-IP
connections?
Are netmasks and broadcast addresses valid?

Are all Volume Groups and PVIDs on the vpath devices?


Is the distribution preference collocation or anti-collocation with persistent label
used when persistent labels have not been defined?
WebSMIT Application
- New WebSMIT framework for the user interface
- Graphical representation of resource groups and their dependencies
- Graphical representation of cluster site, network, and node information
- Ability to view the cluster configuration and the cluster status simultaneously
- Ability to navigate the running cluster
- Assisted WebSMIT set up
- Full support for Mozilla-based browsers Mozilla 1.7.3 for AIX and FireFox 1.0.6
- Ability to specify a group of users who can view but not modify the configuration.
Cluster Test Tool Enhancements
- Supports more cluster events.
- Has specific test plans for running site tests, non-IP network tests, IP network tests,
and volume group tests, and extends the logic in the automated test tool to run
these test plans as appropriate based on the cluster configuration.
Fast Failure Detection Method Enhancements
- Improves the speed and reliability of the detection of a node failure.
- Fast failure detection can be turned on in SMIT. It is disabled by default. See
Chapter 13 in the Administration Guide for information on how to enable it.
Geographic Distance Capability Enhancements for clusters with HACMP/XD for GLVM
(See separate XD Release Notes for full description)
- Enhanced Concurrent Volume Groups within a site
- The ability to have up to four XD_data data mirroring networks improves reliability
and mirroring performance in an HACMP cluster
- IP Address Takeover (IPAT) via IP Aliases (default) on XD networks
High Availability Cluster Multi-Processing Extended Distance (HACMP/XD) for Metro
Mirror
- Increased data availability for IBM TotalStorage Enterprise Storage Server (DS and
ESS) volumes that use Peer-to-Peer Remote Copy (PPRC) to copy data to a remote
site for disaster recovery purposes. You can now use an intermix of supported DS
units.
GEO_primary and GEO_secondary networks (HAGEO) - automatically changed to XD
Networks when you upgrade to HACMP 5.4.
- You can configure multiple XD_data networks for the mirroring function. IPAT via IP
Aliases is not allowed for HAGEO.
- You can configure multiple XD_rs232 networks for cluster heartbeating.

====================================
Installation and Migration Notes
====================================

For HACMP version 5.4, planning and installation information is split into two separate
guides: the Planning Guide and the Installation Guide.
HACMP for Linux: Installation and Administration Guide v5.4 is the first edition of a new
manual.
The Online Planning Worksheets (OLPW) application is now available for download
from the installable image, worksheets.jar, that is located at this URL:
http://www.ibm.com/systems/p/ha/ha_olpw.html
Once you accept the license agreement, locate the worksheets.jar file and click on it.
Or, run the following command from the AIX 5L command line:
java -jar worksheets.jar
You can apply a PTF to HACMP 5.4.1 on an individual node using rolling migration,
while your critical applications and resources continue running on that node although
they will not be highly available during the upgrade.
Methods of installation and migration supported in previous releases of HACMP are still
supported.

------------------------------------
5.4.1 migration restriction
------------------------------------

HACMP 5.4.1 is a modification release. There are both base-level filesets and update (ptf)
images. Users should use a consistent method for upgrading their HACMP cluster nodes.
Do not mix base-level filesets on some nodes and update (ptf) images on others in the
same cluster.

-------------------------------------------------------------------------

Enable grace periods and restart nfsd after installing cluster.es.nfs.rte


-------------------------------------------------------------------------

This note applies only to 64-bit systems. You may ignore this note if all of your cluster
nodes are 32 bit.

The NFS daemon nfsd must be restarted on each cluster node with grace periods enabled
after installing cluster.es.nfs.rte before configuring NFSv4 exports. This step is required;
otherwise, NFSv4 exports will fail to export with the misleading error message
exportfs: <export_path>: No such file or directory

The following commands enable grace periods and restart the NFS daemon.
chnfs -I -g on -x
stopsrc -s nfsd
startsrc -s nfsd

Please note that this will impact the availability of all exported filesystems on the machine;
therefore, the best time to perform this step is when all resource groups with NFS exports
are offline or failed over to another node in the resource group.
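
To confirm that the restart took effect before configuring NFSv4 exports, standard AIX
commands can be used; a minimal check (sketch):

lssrc -s nfsd    # the nfsd subsystem should be shown as active
exportfs         # list what is currently exported, if anything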

-----------------------------------------------------
Clstat cluster node status for 'forced down' nodes
-----------------------------------------------------

The behavior of stopping a cluster node with the option to unmanage resource groups
(previously known as the force option) was significantly modified with HACMP 5.4.0. Prior
to the HACMP 5.4.0 release, this operation did bring the cluster manager daemon to a
stopped state, and this was reflected by clstat showing the cluster node's status as
DOWN.
The modifications to this feature in HACMP 5.4.0 necessitated leaving the cluster manager
daemon in an online state (there were multiple motivations for this change in behavior;
one was that it was required to allow Enhanced Concurrent Mode Volume Groups to
remain online). Consequently, clstat run on an HACMP 5.4.0 or later cluster will display
such cluster nodes' status as UP instead of DOWN as they were displayed before HACMP
5.4.0.

One other thing to keep in mind is that when migrating a cluster from before HACMP 5.4.0
to HACMP 5.4.0 or later, where some nodes are pre-5.4.0 and others are 5.4.0 or later,
clstat run on cluster node A will display cluster node B's status following the conventions of
cluster node A. For example, if clstat is run on a 5.4.1 cluster node, it will display all forced
down nodes as UP, whereas running the same clstat command on an HACMP 5.3.0 cluster
node will show those same nodes as DOWN.
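
To see how a given node reports the status of its peers, clstat can be run on that node;
a sketch, assuming the usual HACMP install path (-a requests the ASCII display rather
than the X11 interface):

/usr/es/sbin/cluster/clstat -a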

------------------------------------
IMPORTANT NOTE ON UPGRADING
------------------------------------

Install the HACMP 5.3 APAR IY85489 to avoid having to start a 5.3 node from a 5.4 node.
Unless you have this APAR, when you have upgraded any node to HACMP 5.4, if you need
to start a 5.3 node while any 5.4 nodes are active, you must start the 5.3 node from a 5.4
node.
If you are upgrading and have nodes that are 5.2 or earlier and must start the 5.2 or earlier
node, start it from a downlevel node of 5.2 or lower.

==============================================
Required Release of AIX 5L for HACMP 5.4.1
==============================================

AIX 5L 5.2 ML8 with RSCT version 2.3.9 (APAR IY84921) or higher
AIX 5L 5.3 ML4 with RSCT version 2.4.5 (APAR IY84920) or higher
AIX 5L 6.1 with RSCT version 2.5.0 or higher
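
Before installing or migrating, each node's AIX and RSCT levels can be checked against
this list; a minimal sketch using standard AIX commands:

oslevel -r           # reports the AIX level, for example 5300-04
lslpp -l "rsct.*"    # lists the installed RSCT filesets and their versions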

==============================================
HACMP Configuration Restrictions
==============================================

HACMP configuration restrictions remain the same as in previous releases and are as
follows:
Maximum nodes per cluster: 32
Maximum number of sites: 2
Minimum number of nodes per site: 1
======================
Notes on Functionality
======================

Fast failure detection


- The fast failure detection function is restricted to use with DS8000, DS6000, ES 800,
SVC, and DS4000 disk types. This function is not supported on SSA disks.
SNMP
- HACMP updates SNMP during installation. Therefore, HACMP stops and starts the
SNMP daemon during installation and deinstallation. This will require that you
restart any SNMP client applications that communicate with the node on which you
are installing HACMP.

===================================
HACMP 5.4.1 Documentation
===================================
--------------------------------
Order Numbers and Document Names
--------------------------------
Order numbers for 5.4.1 documentation are as follows:

Concepts and Facilities Guide                               SC23-4864-10
Planning Guide                                              SC23-4861-10
Installation Guide                                          SC23-5209-01
Administration Guide                                        SC23-4862-10
Troubleshooting Guide                                       SC23-5177-04
Master Glossary                                             SC23-4867-09
Programming Client Applications                             SC23-4865-10
HACMP/XD: Metro Mirror Planning and Administration Guide    SC23-4863-11
HACMP/XD GLVM Planning and Administration Guide             SA23-1338-06
Smart Assist for WebSphere User's Guide                     SC23-4877-08
Smart Assist for Oracle                                     SC23-5178-04

Smart Assist for DB2                                        SC23-5179-04
Smart Assist Developer's Guide                              SC23-5210-01
HACMP for LINUX Installation and Administration Guide       SC23-5211-01

------------------------------------------------------
Important Note: Online Documentation for HACMP 5.4.1
------------------------------------------------------

The HACMP 5.4.1 documentation found on the product media and at
http://www-03.ibm.com/systems/p/library/hacmp_docs.html is delivered only in PDF
format for this release.

Documentation for HACMP 5.4.1 is supplied in PDF format. You may want to install the
documentation before doing the full install of the product, to read the chapters on
installation procedures or the description of migration.

----------------------------------------------
Viewing and installing the documentation files
----------------------------------------------

You can view the PDF documentation before installing the product.

Take the following steps to install the documentation:


1. At the command line, enter: smit install_selectable_all
SMIT asks for the input device/directory for software.
2. Select the CD-ROM drive from the picklist and press Enter.
3. On the next SMIT screen with the cursor on Software to Install, press the F4 key.
4. SMIT lists the image cluster.doc.en_US fileset with its subdirectories:
The individual lines under the image name (for example, cluster.doc.en_US.es.pdf) are the
filesets that can be installed.

Image cluster.doc.en_US.es
--------------------------
cluster.doc.en_US.es.html       HAES Web-based HTML Documentation - U.S. English
cluster.doc.en_US.es.pdf        HAES PDF Documentation - U.S. English

Image cluster.doc.en_US.glvm
----------------------------
cluster.doc.en_US.glvm.html     HACMP GLVM HTML Documentation - U.S. English
cluster.doc.en_US.glvm.pdf      HACMP GLVM PDF Documentation - U.S. English

Image cluster.doc.en_US.pprc
----------------------------
cluster.doc.en_US.pprc.html     PPRC Web-based HTML Documentation - U.S. English
cluster.doc.en_US.pprc.pdf      PPRC PDF Documentation - U.S. English

Image cluster.doc.en_US.assist
------------------------------
cluster.doc.en_US.assist.db2.html         HACMP Smart Assist for DB2 HTML
                                          Documentation - U.S. English
cluster.doc.en_US.assist.db2.pdf          HACMP Smart Assist for DB2 PDF
                                          Documentation - U.S. English
cluster.doc.en_US.assist.oracle.html      HACMP Smart Assist for Oracle HTML
                                          Documentation - U.S. English
cluster.doc.en_US.assist.oracle.pdf       HACMP Smart Assist for Oracle PDF
                                          Documentation - U.S. English
cluster.doc.en_US.assist.websphere.html   HACMP Smart Assist for WebSphere HTML
                                          Documentation - U.S. English
cluster.doc.en_US.assist.websphere.pdf    HACMP Smart Assist for WebSphere PDF
                                          Documentation - U.S. English

After you install the documentation, store it on a server that is accessible through the
Internet. You can view the documentation in the Mozilla Firefox browser.

Note: Installing all of the documentation requires about 46 MB of space in the /usr
filesystem. (PDF files = 26 MB, HTML files = 20 MB.)

5. Select all filesets that you wish to install and execute the command.

The documentation is installed in the following directory:

/usr/share/man/info/en_US/cluster/HAES
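
The same filesets can also be installed non-interactively with installp rather than SMIT;
a minimal sketch, assuming the install media is mounted as /dev/cd0 (substitute your own
device or directory):

installp -acgXd /dev/cd0 cluster.doc.en_US.es.pdf    # -a apply, -c commit, -g prereqs, -X expand filesystems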

The titles of the HACMP for AIX 5L products, Version 5.4.1, documentation set are:

HACMP Version 5.4.1: Concepts and Facilities Guide (filename = ha_concepts)


HACMP Version 5.4.1: Planning Guide (filename = ha_plan)
HACMP Version 5.4.1: Installation Guide (filename = ha_install)
HACMP Version 5.4.1: Administration Guide (filename = ha_admin)
HACMP Version 5.4.1: Troubleshooting Guide (filename = ha_troubleshoot)
HACMP Version 5.4.1: Programming Client Applications (filename = ha_clients)
HACMP Version 5.4.1: Glossary (filename = ha_glossary)
HACMP/XD for MetroMirror: Planning and Administration Guide
(filename = ha_xd_pprc)
HACMP/XD for GLVM: Planning and Administration Guide (filename = ha_xd_glvm)

HACMP for LINUX Installation and Administration Guide (filename = ha_linux)

---------------------------------------------
Accessing Documentation
---------------------------------------------

You can access the documentation in PDF format.

NOTE: The Smart Assist Guides and the Smart Assist Developer's Guide are installed with
the base fileset. They are described in separate Smart Assist release notes.

Use the following command to determine the exact files loaded into product directories
when installing the HACMP for AIX 5L, version 5.4.1:

lslpp -f cluster*

==================
PRODUCT MAN PAGES
==================

Man pages for HACMP commands and utilities are installed in the following directory:

/usr/share/man/cat1

Execute man [command-name] to read the information.

========================
Accessing IBM on the Web
========================

Access IBM's home page at:


http://www.ibm.com

========
Feedback
========

IBM welcomes your comments. You can send any comments via e-mail to:

hafeedbk@us.ibm.com

Appendix C. IPAT via IP replacement


What this unit is about
This unit describes the HACMP IP Address Takeover via IP replacement function.

What you should be able to do
After completing this unit, you should be able to:
- Explain and configure IP Address Takeover (IPAT) via IP replacement

How you will check your progress
Accountability:
- Checkpoint
- Machine exercises

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)

Copyright IBM Corp. 1998, 2008

Appendix C. IPAT via IP replacement

Course materials may not be reproduced in whole or in part


without the prior written permission of IBM.

C-1

Student Notebook

Unit objectives
After completing this unit, you should be able to:
Explain and set up IP Address Takeover (IPAT) via IP
replacement

Figure C-1. Unit objectives

Notes:

IPAT via IP replacement configuration

- Define each network's boot IP addresses in the AIX ODM.
  - Each interface IP address on a given node must be in a different logical IP
    subnet,* and there must be a common subnet among the nodes.
  - Define these addresses in the /etc/hosts file and configure them in the HACMP
    topology.
- Define service IP addresses in /etc/hosts and HACMP resources.
  - Each address must be in the SAME subnet as a common interface subnet.
  - HACMP configures them to AIX as required.

[Figure: before the application resource group is started, each node holds only its
boot (ODM) addresses: 9.47.10.1 and 9.47.11.1 on node 1; 9.47.10.2 and 9.47.11.2
on node 2.]

* See the earlier discussion of heartbeating and failure diagnosis for an explanation
of why.

Figure C-2. IPAT via IP replacement configuration

Notes:
Requirements
Keep the following items in mind when you configure a network for IPAT via IP
replacement:
- There must be at least one logical IP subnet that has a communication interface
(NIC) on each node. (In HACMP 4.5 terminology, these were called boot adapters.)
- Each service IP address must be in the same logical IP subnet as one of the
non-service addresses. Contrast with IPAT via IP aliasing, where service addresses
are required to not be in a boot subnet.
- If you have more than one service IP address, they must all be in the same subnet.
The reason for this will become clear when we discuss what happens during a
takeover; see "IPAT via IP replacement after a node fails" later in this appendix.
- None of the other non-service addresses may be in the same subnet as the service
IP address (this is true regardless of whether IPAT via IP replacement is being used,
because the NICs on each node are required to be on different IP subnets in order
to support heartbeating).
- All network interfaces must have the same subnet mask.

IPAT via IP replacement subnet rules example

- Each service IP address must be in one, and only one, of the non-service subnets.
- All service IP addresses must be in the same subnet.
- Each non-service IP address on each node must be in a separate subnet.

For example, in a cluster with one network using IPAT via replacement, where each
node has two communication interfaces and two service IP labels, the network will
require two subnets:

Node name        NIC   IP Label    IP Address
node1            en0   n1-if1      192.168.10.1
node1            en1   n1-if2      192.168.11.1
node2            en0   n2-if1      192.168.10.2
node2            en1   n2-if2      192.168.11.2
Service address  -     appA-svc    192.168.10.22
Service address  -     appB-svc    192.168.10.25

Subnet           IP labels
192.168.10/24    n1-if1, n2-if1, appA-svc, appB-svc
192.168.11/24    n1-if2, n2-if2
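
For reference, the /etc/hosts entries backing this example would look something like
the following on both nodes (a sketch built from the labels and addresses in the tables
above):

192.168.10.1    n1-if1
192.168.11.1    n1-if2
192.168.10.2    n2-if1
192.168.11.2    n2-if2
192.168.10.22   appA-svc
192.168.10.25   appB-svc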

IPAT via IP replacement in operation

- When the resource group comes up on a node, HACMP replaces a boot (ODM) IP
  label with the service IP label.
  - It replaces the boot IP label on the same subnet if the resource group is on its
    startup node or if the distribution startup policy is used.
  - It replaces a boot IP label on a different subnet otherwise.

[Figure: after the application resource group is started, the service address
9.47.10.22 has replaced a boot address on node 1; the remaining addresses
(9.47.11.1 on node 1; 9.47.10.2 and 9.47.11.2 on node 2) are unchanged.]

Figure C-3. IPAT via IP replacement in operation

Notes:
Operation
When the resource group comes up on its home node, the resource group's service IP
address replaces the interface IP address of the NIC (AIX ODM) which is in the same
subnet as the service IP label (that is, the boot adapter in HACMP 4.x terminology).
Note that this approach implies that there cannot be two resource groups in the cluster
that both use IPAT via IP replacement and use the same node as their home node,
unless their respective service IP addresses are in different subnets (in other words,
associated with different physical networks).
Also, since the service IP address replaces the existing IP address on the NIC, it is not
possible to have two or more service IP addresses in the same resource group which
are in the same IP subnet (as there would not be an adapter to assign the second service
IP address to).
When the resource group comes up on any node other than its home node, the
resource group's service IP address replaces the interface IP address of one of the
NICs which is not in the same subnet as the service IP address (this is primarily to allow
some other resource group to use the node as its home node).
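
One simple way to observe the replacement from AIX is to list the interface addresses
before and after the resource group comes up; a sketch using a standard AIX command:

netstat -in    # the boot (ODM) address on the chosen NIC is replaced by the service address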

IPAT via IP replacement after an I/F fails

- If the communication interface being used for the service IP label fails, HACMP
  swaps the service IP label with a boot (ODM) IP label on one of the node's
  remaining available (that is, currently functional) communication interfaces.
- The IP labels remain swapped when the failed interface recovers.

[Figure: on node 1, the service address 9.47.10.22 has been swapped onto NIC B,
and NIC A now holds the boot address 9.47.11.1; node 2 keeps its boot addresses
9.47.10.2 and 9.47.11.2.]

Figure C-4. IPAT via IP replacement after an I/F fails

Notes:
Interface failure
If a communications interface (NIC A), which is currently assigned an IPAT via IP
replacement service IP address, fails, then HACMP moves the service IP address to
one of the other communication interfaces (NIC B) on the same node (to one of the
standby adapters using HACMP 4.x terminology).
If there are no available (that is, functional) NICs left on the relevant network, then
HACMP initiates a fallover.

Interface swap
The failed communications interface (NIC A) is then reconfigured with the former
address of the communication interface (NIC B), as this allows the heartbeat
mechanism to watch for when the failed communication interface (NIC A) recovers.

IPAT via IP replacement after a node fails

- If the resource group's node fails, HACMP moves the resource group to a new node
  and replaces an interface IP label with the service IP label:
  - If the resource group is on its startup node or if the Startup policy is distribution,
    it replaces the interface (ODM) IP label in the same subnet.
  - Else it replaces an interface (ODM) IP label in a different subnet.
  - Or it fails if there isn't an available interface.

[Figure: after the fallover, the service address 9.47.10.22 has replaced the boot
address 9.47.10.2 on the takeover node.]

Figure C-5. IPAT via IP replacement after a node fails

Notes:
Node failure
If the node currently responsible for a resource group that uses IPAT via IP replacement
fails, then HACMP initiates a fallover. When the resource group comes up on the
takeover node, the service IP addresses are assigned to NICs on the fallover node:
- Home node or Startup policy of Online Using Distribution Policy (rotate in
HACMP 4.x terminology)
If the takeover node is the home node for the resource group or the resource group
has a Startup policy of Online Using Distribution Policy (rotate in HACMP 4.x
terminology), the Service IP addresses replace the IP addresses of a
communications interface (NIC) with an IP address in the same subnet as the
service IP address.
- Not the home node and not Online Using Distribution Policy
If the takeover node is not the home node for the resource group and the resource
group does not have a Startup policy of Online Using Distribution Policy, the
service IP addresses replace the IP addresses of a communications interface (NIC)
with an IP address in a different subnet than the subnet of the service IP address (a
standby adapter in HACMP 4.x terminology). This is primarily to allow some other
resource group to use the node as its home node.
Note: This explains why all service IP addresses must be in the same subnet when
using IPAT via replacement.

Home node and startup policy


The home node (or the highest priority node for this resource group) is the first node
that is listed in the participating nodelist for a non-concurrent resource group. The home
node is a node that normally owns the resource group. Note that the takeover node
might actually be the home node since a resource group can be configured to not
always run on the highest priority available node.
Resource groups have three policies that HACMP uses to determine which nodes will
start which resource groups. A Startup policy of Online Using Distribution
Policy (also called a distributed policy) specifies that only one resource group can be
active on a given node. If the first node in the resource group's list of nodes already has
another resource group started on it, then the next node in the list of nodes is tried.
These concepts are discussed in detail in the unit on resource groups.

IPAT via IP replacement summary

- Configure each node with up to eight communication interfaces (each on a
  different subnet).
- Assign service IP labels to resource groups as appropriate:
  - Each node can be the most preferred node for at most one resource group.
  - There is no limit on the number of service IP labels per resource group, but each
    service IP label must be on a different physical network.
- HACMP replaces non-service IP labels with service IP labels on the same subnet as
  the service IP label when the resource group is running on its most preferred node
  or if the Startup Policy is distributed.
- HACMP replaces non-service IP labels with service IP labels on a different subnet
  from the service IP label when the resource group is moved to any other node.
- IPAT via IP replacement supports hardware address takeover.

Figure C-6. IPAT via IP replacement summary

Notes:
Advantages
Probably the most significant advantage of IPAT via IP replacement is that it supports
hardware address takeover (HWAT), which will be discussed in a few pages.
Another advantage is that it requires fewer subnets. If you are limited in the number of
subnets available for your cluster, this may be important.
Note: Another alternative, if you are limited on the number of subnets you have
available, is to use heartbeating via IP aliases. See Heartbeating Over IP Aliases in the
HACMP for AIX, Version 5.4.1 Planning Guide.

Disadvantages
Probably the most significant disadvantage is that IPAT via IP replacement limits the
number of service IP labels per subnet per resource group on one communications
interface to one, and makes it rather expensive (and complex) to support lots of
resource groups in a small cluster. In other words, you need more network adapters to
support more applications.
Also, IPAT via replacement usually takes more time than IPAT via aliasing.
Note that HACMP tries to keep the service IP Labels available by swapping IP
addresses with other communication interfaces (standby adapters in HACMP 4.x
terminology) even if there are no resource groups currently on the node that uses IPAT
via IP replacement.

Gratuitous ARP support issues

- Gratuitous ARP is supported by AIX on the following network technologies:
  - Ethernet (all types and speeds)
  - Token-Ring
  - FDDI
  - SP Switch 1 and SP Switch 2
- Gratuitous ARP is not supported on ATM.
- Operating systems are not required to support gratuitous ARP packets:
  - Practically every operating system does support gratuitous ARP.
  - Some systems (for example, certain routers) can be configured to respect or
    ignore gratuitous ARP packets.

Figure C-7. Gratuitous ARP support issues

Notes:
Review
When using IPAT via aliasing, you can use AIX's gratuitous ARP features to update
client and router ARP caches after a takeover. However, there may be issues.

Gratuitous ARP issues


Not all network technologies provide the appropriate capabilities to implement
gratuitous ARP. In addition, operating systems which implement TCP/IP are not
required to respect gratuitous ARP packets (although practically all modern operating
systems do).
Finally, support issues aside, an extremely overloaded network or a network that is
suffering intermittent failures might result in gratuitous ARP packets being lost. (A
network that is sufficiently overloaded to be losing gratuitous ARP packets, or that is
suffering intermittent failures which result in gratuitous ARP packets being lost, is likely
to be causing the cluster and the cluster administrator far more serious problems than
the ARP cache issue involves.)

What if gratuitous ARP is not supported?

- If the local network technology doesn't support gratuitous ARP, or there is a client
  system or router on the local physical network which must communicate with the
  cluster and which does not support gratuitous ARP packets:
  - clinfo can be used on the client to receive updates of changes.
  - clinfo can be used on the servers to ping a list of clients, forcing an update to
    their ARP caches.
  - HACMP can be configured to perform Hardware Address Takeover (HWAT).

Suggestion:
Do not get involved with using either clinfo or HWAT to deal with ARP cache issues
until you've verified that there actually are ARP issues which need to be dealt with.

Figure C-8. What if gratuitous ARP is not supported?

Notes:
If gratuitous ARP is not supported
HACMP supports three alternatives to gratuitous ARP. The first two are discussed in
Unit 3. We will discuss the third option here.
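
As context for the second option, the server-side ping approach is driven by a list of
client addresses in the clinfo.rc script; a minimal sketch, with hypothetical client names,
assuming the usual HACMP location of the script:

# /usr/es/sbin/cluster/etc/clinfo.rc (excerpt; names are examples only)
PING_CLIENT_LIST="client1 client2 router1"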

Don't add unnecessary complexity

Cluster configurators should probably not simply assume that gratuitous ARP won't
provide a satisfactory solution, as each of the alternatives introduces additional, possibly
unnecessary complexity into the cluster.
If the cluster administrator or configurator decides that the probability of a gratuitous
ARP update packet being lost is high enough to be relevant, then they should proceed
as though their context does not support gratuitous ARP.

Option 3: Hardware address takeover

- HACMP can be configured to swap a service IP label's hardware address between
  network adapters.
- HWAT is incompatible with IPAT via IP aliasing because each service IP address
  must have its own hardware address, and a NIC can support only one hardware
  address at any given time.
- The cluster implementer designates a Locally Administered Address (LAA), which
  HACMP assigns to the NIC which has the service IP label.

Figure C-9. Option 3: Hardware address takeover

Notes:
Hardware address takeover
Hardware Address Takeover (HWAT) is the most robust method of dealing with the ARP
cache issue, as it ensures that the hardware address associated with the service IP
address does not change (which avoids the whole issue of whether the client system's
ARP cache is out of date).
The essence of HWAT is that the cluster configurator designates a hardware address
that is to be associated with a particular service IP address. HACMP then ensures that
whichever NIC the service IP address is on also has the designated hardware address.

HWAT considerations
There are a few points which must be kept in mind when contemplating HWAT:
- The hardware address that is associated with the service IP address must be unique
within the physical network that the service IP address is configured for.
- HWAT is not supported by IPAT via IP aliasing because each NIC can have more
than one IP address, but each NIC can only have one hardware address.
- HWAT is only supported for Ethernet, token ring, and FDDI networks (MCA FDDI
network cards do not support HWAT). ATM networks do not support HWAT.
- HWAT increases the takeover time (usually by just a few seconds).
- HWAT is an optional capability which must be configured into the HACMP cluster
(we will see how to do that in a few minutes).
- Cluster nodes using HWAT on token ring networks must be configured to reboot
after a system crash because the token ring card will continue to intercept packets
for its hardware address until the node starts to reboot.

Hardware address takeover (1 of 3)

[Figure: two token-ring nodes, bondar and hudson, each with interfaces tr0 and tr1.
Before the resource group is started, the interfaces carry their normal addresses:
bondar-if1 9.47.9.1 (00:04:ac:48:22:f4), bondar-if2 9.47.5.3 (00:04:ac:62:72:49),
hudson-if1 9.47.9.2 (00:04:ac:62:72:61), hudson-if2 9.47.5.2 (00:04:ac:48:22:f6);
netmask 255.255.255.0 throughout. After the resource group is started, the service
label xweb 9.47.5.1 with the LAA 40:04:ac:62:72:49 has replaced bondar-if2's IP
address and hardware address.]

Figure C-10. Hardware address takeover (1 of 3)

Notes:
Hardware address takeover: boot time
At boot time, the interfaces are assigned their normal hardware addresses.

HWAT: resource group started


When HACMP starts the resource group, the service IP address replaces the
non-service IP address of the interface and the alternate hardware address replaces
the normal hardware address for that NIC.
The alternate hardware address is usually referred to as a Locally Administered
Address or LAA.

Hardware address takeover (2 of 3)

- The LAA is moved along with the service IP label.

[Figure: interface failure and node failure scenarios. In both cases the xweb service IP
label (9.47.5.1 / 255.255.255.0) and its LAA (40:04:ac:62:72:49) move together to the
replacement NIC, while the boot labels (bondar-if1 9.47.9.1, hudson-if1 9.47.9.2,
hudson-if2 9.47.5.2) keep their factory hardware addresses.]

Figure C-11. Hardware address takeover (2 of 3)

Notes:
HWAT: interface or node failure
If a NIC (with a service IP address that has an LAA) fails, HACMP swaps the IP
address to another NIC on the same node. It also moves the LAA (alternate hardware
address) to the same NIC.
If a node fails, the service IP address, and its associated LAA, are moved to another
node.
The result, in both of these cases, is that the local clients' ARP caches are still up to
date because the HW address associated with the IP address has not changed.
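
After a swap or fallover, the up-to-date mapping can be confirmed from a client on the
same LAN; a sketch using the xweb label and LAA from the figures:

ping -c 1 xweb           # reach the service label
arp -a | grep -i xweb    # the entry should still show the LAA (40:04:ac:62:72:49)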

Hardware address takeover (3 of 3)

[Figure: when a failed node comes back to life, the burned-in ROM address is used on
the service network adapter (bondar-if2 returns as 9.47.5.3 / 00:04:ac:62:72:49 while
xweb remains on hudson). After HACMP is started, the node reintegrates according to
its resource group parameters, and xweb 9.47.5.1 (40:04:ac:62:72:49) moves back to
bondar.]

Figure C-12. Hardware address takeover (3 of 3)

Notes:
HWAT: node recovery
When the failed node reboots, AIX must be configured to leave the network card's
factory-defined hardware address in place. If AIX is configured to set the network card's
HW address to the alternate hardware address at boot time, then two NICs on the same
network have the same hardware address (weird things happen when you do this).

HWAT: resource moved back to home node


If HACMP ultimately moves the resource group back to the now recovered node, then
the hardware address of the NIC on the backup node is restored to its factory setting,
and the LAA associated with the service IP address lands on the same NIC on the
recovered node as the service IP address lands on.

Implementing hardware address takeover

- Someone just got a great deal on a dozen used FOOL-97x computers for the
  summer students to use.
- They run some strange proprietary operating system which refuses to update its
  ARP cache in response to either ping or gratuitous ARP packets.

[Figure: the two cluster nodes, bondar and hudson.]

Figure C-13. Implementing hardware address takeover

Notes:
Hardware address takeover
In this scenario, we will implement HWAT to support the new computers discussed in
the visual.
Just imagine how much money they have saved once they realize that these new
computers don't do what the summer students need done!
In the meantime, it looks like we need to implement hardware address takeover to
support these FOOL-97Xs.

Reality check
A side note is probably in order: although most TCP/IP-capable systems respect
gratuitous ARP, there are strange devices out there that do not. This scenario is phony
but it presents a real if rather unlikely problem. For example, the ATM network does not
support gratuitous ARP and so could be a candidate for the use of HWAT.
Our plan for implementing HWAT

1. Stop cluster services on both cluster nodes.
   - Use the graceful shutdown option to bring down the resource groups and their
     applications.
2. Remove the alias service labels from the Resources.
   - They are in the wrong subnet for replacement.
   - They are automatically removed from the RG.
3. Convert the net_ether_01 Ethernet network to use IPAT via IP replacement:
   - Disable IPAT via IP aliasing on the Ethernet network.
   - Update /etc/hosts on both cluster nodes to describe service IP labels and
     addresses on the 192.168.15.0 subnet.
   - Use the procedure described in the networking unit to select the Locally
     Administered Addresses (LAAs).
   - Configure new service IP labels with these LAA addresses in the HACMP SMIT
     screens.
4. Define resource groups to use the new service IP labels.
5. Synchronize the changes.
6. Restart cluster services on the two nodes.

Figure C-14. The plan for implementing HWAT

Notes:
Implementing HWAT
To use HWAT, we must use IPAT via replacement.

Stop cluster services


Changing from IPAT via aliasing to IPAT via replacement cannot be done dynamically;
we must stop the cluster.

Remove existing service IP labels


The service IP labels used for IPAT via aliasing cannot be used for IPAT via
replacement. They are on the wrong subnet. We will either need to change our service
addresses or change our non-service addresses. In this scenario, we choose to change
the service addresses.


Convert the network to use IPAT via replacement
In addition to the obvious step of disabling IPAT via aliasing, we also need to update
our name resolution for the new service IP labels, and we need to create an alternate
hardware address, or Locally Administered Address (LAA), for each service IP label.

Name resolution changes
One slight problem with the above procedure is that it requires the users (or the DNS
administrator) to change the service IP address that they are using. It would arguably
be better to preserve the service IP address; however, this would require more network
reconfiguration work, and it isn't totally clear that the difference is significant in
the grand scheme of things. Note that either approach requires the cooperation of the
network administrators, as we will require IP addresses and probably DNS changes.


Stopping HACMP

# smit clstop

                            Stop Cluster Services

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
   * Stop now, on system restart or both                  now              +
     Stop Cluster Services on these nodes                [bondar,hudson]   +
     BROADCAST cluster shutdown?                          true             +
   * Shutdown mode                                        graceful         +

The content of this menu will vary depending on the HACMP version. Choose the stop
option that takes the resources offline.

Figure C-15. Stopping HACMP


Notes:
Stop HACMP
Make sure that HACMP is shut down gracefully, as we can't have the application
running while we are changing service IP addresses.
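
The same stop can be scripted with the clstop utility rather than SMIT. The flags
below are a sketch based on the 5.x command; run clstop without arguments to confirm
the usage on your level:

   # /usr/es/sbin/cluster/utilities/clstop -y -N -g
   #   -y  answer yes to the confirmation prompt
   #   -N  stop now (rather than on restart or both)
   #   -g  graceful: take resource groups and applications offline, no takeover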


Removing a service IP label

            Configure HACMP Service IP Labels/Addresses

   Move cursor to desired item and press Enter.

     Add a Service IP Label/Address
     Change/Show a Service IP Label/Address
     Remove Service IP Label(s)/Address(es)

       Select Service IP Label(s)/Address(es) to Remove

       Move cursor to desired item and press F7.
       ONE OR MORE items can be selected.
       Press Enter AFTER making all selections.

         xweb
         yweb
         zweb

Press Enter here and you will be prompted to confirm the removal. Repeat for both
service IP labels.

Figure C-16. Removing a service IP label


Notes:
Remove any service labels configured for IPAT via aliasing
An attempt to convert the network to IPAT via IP replacement fails if there are any
service IP labels that don't conform to the IPAT via IP replacement rules.
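
Before removing anything, it can help to list what HACMP currently knows about its IP
labels. The cllsif utility does this; the output below is heavily trimmed and only
indicative of the kind of listing you should expect:

   # /usr/es/sbin/cluster/utilities/cllsif
   Adapter   Type      Network        Net Type   Node
   xweb      service   net_ether_01   ether      bondar
   yweb      service   net_ether_01   ether      hudson
   ...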


Disable IPAT via aliases

Set the "Enable IP Address Takeover via IP Aliases" setting to "No" and press Enter.

            Change/Show an IP-Based Network in the HACMP Cluster

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
   * Network Name                                         net_ether_01
     New Network Name                                    []
   * Network Type                                        [ether]            +
   * Netmask                                             [255.255.255.0]    +
   * Enable IP Address Takeover via IP Aliases           [No]               +
     IP Address Offset for Heartbeating over IP Aliases  []

Figure C-17. Disable IPAT via aliases


Notes:
Introduction
Here we change the net_ether_01 network to disable IPAT via aliasing.


The updated /etc/hosts

Here's the key portion of the /etc/hosts file with the service IP labels moved to the
192.168.15.0 subnet:

   192.168.5.29    bondar       # persistent node IP label on bondar
   192.168.15.29   bondar-if1   # bondar's first boot IP label
   192.168.16.29   bondar-if2   # bondar's second boot IP label
   192.168.5.31    hudson       # persistent node IP label on hudson
   192.168.15.31   hudson-if1   # hudson's first boot IP label
   192.168.16.31   hudson-if2   # hudson's second boot IP label
   192.168.15.92   xweb         # the IP label for the application normally
                                # resident on bondar
   192.168.15.70   yweb         # the IP label for the application normally
                                # resident on hudson

Note that neither bondar's nor hudson's network configuration (as defined with the AIX
TCP/IP SMIT screens) needs to be changed.
Note that we are not renaming the interface IP labels to something like bondar_boot
and bondar_standby, as changing IP labels in an HACMP cluster can be quite a bit of
work (it is often easier to delete the cluster definition and start over).

Figure C-18. The updated /etc/hosts


Notes:
IPAT via replacement rules
Remember the rules for IP addresses on IPAT via IP replacement networks (slightly
reworded):
a. The service IP labels must all be on the same subnet.
b. There must be one NIC on each host that has an IP address on the same subnet as
   the service IP labels (in HACMP 4.x terminology, these NICs are boot adapters).
c. The other NICs on each node must each be in a different subnet than the service IP
   labels (in HACMP 4.x terminology, these NICs are standby adapters).
In a cluster with only two NICs per node, NIC IP addresses that conform to the IPAT
via IP aliasing rules also conform to the IPAT via IP replacement rules, so only the
service IP labels need to be changed.
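
A quick way to confirm which subnet each NIC actually sits on is netstat -in. On
bondar, with the addressing used in this appendix, the (trimmed) output would look
something like this:

   # netstat -in
   Name  Mtu   Network       Address
   en0   1500  192.168.15    192.168.15.29
   en1   1500  192.168.16    192.168.16.29

Here en0 is on the service subnet (a boot adapter in HACMP 4.x terms) and en1 is not
(a standby adapter).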


Creating a locally administered address (LAA)

Each service IP label using HWAT will need an LAA.
The LAA must be unique on the cluster's physical network.
The MAC-address-based technologies (Ethernet, Token-Ring and FDDI) use six-byte
hardware addresses of the form:
   xx.xx.xx.xx.xx.xx
The factory-set MAC address of the NIC will start with 0, 1, 2 or 3. A MAC address
that starts with 0, 1, 2 or 3 is called a Globally Administered Address (GAA) because
it is assigned to the NIC's vendor by a central authority.
Incrementing this first digit by 4 transforms the GAA into a Locally Administered
Address (LAA) which will be unique worldwide (unless someone has already used the same
GAA to create an LAA, which isn't likely since GAAs are unique worldwide).

Figure C-19. Creating a locally administered address (LAA)


Notes:
Hardware addresses
Hardware addresses must be unique, at a minimum, on the local network to which they
are connected. The factory-set hardware address of each network interface card (NIC)
is administered by a central authority and should be unique in the world. These
addresses are called Globally Administered Addresses (GAAs).

Locally administered addresses
Incrementing the first nibble of the GAA by 4 transforms it into an LAA. Using this
method to create an alternate address should provide you with an address that is also
globally unique, as noted in the visual.
Note: According to the IEEE 802 standard for LAN MAC addresses, the second bit
transmitted on the LAN medium (the bit with value 4 in the first nibble) is the
local/global bit. If this bit is zero, the address is a GAA; setting it to one
indicates that the address is locally administered.
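
To find the GAA of an adapter, you can read the vital product data with lscfg. A
representative example (the adapter name and address shown are illustrative):

   # lscfg -vl ent0 | grep "Network Address"
         Network Address.............0004AC171964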

Creating two LAAs for our cluster

Here are two Globally Administered Addresses (GAAs) taken from Ethernet adapters in
the cluster:
   0.4.ac.17.19.64
   0.6.29.ac.46.8

First we make sure that each number is two digits long by adding leading zeros as
necessary:
   00.04.ac.17.19.64
   00.06.29.ac.46.08

Verify that the first digit is 0, 1, 2 or 3: Yep!

Add 4 to the first digit of each GAA:
   40.04.ac.17.19.64
   40.06.29.ac.46.08

Done! These two addresses are now LAAs.

Figure C-20. Creating two LAAs for our cluster


Notes:
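As a worked example of the arithmetic on the visual, here is a small ksh sketch of our
own (not an HACMP tool) that turns a GAA into an LAA. It assumes a shell that supports
the base#number arithmetic syntax (ksh or bash) and that the GAA's first digit is
0, 1, 2 or 3:

   gaa=00.04.ac.17.19.64
   hex=$(echo $gaa | tr -d '.')          # strip the dots: 0004ac171964
   first=$(echo $hex | cut -c1-2)        # first byte: 00
   rest=$(echo $hex | cut -c3-)          # remaining five bytes
   laa=$(printf "%02x%s\n" $(( 16#$first + 16#40 )) "$rest")
   echo $laa                             # 4004ac171964 -- ready for the SMIT field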


Hardware address takeover issues

Do not enable the ALTERNATE hardware address field in the SMIT devices menu:
- It causes the adapter to boot on your chosen LAA rather than the burned-in ROM
  address.
- It causes serious communications problems and puts the cluster into an unstable
  state.
- The correct method is to enter your chosen LAA into the SMIT HACMP menus (remove
  the periods or colons before entering it into the field).

The Token-Ring documentation states that the LAA must start with 42.
The FDDI documentation states that the first nibble (digit) of the first byte of the
LAA must be 4, 5, 6 or 7 (which is compatible with the method for creating LAAs
described earlier).
Token-Ring adapters do not release the LAA if AIX crashes. AIX must be set to reboot
automatically after a system crash (see smitty chgsys).

Figure C-21. Hardware address takeover issues


Notes:
Issues
The main thing to remember is that you do NOT configure the ALTERNATE hardware
address field in the SMIT devices panel.
You must leave that blank and configure this using the SMIT HACMP menus.
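
After a takeover you can confirm which hardware address the adapter is actually using
with entstat (output format is representative; substitute your own adapter name):

   # entstat -d ent0 | grep -i "Hardware Address"
   Hardware Address: 40:04:ac:17:19:64

Seeing the LAA here, rather than the burned-in address, confirms that HACMP applied
the alternate address.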


Redefining the service IP labels for HWAT

Redefine the two service IP labels. Note that the periods are stripped out before the
LAA is entered into the HW Address field.

     Add a Service IP Label/Address configurable on Multiple Nodes (extended)

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
   * IP Label/Address                                    [xweb]             +
   * Network Name                                         net_ether_01
     Alternate HW Address to accompany IP Label/Address  [4004ac171964]

You probably shouldn't use the particular LAAs shown on these visuals in your cluster.
Select your own LAAs using the procedure described earlier.
Don't forget to specify the second LAA for the second service IP label.

Figure C-22. Redefining the service IP labels for HWAT


Notes:
Redefining the service IP labels
Define each of the service IP labels, making sure to specify a different LAA for each
one.
The Alternate HW Address to accompany IP Label/Address is specified as a series of
hexadecimal digits without intervening periods or any other punctuation.
If IPAT via IP replacement is specified for the network, as it is in this case, this
screen gives an error or a warning if you try to define service IP labels that do not
conform to the rules for service IP labels on IPAT via IP replacement networks.
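
If you keep your LAAs written with punctuation in your planning worksheets, a
one-liner strips it to the form this SMIT field wants:

   # echo 40.04.ac.17.19.64 | tr -d '.:'
   4004ac171964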


Synchronize your changes

Synchronize the changes and run through the test plan.

                 HACMP Verification and Synchronization

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
   * Verify, Synchronize or Both                         [Both]             +
     Force synchronization if verification fails?        [No]               +
   * Verify changes only?                                [No]               +
   * Logging                                             [Standard]         +

Figure C-23. Synchronize your changes


Notes:
Synchronize
Don't forget to synchronize.


Checkpoint

1. For IPAT via replacement (select all that apply):
   a. Each service IP address must be in the same subnet as one of the non-service
      addresses
   b. Each service IP address must be in the same subnet
   c. Each service IP address cannot be in any non-service address subnet

2. True or False?
   If the takeover node is not the home node for the resource group and the resource
   group does not have a Startup policy of Online Using Distribution Policy, the
   service IP address replaces the IP address of a NIC with an IP address in the same
   subnet as the subnet of the service IP address.

3. True or False?
   In order to use HWAT, you must enable and complete the ALTERNATE ETHERNET address
   field in the SMIT devices menu.

4. True or False?
   You must stop the cluster in order to change from IPAT via aliasing to IPAT via
   replacement.

Figure C-24. Checkpoint


Notes:


Unit summary
Key points from this unit:
IPAT via IP replacement:
- May require fewer subnets than IPAT via aliasing
- May require more NICs than IPAT via aliasing
- Supports hardware address takeover

When the resource group is started on its home node (or when the startup policy is
Online Using Distribution Policy), HACMP replaces the non-service IP label that is on
the same subnet as the service IP label. When the resource group is moved to any other
node, HACMP replaces a non-service IP label on a different subnet from the service IP
label.

IPAT via IP replacement configuration issues:
- Each service IP address must be in the same subnet as one of the non-service
  subnets.
- All service IP addresses must be in the same subnet.
- You must have at least as many NICs on each node as service IP addresses.

Hardware Address Takeover (HWAT) issues:
- The alternate hardware address (Locally Administered Address, or LAA) must be
  configured in HACMP. Do NOT use the standard SMIT field.
- The alternate hardware address must be unique.

Figure C-25. Unit summary


Notes:


Appendix D. Configuring target mode SSA


What this unit is about
This appendix describes the steps required to configure target mode SSA (TMSSA).

What you should be able to do
After completing this unit, you should be able to:
- Configure target mode SSA

How you will check your progress
Accountability:
- Self-guided implementation

References
SC23-5209-01 HACMP for AIX, Version 5.4.1: Installation Guide
SC23-4864-10 HACMP for AIX, Version 5.4.1: Concepts and Facilities Guide
SC23-4861-10 HACMP for AIX, Version 5.4.1: Planning Guide
SC23-4862-10 HACMP for AIX, Version 5.4.1: Administration Guide
SC23-5177-04 HACMP for AIX, Version 5.4.1: Troubleshooting Guide
SC23-4867-09 HACMP for AIX, Version 5.4.1: Master Glossary
http://www-03.ibm.com/systems/p/library/hacmp_docs.html (HACMP manuals)


Unit objectives
After completing this unit, you should be able to:
- Perform the steps necessary to configure target mode SSA

Figure D-1. Unit objectives


Notes:


Implementing target mode SSA

The serial cable being used to implement the rs232 non-IP network has been borrowed by
someone, and nobody noticed.
A decision has been made to implement a target mode SSA (tmssa) non-IP network, as it
won't fail unless one of the nodes loses complete access to the shared SSA disks (and
someone is likely to notice that).

(The visual shows the two cluster nodes, bondar and hudson, sharing SSA disks.)

Figure D-2. Implementing target mode SSA


Notes:
Target mode SSA or heartbeat-over-disk networks
Sadly, the premise behind this scenario is all too real. The problem with rs232 non-IP
networks is that if they become disconnected or otherwise disabled, it is entirely
possible that nobody notices, even though HACMP logs the failure of the connection
when it happens and reports it in the logs if it is down at HACMP startup time. In
contrast, a target mode SSA network or heartbeat-over-disk network won't fail until
all paths between the two nodes fail. Since such a failure causes one or both nodes to
lose access to some or all of the shared disks, it is MUCH less likely to go
unnoticed. We focus on SSA in this scenario because we discussed heartbeat over disk
earlier in the course.


Setting the SSA node number

The first step is to give each node a unique SSA node number. We'll set bondar's SSA
node number to 1 and hudson's to 2.

            Change/Show the SSA Node Number For This System

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
     SSA Node Number                                     [1]               +#

Use the smitty ssaa fastpath to get to AIX's SSA Adapters menu.

Figure D-3. Setting the SSA node number


Notes:
Required software
Target mode SSA support requires that the devices.ssa.tm.rte fileset be installed on
all cluster nodes.
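
You can confirm the fileset is present with lslpp (the level and description shown
here are illustrative and will vary by AIX level):

   # lslpp -l devices.ssa.tm.rte
     Fileset                     Level    State      Description
     devices.ssa.tm.rte          5.3.0.0  COMMITTED  Target Mode SSA Support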

SSA node number and HACMP node ID
The first step in configuring a target mode SSA network is to assign a unique SSA node
number to each node. Earlier versions of HACMP required that the SSA node number be
the same as the node's HACMP node ID. HACMP 5.1 and above does not have this
requirement (and does not expose the HACMP node ID to the administrator). We assign 1
as the SSA node number for bondar and 2 as the SSA node number for hudson.
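
The SMIT panel on the visual sets an attribute on the SSA router pseudo-device; if you
prefer the command line, something like the following (a sketch; verify the device and
attribute names with lsattr on your system) does the same thing:

   # chdev -l ssar -a node_number=1     (on bondar)
   # chdev -l ssar -a node_number=2     (on hudson)
   # lsattr -El ssar                    (verify the setting on each node)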


Requirements for SSA node numbers
The minimum requirement for HACMP 5.x is that the SSA node numbers be non-zero and
unique for each node within the cluster. Strictly speaking, the SSA node numbers must
also be unique across all systems with shared access to the SSA subsystem. This is
usually not a concern, as allowing non-cluster nodes any form of access to a cluster's
shared disks is an unnecessary risk that few cluster administrators would ever accept.


Configuring the tmssa devices

This is a three-step process for a two-node cluster, as each node needs tmssa devices
which refer to the other node:
1. Run cfgmgr on one of the nodes (bondar).
   - bondar is now ready to respond to tmssa queries.
2. Run cfgmgr on the other node (hudson).
   - hudson is now ready to respond to tmssa queries.
   - hudson also knows that bondar supports tmssa and has created the tmssa devices
     (/dev/tmssa1.im and /dev/tmssa1.tm) which refer to bondar.
3. Run cfgmgr again on the first node (bondar).
   - bondar now also knows that hudson supports tmssa and has created the tmssa
     devices (/dev/tmssa2.im and /dev/tmssa2.tm) which refer to hudson.

Figure D-4. Configuring the tmssa devices


Notes:
Introduction
Once each node has a unique SSA node number, the AIX configuration manager needs to be
used to define the tmssa devices. Each node must have tmssa devices which refer to
each of the other nodes that it can see via the SSA loops. When cfgmgr is run on a
node, it sets up the node to accept tmssa packets, and it then defines tmssa devices
referring to any other nodes which respond to tmssa packets. In order for this all to
work, the other nodes must all be set up to accept and respond to tmssa packets.

Procedure
The end result is that the following procedure gets all the required tmssa devices
defined:
1. Run cfgmgr on each cluster node in turn. This sets up each node to handle tmssa
   packets, and defines the tmssa devices on each node to refer to nodes which have
   already been set up for tmssa.
2. Run cfgmgr on each node in turn again (depending upon exactly what order you do
   this in, it is actually possible to skip running cfgmgr on one of the nodes, but it
   is probably not worth the trouble of being sure that the last cfgmgr run wasn't
   required).
3. Verify that the tmssar device exists. Run
      # lsdev -C | grep tmssa
   on each node. There should be a tmssar device (which is actually a target mode SSA
   router acting as a pseudo device) configured on each node.
4. Verify that the tmssa devices exist. Run
      # ls /dev/tmssa*
   on each node. Note that each node has target mode SSA devices called
   /dev/tmssa#.im and /dev/tmssa#.tm, where # refers to the other node's node number.
5. Test the target mode connection. Enter the following command on the node with
   ID 1 (make sure you specify the .tm suffix and not the .im suffix):
      # cat < /dev/tmssa2.tm
   (This command should hang.)
   On the node with ID 2, enter the following command (make sure that you specify the
   .im suffix and not the .tm suffix):
      # cat /etc/hosts > /dev/tmssa1.im
   (The /etc/hosts file should be displayed on the first node.)
   This validates that the target mode serial network is functional. Please note that
   any text file may be substituted for /etc/hosts, and you have to specify different
   tmssa device names if you configured different SSA node numbers for each node. This
   is simply an example.
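
For reference, on bondar the checks in steps 3 and 4 would produce output along these
lines (the device descriptions are from memory and may differ slightly by AIX level):

   # lsdev -C | grep tmssa
   tmssa2  Available  Target Mode SSA Device
   tmssar  Available  Target Mode SSA Router
   # ls /dev/tmssa*
   /dev/tmssa2.im  /dev/tmssa2.tm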


Rediscover the HACMP information

Next, we need to get HACMP to know about the new communication devices, so we run the
auto-discovery procedure again on one of the nodes.

                        Extended Configuration

   Move cursor to desired item and press Enter.

     Discover HACMP-related Information from Configured Nodes
     Extended Topology Configuration
     Extended Resource Configuration
     Extended Event Configuration
     Extended Performance Tuning Parameters Configuration
     Security and Users Configuration
     Snapshot Configuration

     Extended Verification and Synchronization

Figure D-5. Rediscover the HACMP information


Notes:
HACMP discovery
Discovering the new devices makes them appear in SMIT pick lists when we configure the
tmssa non-IP network. Strictly speaking, it is not necessary to rerun the HACMP
discovery, as it is possible to configure tmssa networks by entering the tmssa device
names explicitly. As that is a rather error-prone process, it is probably best to use
the HACMP discovery mechanism to discover the devices for us.


Defining a non-IP tmssa network (1 of 3)

This should look very familiar, as it is the same procedure that was used to define
the non-IP rs232 network earlier.

            Configure HACMP Communication Interfaces/Devices

   Move cursor to desired item and press Enter.

     Add Communication Interfaces/Devices
     Change/Show Communication Interfaces/Devices
     Remove Communication Interfaces/Devices
     Update HACMP Communication Interface with Operating System Settings

       Select a category

       Move cursor to desired item and press Enter.

         Add Discovered Communication Interface and Devices
         Add Predefined Communication Interfaces and Devices

Figure D-6. Defining a non-IP tmssa network (1 of 3)


Notes:
Defining a non-IP tmssa network
The procedure for defining a non-IP tmssa network is pretty much identical to the
procedure used earlier to define the non-IP rs232 network.


Defining a non-IP tmssa network (2 of 3)

            Configure HACMP Communication Interfaces/Devices

   Move cursor to desired item and press Enter.

     Add Communication Interfaces/Devices
     Change/Show Communication Interfaces/Devices
     Remove Communication Interfaces/Devices
     Update HACMP Communication Interface with Operating System Settings

       Select a category

       Move cursor to desired item and press Enter.

       # Discovery last performed: (Feb 12 18:20)
         Communication Interfaces
         Communication Devices

Figure D-7. Defining a non-IP tmssa network (2 of 3)


Notes:


Defining a non-IP tmssa network (3 of 3)

Now we define the tmssa network, using the same process that was used for the rs232
network.

            Configure HACMP Communication Interfaces/Devices

   Move cursor to desired item and press Enter.

     Add Communication Interfaces/Devices

       Select Point-to-Point Pair of Discovered Communication Devices to Add

       Move cursor to desired item and press F7. Use arrow keys to scroll.
       ONE OR MORE items can be selected.
       Press Enter AFTER making all selections.

         # Node     Device   Device Path   Pvid
       > hudson     tmssa1   /dev/tmssa1
       > bondar     tmssa2   /dev/tmssa2
         bondar     tty0     /dev/tty0
         hudson     tty0     /dev/tty0
         bondar     tty1     /dev/tty1
         hudson     tty1     /dev/tty1

Figure D-8. Defining a non-IP tmssa network (3 of 3)


Notes:
Final step
Select the tmssa devices on each node and press Enter to define the network.
Refer to Chapter 13 of the HACMP v5.3 Planning and Installation Guide for information
on configuring all supported types of non-IP networks.


Synchronize your changes

Synchronize the changes and run through the test plan.

                 HACMP Verification and Synchronization

   Type or select values in entry fields.
   Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
   * Verify, Synchronize or Both                         [Both]             +
     Force synchronization if verification fails?        [No]               +
   * Verify changes only?                                [No]               +
   * Logging                                             [Standard]         +

Figure D-9. Synchronize your changes


Notes:


Unit summary
Key points from this unit:
This unit showed the steps necessary to configure target mode SSA.

Figure D-10. Unit summary


Notes:
