Reference Document for the High Availability Add-On for Red Hat
Enterprise Linux 7
Legal Notice
Copyright 2015 Red Hat, Inc. and others.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux is the registered trademark of Linus Torvalds in the United States and other countries.
Java is a registered trademark of Oracle and/or its affiliates.
XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
Red Hat High Availability Add-On Reference provides reference information about installing, configuring, and managing the Red Hat High Availability Add-On for Red Hat Enterprise Linux 7.
Table of Contents

Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
  1.1. New and Changed Features
  1.2. Installing Pacemaker configuration tools
  1.3. Configuring the iptables Firewall to Allow Cluster Components
  1.4. The Cluster and Pacemaker Configuration Files
Chapter 2. The pcs Command Line Interface
  2.1. The pcs Commands
  2.2. pcs Usage Help Display
  2.3. Viewing the Raw Cluster Configuration
  2.4. Saving a Configuration Change to a File
  2.5. Displaying Status
  2.6. Displaying the Full Cluster Configuration
  2.7. Displaying The Current pcs Version
  2.8. Backing Up and Restoring a Cluster Configuration
Chapter 3. Cluster Creation and Administration
  3.1. Cluster Creation
  3.2. Managing Cluster Nodes
Chapter 4. Fencing: Configuring STONITH
  4.1. Available STONITH (Fencing) Agents
  4.2. General Properties of Fencing Devices
  4.3. Displaying Device-Specific Fencing Options
  4.4. Creating a Fencing Device
  4.5. Configuring Storage-Based Fence Devices with unfencing
  4.6. Displaying Fencing Devices
Chapter 5. Configuring Cluster Resources
  5.1. Resource Creation
  5.2. Resource Properties
  5.3. Resource-Specific Parameters
  5.4. Resource Meta Options
  5.5. Resource Groups
  5.6. Resource Operations
  5.7. Displaying Configured Resources
Chapter 6. Resource Constraints
  6.1. Location Constraints
  6.2. Order Constraints
Chapter 7. Managing Cluster Resources
  7.1. Manually Moving Resources Around the Cluster
  7.2. Moving Resources Due to Failure
  7.3. Moving Resources Due to Connectivity Changes
  7.4. Enabling, Disabling, and Banning Cluster Resources
  7.5. Disabling a Monitor Operation
Chapter 8. Advanced Resource Types
  8.1. Resource Clones
  8.2. Multi-State Resources: Resources That Have Multiple Modes
  8.3. Event Notification with Monitoring Resources
  8.4. The pacemaker_remote Service
Chapter 9. Pacemaker Rules
  9.1. Node Attribute Expressions
  9.2. Time/Date Based Expressions
  9.3. Date Specifications
  9.4. Durations
Chapter 10. Pacemaker Cluster Properties
  10.1. Summary of Cluster Properties and Options
  10.2. Setting and Removing Cluster Properties
Chapter 11. The pcsd Web UI
  11.1. pcsd Web UI Setup
  11.2. Managing Clusters with the pcsd Web UI
Appendix A. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
  A.1. Cluster Creation with rgmanager and with Pacemaker
  A.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7
Appendix B. Revision History
Index
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes.
The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 5.11, Cluster Resources Cleanup.
You can specify a lifetime parameter for the pcs resource move command, as documented in Section 7.1, Manually Moving Resources Around the Cluster.
As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 3.3, Setting User Permissions.
Section 6.2.3, Ordered Resource Sets and Section 6.3, Colocation of Resources have been extensively updated and clarified.
Section 5.1, Resource Creation documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically.
Section 3.1.6, Configuring Quorum Options documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum.
Section 5.1, Resource Creation documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering.
As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 2.8, Backing Up and Restoring a Cluster Configuration; a brief example follows this list.
Small clarifications have been made throughout this document.
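As a brief sketch of the backup and restore options (the file name here is arbitrary), the backup option writes a tarball in the current directory and the restore option distributes the configuration files from that tarball to the cluster nodes.
# pcs config backup clusterbackup
# pcs config restore clusterbackup.tar.bz2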
1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2
Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes.
You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 7.1.2, Moving a Resource to its Preferred Node; a short example follows this list.
Section 8.3, Event Notification with Monitoring Resources has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications.
When configuring fencing for redundant power supplies, you now are only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 4.11, Configuring Fencing for Redundant Power Supplies.
This document now provides a procedure for adding a node to an existing cluster in Section 3.2.3, Adding Cluster Nodes.
The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 6.1, Location Constraint Options.
Small clarifications and corrections have been made throughout this document.
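As a short illustrative example (the resource name Webserver is hypothetical), the following command would relocate that one resource to its preferred node; run with no resource arguments, the command operates on all resources.
# pcs resource relocate run Webserver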
Warning
After you install the Red Hat High Availability Add-On packages, you should ensure that your
software update preferences are set so that nothing is installed automatically. Installation on a
running cluster can cause unexpected behaviors.
2.5. Displaying Status
You can display the status of the cluster and the cluster resources with the following command.
pcs status commands
If you do not specify a commands parameter, this command displays all information about the cluster and the resources. You can display the status of only particular cluster components by specifying resources, groups, cluster, nodes, or pcsd.
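For example, the following commands display the status of only the resources and of only the nodes, respectively.
# pcs status resources
# pcs status nodes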
--token timeout
--join timeout
--consensus timeout
--fail_recv_const failures
For example, the following command creates the cluster new_cluster and sets the token timeout value to 10000 ms (10 seconds) and the join timeout value to 100 ms.
# pcs cluster setup --name new_cluster nodeA nodeB --token 10000 --join 100
Note
This command should be used with extreme caution. Before issuing this command, it is
imperative that you ensure that nodes that are not currently in the cluster are switched off.
# pcs cluster quorum unblock
There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 3.2, Quorum Options summarizes these options.
Table 3.2. Quorum Options

--wait_for_all
  When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time.

--auto_tie_breaker
  When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate.
For further information about configuring and using these options, see the votequorum(5) man page.
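As an illustrative sketch (node names hypothetical), the following command would create a two-node cluster with both of these quorum options enabled at setup time.
# pcs cluster setup --name new_cluster nodeA nodeB --wait_for_all=1 --auto_tie_breaker=1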
Use the following procedure to add a new node to an existing cluster. In this example, the existing cluster nodes are clusternode-01.example.com, clusternode-02.example.com, and clusternode-03.example.com. The new node is newnode.example.com.
On the new node to add to the cluster, perform the following tasks.
1. Install the cluster packages:
[root@newnode ~]# yum install -y pcs fence-agents-all
2. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability
3. Set a password for the user ID hacluster. It is recommended that you use the same password for each node in the cluster.
[root@newnode ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
4. Execute the following commands to start the pcsd service and to enable pcsd at system start.
# systemctl start pcsd.service
# systemctl enable pcsd.service
On a node in the existing cluster, perform the following tasks.
1. Authenticate user hacluster on the new cluster node.
[root@clusternode-01 ~]# pcs cluster auth newnode.example.com
Username: hacluster
Password:
newnode.example.com: Authorized
2. Add the new node to the existing cluster. This command also syncs the cluster configuration file corosync.conf to all nodes in the cluster, including the new node you are adding.
[root@clusternode-01 ~]# pcs cluster node add newnode.example.com
On the new node to add to the cluster, perform the following tasks.
1. Authenticate user hacluster on the new node for all nodes in the cluster.
[root@newnode ~]# pcs cluster auth
Username: hacluster
Password:
clusternode-01.example.com: Authorized
clusternode-02.example.com: Authorized
clusternode-03.example.com: Authorized
newnode.example.com: Already authorized
2. Start and enable cluster services on the new node.
[root@newnode ~]# pcs cluster start
Starting Cluster...
[root@newnode ~]# pcs cluster enable
3. Ensure that you configure a fencing device for the new cluster node. For information on
configuring fencing devices, see Chapter 4, Fencing: Configuring STONITH.
4. Create the user wuser in the pcs ACL system and assign that user the write-access role.
# pcs acl user create wuser write-access
5. View the current ACLs.
# pcs acl
User: rouser
Roles: read-only
User: wuser
Roles: write-access
Role: read-only
Description: Read access to cluster
Permission: read xpath /cib (read-only-read)
Role: write-access
Description: Full Access
Permission: write xpath /cib (write-access-write)
For further information about cluster ACLs, see the help screen for the pcs acl command.
Warning
This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster.
Note
To prevent a specific node from using a fencing device, you can configure location
constraints for the fencing resource.
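For example, assuming a fencing device named myapc that should never run on node1.example.com (both names hypothetical), a constraint along the following lines would express that restriction.
# pcs constraint location myapc avoids node1.example.com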
Table 4.1, General Properties of Fencing Devices describes the general properties you can set for fencing devices. Refer to Section 4.3, Displaying Device-Specific Fencing Options for information on fencing properties you can set for specific fencing devices.
Note
For information on more advanced fencing configuration properties, refer to Section 4.9, Additional Fencing Configuration Options.
Field             Type      Default
priority          integer
pcmk_host_map     string
pcmk_host_list    string
pcmk_host_check   string
Note
If the node you specify is still running the cluster software or services normally controlled by
the cluster, data corruption/cluster failure will occur.
Field                  Type     Default
port                   string
pcmk_reboot_action     string   reboot
pcmk_reboot_timeout    time     60s
pcmk_off_action        string   off
pcmk_off_timeout       time     60s
pcmk_list_action       string   list
pcmk_list_timeout      time     60s
pcmk_monitor_action    string   monitor
pcmk_monitor_timeout   time     60s
pcmk_status_action     string   status
pcmk_status_timeout    time     60s
The following command clears the fence levels on the specified node or stonith id. If you do not
specify a node or stonith id, all fence levels are cleared.
pcs stonith level clear [node|stonith_id(s)]
If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the
following example.
# pcs stonith level clear dev_a,dev_b
The following command verifies that all fence devices and nodes specified in fence levels exist.
pcs stonith level verify
For information on defining the operations to perform on a resource, refer to Section 5.6, Resource Operations.
Specifying the --clone option creates a clone resource. Specifying the --master option creates a master/slave resource. For information on resource clones and resources with multiple modes, refer to Chapter 8, Advanced Resource Types.
Field
resource_id
standard
type
provider
Table 5.2, Commands to Display Resource Properties summarizes the commands that display the available resource properties.

Table 5.2. Commands to Display Resource Properties

pcs Display Command
pcs resource list
pcs resource standards
pcs resource providers
pcs resource list string
Field                 Default
priority
target-role           Started
is-managed            true
requires              Calculated
migration-threshold   INFINITY (disabled)
failure-timeout       0 (disabled)
multiple-active       stop_start
To change the default value of a resource option, use the following command.
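As a sketch, assuming you want every resource to prefer to stay on its current node, the following command would set a cluster-wide default for the resource-stickiness meta option.
# pcs resource defaults resource-stickiness=100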
One of the most common elements of a cluster is a set of resources that need to be located together,
start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports
the concept of groups.
You create a resource group with the following command, specifying the resources to include in the
group. If the group does not exist, this command creates the group. If the group exists, this command
adds additional resources to the group. The resources will start in the order you specify them with
this command, and will stop in the reverse order of their starting order.
pcs resource group add group_name resource_id [resource_id] ... [resource_id]
[--before resource_id | --after resource_id]
You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group.
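As an illustrative sketch (group and resource names hypothetical), the following command would add a resource to an existing group immediately after one of its current members.
# pcs resource group add mygroup newresource --after existingresource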
You can also add a new resource to an existing group when you create the resource, using the
following command. The resource you create is added to the group named group_name.
pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action
operation_options] --group group_name
You remove a resource from a group with the following command. If there are no resources in the
group, this command removes the group itself.
pcs resource group remove group_name resource_id...
The following command lists all currently configured resource groups.
pcs resource group list
The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email.
# pcs resource group add shortcut IPaddr Email
There is no limit to the number of resources a group can contain. The fundamental properties of a
group are as follows.
Resources are started in the order in which you specify them (in this example, IPaddr first, then Email).
Resources are stopped in the reverse of the order in which you specify them (Email first, then IPaddr).
If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run.
If IPaddr cannot run anywhere, neither can Email.
If Email cannot run anywhere, however, this does not affect IPaddr in any way.
As the group grows bigger, the reduced configuration effort of creating resource groups can become significant.
A resource group inherits the following options from the resources that it contains: priority, target-role, is-managed. For information on resource options, refer to Table 5.3, Resource Meta Options.
Field      Description
id         Unique name for the action. The system assigns this when you configure an operation.
name       The action to perform. Common values: monitor, start, stop.
interval   How frequently (in seconds) to perform the operation. Default value: 0, meaning never.
timeout    How long to wait before declaring the action has failed. If you find that your system includes a resource that takes a long time to start or stop or perform a non-recurring monitor action at startup, and requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or the value of timeout in "op defaults".
on-fail    The action to take if this action ever fails.
You can configure monitoring operations when you create a resource, using the following command.
pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action
operation_options [operation_type operation_options]...]
For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2. A monitoring operation will be performed every 30 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 \
cidr_netmask=24 nic=eth2 op monitor interval=30s
Alternately, you can add a monitoring operation to an existing resource with the following command.
pcs resource op add resource_id operation_action [operation_properties]
Use the following command to delete a configured resource operation.
pcs resource op remove resource_id operation_name operation_properties
Note
You must specify the exact operation properties to properly remove an existing operation.
To change the values of a monitoring option, you remove the existing operation, then add the new
operation. For example, you can create a VirtualIP with the following command.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 \
cidr_netmask=24 nic=eth2
By default, this command creates these operations.
Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s)
stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s)
monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)
To change the stop timeout operation, execute the following commands.
# pcs resource op remove VirtualIP stop interval=0s timeout=20s
# pcs resource op add VirtualIP stop interval=0s timeout=40s
# pcs resource show VirtualIP
Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2
Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s)
monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)
stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)
To set global default values for monitoring operations, use the following command.
pcs resource op defaults [options]
For example, the following command sets a global default of a timeout value of 240s for all monitoring operations.
# pcs resource op defaults timeout=240s
To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command.
For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240s.
# pcs resource op defaults
timeout: 240s
Note
When configuring multiple monitor operations, you must ensure that no two operations are
performed at the same interval.
To configure additional monitoring operations for a resource that supports more in-depth checks at different levels, you add an OCF_CHECK_LEVEL=n option.
For example, if you configure the following IPaddr2 resource, by default this creates a monitoring operation with an interval of 10 seconds and a timeout value of 20 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 \
cidr_netmask=24 nic=eth2
If the Virtual IP supports a different check with a depth of 10, the following command causes Pacemaker to perform the more advanced monitoring check every 60 seconds in addition to the normal Virtual IP check every 10 seconds. (As noted, you should not configure the additional monitoring operation with a 10-second interval as well.)
# pcs resource op add VirtualIP monitor interval=60s OCF_CHECK_LEVEL=10
Field   Description
rsc     A resource name.
node    A node's name.
score   Value to indicate the preference for whether a resource should run on or avoid a node. A value of INFINITY changes "should" to "must"; INFINITY is the default score value for a resource location constraint.
Field: resource-discovery
Description: Value to indicate the preference for whether Pacemaker should perform resource discovery on this node for the specified resource. Limiting resource discovery to a subset of nodes the resource is physically capable of running on can significantly boost performance when a large set of nodes is present. When pacemaker_remote is in use to expand the node count into the hundreds of nodes range, this option should be considered. Possible values include:
always: Always perform resource discovery for the specified resource on this node.
never: Never perform resource discovery for the specified resource on this node.
exclusive: Perform resource discovery for the specified resource only on this node (and other nodes similarly marked as exclusive). Multiple location constraints using exclusive discovery for the same resource across different nodes creates a subset of nodes resource-discovery is exclusive to. If a resource is marked for exclusive discovery on one or more nodes, that resource is only allowed to be placed within that subset of nodes.
Note that setting this option to never or exclusive allows the possibility for the resource to be active in those locations without the cluster's knowledge. This can lead to the resource being active in more than one location if the service is started outside the cluster's control (for example, by systemd or by an administrator). This can also occur if the resource-discovery property is changed while part of the cluster is down or suffering split-brain, or if the resource-discovery property is changed for a resource and node while the resource is active on that node. For this reason, using this option is appropriate only when you have more than eight nodes and there is a way to guarantee that the resource can run only in a particular location (for example, when the required software is not installed anywhere else).
always is the default resource-discovery value for a resource location constraint.
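As a sketch with hypothetical names (constraint id my-constraint, resource Webserver, node node1.example.com), a constraint using this option might be created as follows.
# pcs constraint location add my-constraint Webserver node1.example.com INFINITY resource-discovery=exclusive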
The following command creates a location constraint for a resource to prefer the specified node or
nodes.
pcs constraint location rsc prefers node[=score] ...
The following command creates a location constraint for a resource to avoid the specified node or
nodes.
pcs constraint location rsc avoids node[=score] ...
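For example, assuming a resource named Webserver and nodes node1.example.com and node2.example.com (names hypothetical), the following commands express a preference of 50 for the first node and an absolute avoidance of the second.
# pcs constraint location Webserver prefers node1.example.com=50
# pcs constraint location Webserver avoids node2.example.com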
There are two alternative strategies for specifying which nodes a resource can run on:
Opt-In Clusters: Configure a cluster in which, by default, no resource can run anywhere and then selectively enable allowed nodes for specific resources. The procedure for configuring an opt-in cluster is described in Section 6.1.1, Configuring an "Opt-In" Cluster.
Opt-Out Clusters: Configure a cluster in which, by default, all resources can run anywhere and then create location constraints for resources that are not allowed to run on specific nodes. The procedure for configuring an opt-out cluster is described in Section 6.1.2, Configuring an "Opt-Out" Cluster.
Whether you should choose to configure an opt-in or opt-out cluster depends both on your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other hand, if most resources can only run on a small subset of nodes, an opt-in configuration might be simpler.
Note that it is not necessary to specify a score of INFINITY in these commands, since that is the
default value for the score.
Field        Description
resource_id
action
kind option  How to enforce the constraint. The possible values of the kind option are as follows:
  * Optional - Only applies if both resources are starting and/or stopping. For information on optional ordering, refer to Section 6.2.2, Advisory Ordering.
  * Mandatory - Always (default value). If the first resource you specified is stopping or cannot be started, the second resource you specified must be stopped. For information on mandatory ordering, refer to Section 6.2.1, Mandatory Ordering.
  * Serialize - Ensure that no two stop/start actions occur concurrently for a set of resources.
symmetrical  If true, which is the default, stop the resources in the reverse order. Default value: true
If the first resource you specified was not running and cannot be started, the second resource you specified will be stopped (if it is running).
If the first resource you specified is (re)started while the second resource you specified is running,
the second resource you specified will be stopped and restarted.
Field
source_resource
target_resource
score
If you need myresource1 to always run on the same machine as myresource2, you would add the following constraint:
# pcs constraint colocation add myresource1 with myresource2 score=INFINITY
Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason) then myresource1 will not be allowed to run.
Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2. In this case use score=-INFINITY.
# pcs constraint colocation add myresource1 with myresource2 score=-INFINITY
Again, by specifying -INFINITY, the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere.
pcs constraint colocation set resource1 resource2 [resourceN]... [options] [set resourceX resourceY
... [options]] [setoptions [constraint_options]]
Note
When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially.
If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id.
You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds).
To distinguish a unit of minutes (M) from a unit of months (M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes.
The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command.
pcs property set cluster-recheck-interval=value
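For example, to have the lifetime parameter checked every five minutes rather than every fifteen, you might set the following.
# pcs property set cluster-recheck-interval=5min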
You can optionally configure a --wait[=n] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used.
The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes.
pcs resource move resource1 example-node2 lifetime=PT1H30M
The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes.
pcs resource move resource1 example-node2 lifetime=PT30M
For information on resource constraints, refer to Chapter 6, Resource Constraints.
When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until:
The administrator manually resets the resource's failcount using the pcs resource failcount command.
The resource's failure-timeout value is reached.
There is no threshold defined by default.
Note
Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state.
The following example adds a migration threshold of 10 to the resource named dummy_resource, which indicates that the resource will move to a new node after 10 failures.
# pcs resource meta dummy_resource migration-threshold=10
You can add a migration threshold to the defaults for the whole cluster with the following command.
# pcs resource defaults migration-threshold=10
To determine the resource's current failure status and limits, use the pcs resource failcount command.
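For example, using the dummy_resource resource from the example above, the following commands would display the current failure counts and then reset them.
# pcs resource failcount show dummy_resource
# pcs resource failcount reset dummy_resource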
There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. Start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately.
Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled,
then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is
not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere,
but will try to stop it again after the failure timeout.
Field
dampen
multiplier
host_list
resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, refer to Chapter 6, Resource Constraints.
If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id.
You can optionally configure a lifetime parameter for the pcs resource ban command to indicate a period of time the constraint should remain. For information on specifying units for the lifetime parameter and on specifying the intervals at which the lifetime parameter should be checked, see Section 7.1, Manually Moving Resources Around the Cluster.
You can optionally configure a --wait[=n] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used.
You can use the debug-start parameter of the pcs resource command to force a specified resource to start on the current node, ignoring the cluster recommendations and printing the output from starting the resource. This is mainly used for debugging resources; starting resources on a cluster is (almost) always done by Pacemaker and not directly with a pcs command. If your resource is not starting, it is usually due to either a misconfiguration of the resource (which you debug in the system log), constraints that prevent the resource from starting, or the resource being disabled. You can use this command to test resource configuration, but it should not normally be used to start resources in a cluster.
The format of the debug-start command is as follows.
pcs resource debug-start resource_id
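For example, using the VirtualIP resource configured earlier in this document, a debugging run would look like the following; because Pacemaker is bypassed, use this only for testing.
# pcs resource debug-start VirtualIP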
You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can manage or unmanage all of the resources in a group with a single command and then manage the contained resources individually.
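For example, using the shortcut group created in the resource group example earlier in this document, the following commands would place every resource in that group in unmanaged mode and then return them to managed mode.
# pcs resource unmanage shortcut
# pcs resource manage shortcut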
Note
Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time.
Note
You need to make resource configuration changes on one node only.
Note
When configuring constraints, always use the name of the group or clone.
When you create a clone of a resource, the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone.
# pcs resource create webfarm apache clone
Use the following command to remove a clone of a resource or a resource group. This does not
remove the resource or resource group itself.
pcs resource unclone resource_id | group_name
For information on resource options, refer to Section 5.1, Resource Creation.
Table 8.1, Resource Clone Options describes the options you can specify for a cloned resource.
Table 8.1. Resource Clone Options

priority, target-role, is-managed
  Options inherited from the resource that is being cloned, as described in Table 5.3, Resource Meta Options.

clone-max
  How many copies of the resource to start. Defaults to the number of nodes in the cluster.

clone-node-max
  How many copies of the resource can be started on a single node; the default value is 1.

notify
  When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false, true. The default value is false.

globally-unique
  Does each copy of the clone perform a different function? Allowed values: false, true.
  If the value of this option is false, these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine.
  If the value of this option is true, a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false.

ordered

interleave
8.2. Multi-State Resources: Resources That Have Multiple Modes
Multi-state resources are a specialization of clone resources. They allow the instances to be in one of two operating modes; these are called Master and Slave. The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Slave state.
You can create a resource as a master/slave clone with the following single command.
pcs resource create resource_id standard:provider:type|type [resource options] \
--master [meta master_options]
The name of the master/slave clone will be resource_id-master.
Alternately, you can create a master/slave resource from a previously-created resource or resource group with the following command. When you use this command, you can specify a name for the master/slave clone. If you do not specify a name, the name of the master/slave clone will be resource_id-master or group_name-master.
pcs resource master master/slave_name resource_id|group_name [master_options]
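As an illustrative sketch with hypothetical names, the following command would turn a previously created resource my_resource into a master/slave clone named my_resource-master.
# pcs resource master my_resource-master my_resource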
Field
id
priority, target-role, is-managed
clone-max, clone-node-max, notify, globally-unique, ordered, interleave
master-max
Environment Variable
CRM_notify_recipient
CRM_notify_node
CRM_notify_rsc
CRM_notify_task
CRM_notify_desc
CRM_notify_rc
CRM_target_rc
CRM_notify_status
The following example configures a ClusterMon resource that executes the external program crm_logger.sh which will log the event notifications specified in the program.
The following procedure creates the crm_logger.sh program that this resource will use.
1. On one node of the cluster, create the program that will log the event notifications.
# cat <<-END >/usr/local/bin/crm_logger.sh
#!/bin/sh
logger -t "ClusterMon-External" "${CRM_notify_node} ${CRM_notify_rsc} \
${CRM_notify_task} ${CRM_notify_desc} ${CRM_notify_rc} \
${CRM_notify_target_rc} ${CRM_notify_status} ${CRM_notify_recipient}";
exit;
END
2. Set the ownership and permissions for the program.
# chmod 700 /usr/local/bin/crm_logger.sh
# chown root.root /usr/local/bin/crm_logger.sh
3. Use the scp command to copy the crm_logger.sh program to the other nodes of the cluster, putting the program in the same location on those nodes and setting the same ownership and permissions for the program.
The following example configures the ClusterMon resource, named ClusterMon-External, that runs the program /usr/local/bin/crm_logger.sh. The ClusterMon resource outputs the cluster status to an html file, which is /var/www/html/cluster_mon.html in this example. The pidfile detects whether ClusterMon is already running; in this example that file is /var/run/crm_mon-external.pid. This resource is created as a clone so that it will run on every node in the cluster. The --watch-fencing option is specified to enable monitoring of fencing events in addition to resource events, including the start, stop, and monitor actions of the fencing resource.
# pcs resource create ClusterMon-External ClusterMon user=root \
update=10 extra_options="-E /usr/local/bin/crm_logger.sh --watch-fencing" \
htmlfile=/var/www/html/cluster_mon.html \
pidfile=/var/run/crm_mon-external.pid clone
Note
The crm_mon command that this resource executes and which could be run manually is as follows:
# /usr/sbin/crm_mon -p /var/run/crm_mon-manual.pid -d -i 5 \
-h /var/www/html/crm_mon-manual.html -E "/usr/local/bin/crm_logger.sh" \
--watch-fencing
The following example shows the format of the output of the monitoring notifications that this example
yields.
Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP
st_notify_fence Operation st_notify_fence requested by rh6node1pcmk.examplerh.com for peer
rh6node2pcmk.examplerh.com: OK (ref=b206b618-e532-42a5-92eb-44d363ac848e) 0 0 0 #177
Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP
start OK 0 0 0
Aug 7 11:31:32 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP
monitor OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com fence_xvms
monitor OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP
monitor OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com
ClusterMon-External start OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com fence_xvms
start OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP
start OK 0 0 0
Aug 7 11:33:59 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com
ClusterMon-External monitor OK 0 0 0
Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 8:
monitor ClusterMon-External:1_monitor_0 on rh6node2pcmk.examplerh.com
Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 16: start
ClusterMon-External:1_start_0 on rh6node2pcmk.examplerh.com
Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node1pcmk.examplerh.com ClusterIP
stop OK 0 0 0
Aug 7 11:34:00 rh6node1pcmk crmd[2887]: notice: te_rsc_command: Initiating action 15:
monitor ClusterMon-External_monitor_10000 on rh6node2pcmk.examplerh.com
Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com
ClusterMon-External start OK 0 0 0
Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com
ClusterMon-External monitor OK 0 0 0
Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP
start OK 0 0 0
Aug 7 11:34:00 rh6node1pcmk ClusterMon-External: rh6node2pcmk.examplerh.com ClusterIP
monitor OK 0 0 0
The virtual remote nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
The cluster stack (pacemaker and corosync), running on the cluster nodes, launches the virtual machines and immediately connects to the pacemaker_remote service, allowing the virtual machines to integrate into the cluster.
The key difference between the virtual machine remote nodes and the cluster nodes is that the remote nodes are not running the cluster stack. This means the remote nodes do not take part in quorum. On the other hand, this also means that remote nodes are not bound to the scalability limits associated with the cluster stack. Other than the quorum limitation, the remote nodes behave just like cluster nodes with respect to resource management. The cluster is fully capable of managing and monitoring resources on each remote node. You can build constraints against remote nodes, put them in standby, or perform any other action you perform on cluster nodes. Remote nodes appear in cluster status output just as cluster nodes do.
Field                    Default
remote-node              <none>
remote-port              3121
remote-addr              remote-node value used as hostname
remote-connect-timeout   60s
If you need to change the default port or authkey location for either Pacemaker or pacemaker_remote, there are environment variables you can set that affect both of those daemons. These environment variables can be enabled by placing them in the /etc/sysconfig/pacemaker file as follows.
#==#==# Pacemaker Remote
# Use a custom directory for finding the authkey.
PCMK_authkey_location=/etc/pacemaker/authkey
#
# Specify a custom port for Pacemaker Remote connections
PCMK_remote_port=3121
6. After creating the VirtualDomain resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node.
# pcs resource create webserver apache params \
configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
# pcs constraint location webserver prefers guest1
Once a remote node is integrated into the cluster, you can execute pcs commands from the remote node itself, just as if the remote node was running Pacemaker.
Field            Description
role             Limits the rule to apply only when the resource is in that role. Allowed values: Started, Slave, and Master. NOTE: A rule with role="Master" cannot determine the initial location of a clone instance. It will only affect which of the active instances will be promoted.
score            The score to apply if the rule evaluates to true. Limited to use in rules that are part of location constraints.
score-attribute  The node attribute to look up and use as a score if the rule evaluates to true. Limited to use in rules that are part of location constraints.
boolean-op       How to combine the result of multiple expression objects. Allowed values: and and or. The default value is and.

Node attribute expressions take the fields value, attribute, and type.
Time/date based expressions take the fields start, end, and operation.

Date specifications take the following fields:
id
hours
monthdays
weekdays
yeardays
months
weeks
years
weekyears
moon
Note
In addition to the properties described in this table, there are additional cluster properties that
are exposed by the cluster software. For these properties, it is recommended that you not
change their values from their defaults.
Option                        Default
batch-limit                   30
migration-limit               -1 (unlimited)
no-quorum-policy              stop
symmetric-cluster             true
stonith-enabled               true
stonith-action                reboot
cluster-delay                 60s
stop-orphan-resources         true
stop-orphan-actions           true
pe-error-series-max           -1 (all)
pe-warn-series-max            -1 (all)
pe-input-series-max           -1 (all)
dc-version
cluster-recheck-interval      15min
default-action-timeout        20s
maintenance-mode              false
shutdown-escalation           20min
stonith-timeout               60s
stop-all-resources            false
default-resource-stickiness   5000
is-managed-default            true
enable-acl                    false
To display the values of the property settings that have been set for the cluster, use the following pcs command.
pcs property list
To display all of the values of the property settings for the cluster, including the default values of the
property settings that have not been explicitly set, use the following command.
pcs property list --all
To display the current value of a specific cluster property, use the following command.
pcs property show property
For example, to display the current value of the cluster-infrastructure property, execute the following command:
# pcs property show cluster-infrastructure
Cluster Properties:
cluster-infrastructure: cman
For informational purposes, you can display a list of all of the default values for the properties,
whether they have been set to a value other than the default or not, by using the following command.
pcs property [list|show] --defaults
Note
When using the pcsd Web UI to configure a cluster, you can move your mouse over the text describing many of the options to see longer descriptions of those options as a tooltip display.
To create a resource group, select a resource that will be part of the group from the Resources screen, then click Create Group. This displays the Create Group screen. Enter a group name and click Create Group. This returns you to the Resources screen, which now displays the group name for the resource. After you have created a resource group, you can indicate that group name as a resource parameter when you create or modify additional resources.
Appendix A. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7

A.1. Cluster Creation with rgmanager and with Pacemaker

This section compares cluster configuration with rgmanager (Red Hat Enterprise Linux 6) and with Pacemaker (Red Hat Enterprise Linux 7) for the following configuration components: the cluster configuration file; network setup; cluster configuration tools; installation; starting cluster services; cluster creation; propagating the cluster configuration to all nodes; global cluster properties; quorum in 2-node clusters; resources; resource behavior, grouping, and start/stop order; resource administration (moving, starting, and stopping resources); removing a cluster configuration completely; resources active on multiple nodes and in multiple modes; and multiple (backup) fencing devices per node.

For multiple (backup) fencing devices per node, the Pacemaker procedure is as follows: Create a fencing device for each node with the pcs stonith create command or with the pcsd Web UI. For devices that can fence multiple nodes, you need to define them only once rather than separately for each node. You can also define pcmk_host_map to configure fencing devices for all nodes with a single command; for information on pcmk_host_map refer to Table 4.1, General Properties of Fencing Devices. You can define the stonith-timeout value for the cluster as a whole. Configure fencing levels.
A.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7
The Red Hat Enterprise Linux 6.5 release supports cluster configuration with Pacemaker, using the pcs configuration tool. There are, however, some differences in cluster installation and creation between Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7 when using Pacemaker. This section provides a brief listing of the command differences between these two releases. For further information on installation and cluster creation in Red Hat Enterprise Linux 7, see Chapter 1, Red Hat High Availability Add-On Configuration and Management Reference Overview and Chapter 3, Cluster Creation and Administration.
A.2.1. Pacemaker Installation in Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7
The following commands install the Red Hat High Availability Add-On software packages that Pacemaker requires in Red Hat Enterprise Linux 6.5 and prevent corosync from starting without cman. You must run these commands on each node in the cluster.
[root@rhel6]# yum install pacemaker cman
[root@rhel6]# yum install pcs
[root@rhel6]# chkconfig corosync off
In Red Hat Enterprise Linux 7, in addition to installing the Red Hat High Availability Add-On software
packages that Pacemaker requires, you set up a password for the pcs administration account
named hacluster, and you start and enable the pcsd service. You also authenticate the
administration account for the nodes of the cluster.
In Red Hat Enterprise Linux 7, run the following commands on each node in the cluster.
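A sketch of those commands follows, reconstructed from the steps just described; the exact package set shown is an assumption:

[root@rhel7]# yum install pcs fence-agents-all
[root@rhel7]# passwd hacluster
[root@rhel7]# systemctl start pcsd.service
[root@rhel7]# systemctl enable pcsd.service

You can then authenticate the hacluster account for the nodes of the cluster, as in the following hypothetical two-node example:

[root@rhel7]# pcs cluster auth z1.example.com z2.example.com -u hacluster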
A.2.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux 7
To create a Pacemaker cluster in Red Hat Enterprise Linux 6.5, you must create the cluster and start
the cluster services on each node in the cluster. For example, to create a cluster named my_cluster
that consists of nodes z1.example.com and z2.example.com and start cluster services on those
nodes, run the following commands from both z1.example.com and z2.example.com.
[root@rhel6]# pcs cluster setup --name my_cluster z1.example.com z2.example.com
[root@rhel6]# pcs cluster start
In Red Hat Enterprise Linux 7, you run the cluster creation command from one node of the cluster.
The following command, run from one node only, creates the cluster named my_cluster that
consists of nodes z1.example.com and z2.example.com and starts cluster services on those
nodes.
[root@rhel7]# pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
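As a quick check in either release (an illustrative addition, not part of the procedure above), you can display the cluster status after creation:

[root@rhel7]# pcs cluster status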
Revision History

Revision 2.1-8  Mon Nov 9 2015  Steven Levine
Preparing document for 7.2 GA publication

Revision 2.1-7  Fri Oct 16 2015  Steven Levine
Resolves #1129854: Documents updated pcs resource cleanup command
Resolves #1267443: Fixes small errors and typos
Resolves #1245806: Documents pcs resource relocate run command
Resolves #1245809: Clarifies behavior of pcs resource move command
Resolves #1244872: Updates and expands documentation on event monitoring
Resolves #1129713: Adds warning regarding using quorum unblock
Resolves #1229826: Adds note regarding time delay following pcs cluster create
Resolves #1202956: Adds procedure for adding new nodes to a running cluster
Resolves #1211674: Removes incorrect reference to stonith-timeout being valid as a per-device attribute
Resolves #129451: Documents resource-discovery location constraint option
Resolves #1129851: Documents use of tooltips when using the pcsd Web UI

Revision 2.1-5  Mon Aug 24 2015  Steven Levine
Preparing document for 7.2 Beta publication

Revision 1.1-9  Mon Feb 23 2015  Steven Levine
Version for 7.1 GA release

Revision 1.1-7  Thu Dec 11 2014  Steven Levine
Version for 7.1 Beta release

Revision 1.1-2  Tue Dec 2 2014  Steven Levine
Resolves #1129854: Documents updated pcs resource cleanup command
Resolves #1129837: Documents lifetime parameter for pcs resource move command
Resolves #1129862, #1129461: Documents support for access control lists
Resolves #1069385: Adds port number to enable for DLM
Resolves #1121128, #1129738: Updates and clarifies descriptions of set colocation and ordered sets
Resolves #1129713: Documents cluster quorum unblock feature
Resolves #1129722: Updates description of auto_tie_breaker quorum option
Resolves #1129737: Documents disabled parameter for pcs resource create command
Resolves #1129859: Documents pcs cluster configuration backup and restore
Resolves #1098289: Small clarification to description of port requirements

Revision 1.1-1  Tue Nov 18 2014  Steven Levine
Draft updates for 7.1 release

Revision 0.1-41  Mon Jun 2 2014  Steven Levine
Version for 7.0 GA release

Revision 0.1-39  Thu May 29 2014  Steven Levine
Resolves #794494: Documents quorum support

Revision 0.1-23  Wed Apr 9 2014  Steven Levine
Updated 7.0 Beta draft

Revision 0.1-15  Fri Dec 6 2013  Steven Levine
Beta document
Resolves #1088465: Documents unfencing
Resolves #987087: Documents pcs updates

Revision 0.1-2  Thu May 16 2013  Steven Levine
First printing of initial draft
Index

- , Cluster Creation

A
Action
- Property

B
batch-limit, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
boolean-op, Pacemaker Rules
- Constraint Rule, Pacemaker Rules

C
Clone
- Option

D
dampen, Moving Resources Due to Connectivity Changes
- Ping Resource Option, Moving Resources Due to Connectivity Changes
Date Specification, Date Specifications
- hours, Date Specifications
- id, Date Specifications
- monthdays, Date Specifications
- months, Date Specifications
- moon, Date Specifications
- weekdays, Date Specifications
- weeks, Date Specifications
- weekyears, Date Specifications
- yeardays, Date Specifications
- years, Date Specifications
Date/Time Expression, Time/Date Based Expressions

E
enable-acl, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
enabled, Resource Operations
- Action Property, Resource Operations
enabling
- resources, Enabling and Disabling Cluster Resources
end, Time/Date Based Expressions
- Constraint Expression, Time/Date Based Expressions

F
failure-timeout, Resource Meta Options
- Resource Option, Resource Meta Options
features, new and changed, New and Changed Features

G
globally-unique, Creating and Removing a Cloned Resource
- Clone Option, Creating and Removing a Cloned Resource
Group Resources, Resource Groups
Groups, Resource Groups, Group Stickiness

H
host_list, Moving Resources Due to Connectivity Changes
- Ping Resource Option, Moving Resources Due to Connectivity Changes
hours, Date Specifications
- Date Specification, Date Specifications

I
id, Resource Properties, Resource Operations, Date Specifications
- Action Property, Resource Operations
- Date Specification, Date Specifications
- Location Constraints, Location Constraints
- Multi-State Property, Multi-State Resources: Resources That Have Multiple Modes
- Resource, Resource Properties
interleave, Creating and Removing a Cloned Resource
- Clone Option, Creating and Removing a Cloned Resource
interval, Resource Operations
- Action Property, Resource Operations
is-managed, Resource Meta Options
- Resource Option, Resource Meta Options
is-managed-default, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options

K
kind, Order Constraints
- Order Constraints, Order Constraints

L
last-lrm-refresh, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
Location
- Determine by Rules, Using Rules to Determine Resource Location
- score, Location Constraints
Location Constraints, Location Constraints
Location Relative to other Resources, Colocation of Resources

M
maintenance-mode, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
master-max, Multi-State Resources: Resources That Have Multiple Modes
- Multi-State Option, Multi-State Resources: Resources That Have Multiple Modes
master-node-max, Multi-State Resources: Resources That Have Multiple Modes
- Multi-State Option, Multi-State Resources: Resources That Have Multiple Modes
migration-limit, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
migration-threshold, Resource Meta Options
- Resource Option, Resource Meta Options
monthdays, Date Specifications
- Date Specification, Date Specifications
months, Date Specifications
- Date Specification, Date Specifications
moon, Date Specifications
- Date Specification, Date Specifications
Moving, Manually Moving Resources Around the Cluster
- Resources, Manually Moving Resources Around the Cluster
Multi-state, Multi-State Resources: Resources That Have Multiple Modes
Multi-State, Multi-state Stickiness
- Option
  - master-max, Multi-State Resources: Resources That Have Multiple Modes
  - master-node-max, Multi-State Resources: Resources That Have Multiple Modes
- Property
  - id, Multi-State Resources: Resources That Have Multiple Modes
Multi-State Option, Multi-State Resources: Resources That Have Multiple Modes
Multi-State Property, Multi-State Resources: Resources That Have Multiple Modes
multiple-active, Resource Meta Options
- Resource Option, Resource Meta Options
multiplier, Moving Resources Due to Connectivity Changes
- Ping Resource Option, Moving Resources Due to Connectivity Changes

N
name, Resource Operations
- Action Property, Resource Operations
no-quorum-policy, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
notify, Creating and Removing a Cloned Resource
- Clone Option, Creating and Removing a Cloned Resource

O
on-fail, Resource Operations
- Action Property, Resource Operations
operation, Node Attribute Expressions, Time/Date Based Expressions
- Constraint Expression, Node Attribute Expressions, Time/Date Based Expressions
Option
Order
- kind, Order Constraints
Order Constraints, Order Constraints
- symmetrical, Order Constraints
ordered, Creating and Removing a Cloned Resource
- Clone Option, Creating and Removing a Cloned Resource
Ordering, Order Constraints
overview
- features, new and changed, New and Changed Features

P
pe-error-series-max, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
pe-input-series-max, Summary of Cluster Properties and Options

Q
Querying
- Cluster Properties, Querying Cluster Property Settings
Querying Options, Querying Cluster Property Settings

R
Removing
- Cluster Properties, Setting and Removing Cluster Properties
Removing Properties, Setting and Removing Cluster Properties
requires, Resource Meta Options
Resource, Resource Properties
- Constraint
  - Attribute Expression, Node Attribute Expressions
  - Date Specification, Date Specifications
  - Date/Time Expression, Time/Date Based Expressions
  - Duration, Durations
  - Rule, Pacemaker Rules
- Constraints
  - Colocation, Colocation of Resources
  - Order, Order Constraints
- Location
  - Determine by Rules, Using Rules to Determine Resource Location
- Location Relative to other Resources, Colocation of Resources
- Moving, Manually Moving Resources Around the Cluster
- Option
  - failure-timeout, Resource Meta Options
  - is-managed, Resource Meta Options
  - migration-threshold, Resource Meta Options
  - multiple-active, Resource Meta Options
  - priority, Resource Meta Options
  - requires, Resource Meta Options
  - resource-stickiness, Resource Meta Options
  - target-role, Resource Meta Options
- Property

S
score, Location Constraints, Pacemaker Rules
- Constraint Rule, Pacemaker Rules
- Location Constraints, Location Constraints
score-attribute, Pacemaker Rules

T
target-role, Resource Meta Options
- Resource Option, Resource Meta Options
Time Based Expressions, Time/Date Based Expressions
timeout, Resource Operations
- Action Property, Resource Operations

V
value, Node Attribute Expressions
- Constraint Expression, Node Attribute Expressions

W
weekdays, Date Specifications
- Date Specification, Date Specifications
weeks, Date Specifications
- Date Specification, Date Specifications
weekyears, Date Specifications
- Date Specification, Date Specifications

Y
yeardays, Date Specifications
- Date Specification, Date Specifications
years, Date Specifications
- Date Specification, Date Specifications