
LINBIT DRBD 8.4 Configuration Guide: NFS On RHEL 6
D. Cooper Stevenson
Copyright 2013 LINBIT USA, LLC
Trademark notice
DRBD and LINBIT are trademarks or registered trademarks of LINBIT in Austria, the United States, and other countries.
Other names mentioned in this document may be trademarks or registered trademarks of their respective owners.
License information
This is a commercial document from LINBIT. Distribution terms apply to this document; for more information, please visit:
http://links.linbit.com/t-and-c
Table of Contents
1. Current Configuration
  1.1. alice.mydomain.org
  1.2. bob.mydomain.org
2. Preparation
  2.1. Install DRBD Packages
  2.2. Extend LVs
3. DRBD Configuration
  3.1. Create DRBD Metadata
  3.2. Bring up the resources
  3.3. Enable the resource
  3.4. Check DRBD Status
4. Installing/Configuring Corosync
5. NFS Installation
6. Pacemaker Configuration
  6.1. NFS Pacemaker Configuration
  6.2. Resource Primitive Configuration
  6.3. Master/Slave Relationship Configuration
  6.4. Filesystem Resource Configuration
  6.5. Virtual IP Address
  6.6. Filesystem Group
  6.7. NFS Group
  6.8. NFS Kernel Server Resource
7. External Ping Resource
8. NFS Exports
  8.1. Non-Root NFS Export
9. Using The NFS Service
  9.1. Connecting With NFS 4
10. Ensuring Smooth Cluster Failover
11. System Testing
This guide outlines the installation and initialization of DRBD for a two-node redundant data cluster whose host names are alice and bob.
Assumptions
1. The servers are running the 2.6.32-279.14.1.el6.x86_64 kernel
2. DRBD 8.4.x will be installed (installation instructions enclosed via link)
3. alice will serve as the data source for the initial data synchronization while bob serves as the target
4. Interface eth0 on each host is a 1 Gbps device and will be used to replicate the DRBD resources
via cross-over cable
5. The server runs a kernel capable of exporting NFS shares
6. The filesystems on the LVM Logical Volumes will be NFS-exported at /export/home and /export/public, respectively
7. The servers are not running iptables or other firewall daemon that blocks TCP ports 7789 and
7790
8. Each node will have a private 192.168.1.x address with a subnet mask of 255.255.255.0
9. Each node will have a public 128.46.19.x address with a subnet mask of 255.255.255.0
This guide outlines an NFS configuration that provides rolling failover; if the cluster is writing to a file during a failover, the cluster will continue writing the file to the secondary server after the failover has occurred.
The cluster management software, Pacemaker, will monitor the exported filesystems, DRBD, the NFS kernel/daemon components, and the filesystem mounts.
Note
While a specific kernel version is given above, this guide applies to other kernel versions as well. The most important factor is obtaining the certified LINBIT kernel module/DRBD daemon pair for your running kernel. Contact technical support for details.
This guide assumes that your cluster has existing Logical Volumes. If you are starting your cluster from
scratch you may, of course, safely skip the Preparation and Extending LVM sections.
Prerequisites
1. Each Volume Group must have enough space [http://www.drbd.org/users-guide/ch-internals.html#s-meta-data-size] to accommodate the DRBD metadata (roughly 32 kByte per 1 GByte of storage).
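As a quick sanity check, you can compare each Volume Group's free space against the rule of thumb above. A rough sketch follows; the VG name VolGroup00 comes from the configuration in this guide, and the arithmetic uses the approximate 32 kByte-per-GByte figure rather than the exact formula from the linked page:
# List LV sizes and remaining free space in the Volume Group:
lvs --units g -o lv_name,lv_size VolGroup00
vgs --units m -o vg_name,vg_free VolGroup00
# Example arithmetic: a 50 GByte LV needs about 50 * 32 kByte = 1600 kByte of
# metadata; compare each result with the amount you extend the LV by in Section 2.2.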
Chapter 1. Current Configuration
1.1. alice.mydomain.org
/dev/mapper/VolGroup00-lv_devhome
50G 8.0G 41G 17% /export/home
/dev/mapper/VolGroup00-lv_devpublic
148G 26G 121G 18% /export/public
1.2. bob.mydomain.org
/dev/mapper/VolGroup00-lv_devhome
50G 8.0G 41G 17% /export/home
/dev/mapper/VolGroup00-lv_devpublic
148G 26G 121G 18% /export/public
Chapter 2. Preparation
On both hosts, as necessary:
1. Create backup snapshot or dd each filesystem
2. Install NTP (Time Daemon) if not already installed
3. Install LVM
a. This guide assumes the Volume Groups and Logical Volumes are configured as described at the beginning of this document. Documentation for configuring LVM may be found here [https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/index.html].
2.1. Install DRBD Packages
On both hosts. Add the LINBIT DRBD package repository for RHEL 6. To add the DRBD package repository to your configuration, create a file named linbit.repo in your /etc/yum.repos.d directory containing the following:
[drbd-8.4]
name=DRBD 8.4
baseurl=http://packages.linbit.com/<hash>/8.4/rhel6/x86_64
gpgcheck=0
Where <hash> is the randomized token string you received from LINBIT with your support package.
Avoid package conflicts: add the following line to the [base] section of /etc/yum.repos.d/rhel-base.repo:
exclude=drbd* kmod-drbd*
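For reference, the modified [base] stanza might look something like this; the existing entries are placeholders here, so keep whatever your file already contains and simply append the exclude line:
[base]
# ...existing name/mirrorlist/gpgkey entries stay as they are...
exclude=drbd* kmod-drbd*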
Copy the modified package installation configuration to bob:
# scp /etc/yum.repos.d/linbit.repo /etc/yum.repos.d/rhel-base.repo \
bob.mydomain.org:/etc/yum.repos.d/
Install the DRBD kernel module and DRBD binaries on both alice and bob:
# yum install drbd kmod-drbd
Load the DRBD Module on alice and bob:
# modprobe drbd
At this point you should see the following on both hosts:
[root@alice ~]# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by \
buildsystem@linbit, 2012-09-04 18:12:23
Modify /etc/hosts on both hosts:
192.168.1.10 alice.mydomain.org alice
192.168.1.20 bob.mydomain.org bob
2.2. Extend LVs
1. Extend the logical volumes to make room for the DRBD metadata (roughly 32 MB per TB)
a. Extend each LV by 4 MB to provide headroom for the metadata
On alice.
# lvextend -L+4M /dev/mapper/VolGroup00-lv_devhome
# lvextend -L+4M /dev/mapper/VolGroup00-lv_devpublic
On bob.
# lvextend -L+4M /dev/mapper/VolGroup00-lv_devhome
# lvextend -L+4M /dev/mapper/VolGroup00-lv_devpublic
Chapter 3. DRBD Configuration
Create the DRBD configuration in /etc/drbd.d/r0.res:
resource r0 {
  device /dev/drbd0;
  disk {
    no-disk-barrier;
    c-plan-ahead 10;
    c-fill-target 100k;
    c-min-rate 75k;
  }
  on alice.mydomain.org {
    disk /dev/mapper/VolGroup00-lv_devhome;
    address 192.168.1.10:7789;
    meta-disk internal;
  }
  on bob.mydomain.org {
    disk /dev/mapper/VolGroup00-lv_devhome;
    address 192.168.1.20:7789;
    meta-disk internal;
  }
}
Create the DRBD configuration in /etc/drbd.d/r1.res:
resource r1 {
  device /dev/drbd1;
  disk {
    no-disk-barrier;
    c-plan-ahead 10;
    c-fill-target 100k;
    c-min-rate 75k;
  }
  on alice.mydomain.org {
    disk /dev/mapper/VolGroup00-lv_devpublic;
    address 192.168.1.10:7790;
    meta-disk internal;
  }
  on bob.mydomain.org {
    disk /dev/mapper/VolGroup00-lv_devpublic;
    address 192.168.1.20:7790;
    meta-disk internal;
  }
}
Note
The c-plan-ahead, c-fill-target, and c-min-rate parameters are performance tuning settings that may be adjusted for your network environment. Documentation for these parameters may be found here [http://www.drbd.org/users-guide/s-resync.html#s-variable-rate-sync] and here [http://blogs.linbit.com/p/128/drbd-sync-rate-controller/].
Copy /etc/drbd.d/r0.res and /etc/drbd.d/r1.res to bob.
# scp /etc/drbd.d/r* bob:/etc/drbd.d
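Optionally, you can have drbdadm parse the configuration and echo it back on each node before creating any metadata; syntax errors in the .res files will show up here rather than later:
# drbdadm dump r0
# drbdadm dump r1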
3.1. Create DRBD Metadata
1. Optional: Check the firewall (presuming your system has telnet) for each port
root@alice:/etc/drbd.d# telnet 192.168.1.20 7789
Trying 192.168.1.20....
Connected to 192.168.1.20.
Escape character is '^]'.
2. Repeat the telnet command for port 7790
Note
If the telnet command shows no sign of connecting, try switching off iptables on both
hosts:
# /etc/init.d/iptables stop
Caution
Be sure to re-enable iptables (or whatever other firewall is in place), with the required ports open, before production use.
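For reference, a minimal sketch of iptables rules that would open the two replication ports used in this guide on the crossover interface eth0; adapt this to your existing firewall policy before relying on it:
# iptables -I INPUT -i eth0 -p tcp --dport 7789 -j ACCEPT
# iptables -I INPUT -i eth0 -p tcp --dport 7790 -j ACCEPT
# service iptables save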
Both Hosts
1. Start the DRBD daemon (for testing only; Pacemaker will manage the DRBD daemon in production use)
# service drbd start
3.2. Bring up the resources
1. Create DRBD metadata
On both hosts:
# drbdadm create-md r0
# drbdadm create-md r1
3.3. Enable the resource
On Both hosts:
# drbdadm up r0
# drbdadm up r1
Caution
The following will destroy the data on the destination server. Be certain which server is the source (the server whose data you wish to keep) and which is the destination (the server whose data will be overwritten).
This guide assumes that alice is the source server and bob is the destination server. Make sure you
are executing these commands from the source server.
At this point you may receive an address already in use message; this may safely be ignored.
On alice.
# drbdadm primary --force r0
# drbdadm primary --force r1
After issuing this command, the initial full synchronization will commence. You will be able to monitor
its progress via /proc/drbd. It may take some time depending on the size of the device.
By now, your DRBD device is fully operational, even before the initial synchronization has completed
(albeit with slightly reduced performance). You may now mount it and perform any other operation
you would with an accessible block device.
# watch -n1 cat /proc/drbd
3.4. Check DRBD Status
1. After the initial synchronization has completed, /proc/drbd reflects a fully synchronized Primary/Secondary installation of DRBD, with output similar to the following:
On alice.
# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by \
buildsystem@linbit, 2012-09-04 18:12:23
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:37 nr:0 dw:1 dr:706 al:1 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:5 nr:0 dw:1 dr:674 al:1 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Chapter 4. Installing/Configuring Corosync
Perform each step on both hosts:
Install Pacemaker:
# yum install pacemaker
Edit /etc/corosync/corosync.conf:
compatibility: whitetank

# Totem Protocol options.
totem {
    version: 2
    secauth: off
    threads: 0
    rrp_mode: passive
    interface {
        # This is the back-channel subnet,
        # which is the primary network
        # for the totem protocol.
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
    member {
        memberaddr: 192.168.1.10
    }
    member {
        memberaddr: 192.168.1.20
    }
}
logging {
    to_syslog: yes
    fileline: off
    to_stderr: yes
    to_logfile: yes
    logfile: /var/log/corosync.log
    debug: off
    timestamp: on
}
amf {
    mode: disabled
}
service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 1
}
Start the Corosync and Pacemaker services:
# service corosync start
# service pacemaker start
Make these daemons start at boot time:
# chkconfig corosync on
# chkconfig pacemaker on
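Optionally, verify that Corosync has formed its ring on the back-channel subnet before looking at the cluster monitor; the exact output varies by Corosync version:
# corosync-cfgtool -s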
Your cluster configuration monitor (executing crm_mon at the command prompt) should now look similar to this:
============
Last updated: Thu Jan 10 15:49:22 2013
Last change: Thu Jan 10 15:49:12 2013 via cibadmin on alice.mydomain.org
Stack: openais
Current DC: alice.mydomain.org - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ alice.mydomain.org bob.mydomain.org ]
Chapter 5. NFS Installation
Create a link from /var/lib/nfs to the shared resource /export/home/nfs on both alice and bob:
mkdir -p /export/home/nfs
ln -s /export/home/nfs /var/lib
When you are finished with the link your /var/lib/nfs link will look like this:
[root@alice ~]# ls -l /var/lib/nfs
lrwxrwxrwx 1 root root 16 Jan 24 22:45 /var/lib/nfs -> /export/home/nfs
Install the NFS server packages:
# yum install nfs-utils
Pacemaker will handle the NFS tasks such as starting/stopping, exports, etc. so long as the base NFS
server packages are installed.
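Since Pacemaker will start and stop the NFS server itself, it is a good idea to make sure the distribution init script does not also start it at boot; on RHEL 6 the script is named nfs, matching the lsb:nfs resource configured later in this guide:
# service nfs stop
# chkconfig nfs off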
Chapter 6. Pacemaker Configuration
In a highly available NFS server configuration that involves a 2-node cluster, you should:
1. Disable STONITH
2. Set Pacemaker's "no quorum policy" to ignore loss of quorum
3. Set the default resource stickiness to 200
To do so, issue the following commands from the CRM shell:
# crm
crm(live)# configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults resource-stickiness="200"
crm(live)configure# commit
6.1. NFS Pacemaker Configuration
Now that Pacemaker is installed and its default configuration set, we will use Pacemaker to handle the
remainder of the NFS filesystem tasks.
A highly available NFS service consists of the following cluster resources:
1. A DRBD resource to replicate data, which is switched from and to the Primary and Secondary roles
as deemed necessary by the cluster resource manager;
2. An LVM Volume Group, which is made available on whichever node currently holds the DRBD re-
source in the Primary role;
3. One or more filesystems residing on any of the Logical Volumes in the Volume Group, which the
cluster manager mounts wherever the VG is active;
4. A virtual, floating cluster IP address, allowing NFS clients to connect to the service no matter which
physical node it is running on;
5. A virtual NFS root export (required for NFSv4; may be omitted if the server is to support v3 connections only);
6. One or more NFS exports, typically corresponding to the filesystems mounted from LVM Logical
Volumes.
The following Pacemaker configuration example assumes that 128.46.19.241 is the virtual IP ad-
dress to use for a NFS server which serves clients in the 128.46.19.0/24 subnet.
Into these export directories, the cluster will mount ext4 filesystems from the DRBD devices /dev/drbd0 and /dev/drbd1. On each node these devices are backed by the Logical Volumes VolGroup00-lv_devhome and VolGroup00-lv_devpublic, respectively, so the same pair of volumes exists on both alice and bob, and each pair is replicated by its own DRBD device.
To start configuring your NFS cluster, execute the following:
6.2. Resource Primitive Configuration
# crm
crm(live)# configure
crm(live)configure# primitive p_drbdr0_nfs ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="31s" enabled="false" role="Master" \
op monitor interval="29s" enabled="false" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
These directives configure the resource primitive p_drbdr0_nfs, which manages the DRBD resource r0.
Next, configure resource r1. From the CRM configuration prompt (crm(live)configure), enter the following:
crm(live)configure# primitive p_drbdr1_nfs ocf:linbit:drbd \
params drbd_resource="r1" \
op monitor interval="31s" enabled="false" role="Master" \
op monitor interval="29s" enabled="false" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
6.3. Master/Slave Relationship Configuration
Since we want only one instance of a mounted DRBD resource on one node of the cluster at a time, we create a master/slave relationship for the cluster. Each master/slave (ms) directive corresponds to a single DRBD resource:
ms ms_drbdr0_nfs p_drbdr0_nfs \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true" \
target-role="Started" is-managed="true"
ms ms_drbdr1_nfs p_drbdr1_nfs \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true" \
target-role="Started" is-managed="true"
These directives will create a Pacemaker Master/Slave resource corresponding to the DRBD resources
r0 and r1. Pacemaker should now activate your DRBD resource on both nodes, and promote it to
the Master role on one of them. You may confirm this with the crm_mon utility, or by looking at the
contents of /proc/drbd.
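For a one-shot snapshot from the shell, rather than the continuously refreshing monitor, crm_mon accepts the -1 flag:
# crm_mon -1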
Contents of Promoted DRBD Resources.
# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by \
buildsystem@linbit, 2012-09-04 18:12:23
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:37 nr:0 dw:1 dr:706 al:1 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:5 nr:0 dw:1 dr:674 al:1 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
6.4. Filesystem Resource Configuration
Create the filesystem resource configuration directives for the DRBD resources:
crm(live)configure# primitive p_fs_home \
ocf:heartbeat:Filesystem \
params device=/dev/drbd0 \
directory=/export/home \
fstype=ext4 \
op monitor interval="10s"
crm(live)configure# primitive p_fs_public \
ocf:heartbeat:Filesystem \
params device=/dev/drbd1 \
directory=/export/public \
fstype=ext4 \
op monitor interval="10s"
Up to this point your complete configuration (viewed with crm configure show from the command line) will look like this:
node bob.mydomain.org
node alice.mydomain.org
primitive p_drbdr0_nfs ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="31s" enabled="false" role="Master" \
op monitor interval="29s" enabled="false" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
primitive p_drbdr1_nfs ocf:linbit:drbd \
params drbd_resource="r1" \
op monitor interval="31s" enabled="false" role="Master" \
op monitor interval="29s" enabled="false" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
primitive p_fs_home ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/export/home" fstype="ext4" \
op monitor interval="10s"\
meta target-role="Started"
primitive p_fs_public ocf:heartbeat:Filesystem \
params device="/dev/drbd1" directory="/export/public" fstype="ext4" \
op monitor interval="10s"
meta target-role="Started"
ms ms_drbdr0_nfs p_drbdr0_nfs \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true" target-role="Started" \
is-managed="true"
ms ms_drbdr1_nfs p_drbdr1_nfs \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true" target-role="Started" \
is-managed="true"
property $id="cib-bootstrap-options" \
dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1357863149"
rsc_defaults $id="rsc-options" \
resource-stickiness="200"
Your cluster monitor will look similar to the following:
============
Last updated: Thu Jan 10 16:12:29 2013
Last change: Thu Jan 10 16:12:28 2013 via crmd on bob.mydomain.org
Stack: openais
Current DC: alice.mydomain.org - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
6 Resources configured.
============
Online: [ alice.mydomain.org bob.mydomain.org ]
Master/Slave Set: ms_drbdr0_nfs [p_drbdr0_nfs]
Masters: [ alice.mydomain.org ]
Slaves: [ bob.mydomain.org ]
Master/Slave Set: ms_drbdr1_nfs [p_drbdr1_nfs]
Masters: [ alice.mydomain.org ]
Slaves: [ bob.mydomain.org ]
p_fs_home (ocf::heartbeat:Filesystem): Started alice.mydomain.org
p_fs_public (ocf::heartbeat:Filesystem): Started alice.mydomain.org
6.5. Virtual IP Address
To enable smooth and seamless failover, your NFS clients will be connecting to the NFS service via a
floating cluster IP address, rather than via any of the hosts' physical IP addresses. This is the last resource
to be added to the cluster configuration:
crm(live)configure# primitive p_ip_nfs \
ocf:heartbeat:IPaddr2 \
params ip=128.46.19.241 \
cidr_netmask=24 \
op monitor interval="30s"
Note
The ocf:heartbeat:IPaddr2 resource agent will automatically switch the virtual IP address to the active node. No other configuration directives (colocation constraints) are necessary.
At this point Pacemaker will set up the floating cluster IP address. You may confirm that the cluster IP
is running correctly by invoking ip address show. The cluster IP should be added as a secondary
address to whatever interface is connected to the 128.46.19.0/24 subnet.
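For example, filtering for the virtual address used in this guide (the interface name shown in the output will depend on your wiring):
# ip address show | grep 128.46.19.241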
6.6. Filesystem Group
Create a resource group for the filesystems. We will later use this group to ensure that the filesystem group is started on the same node where the DRBD Master/Slave resources are in the Master role:
crm(live)configure# group g_fs \
p_fs_home p_fs_public
6.7. NFS Group
Next, we create a group that will handle the NFS migration. Note the order of the primitives in this (or any other) group: the primitives are started in the order listed, from left to right, and stopped in reverse order.
The g_nfs group, for example, will first start the virtual IP address, followed by the two NFS exports. Shutdown is in reverse: the two NFS exports are stopped first, followed by the virtual IP address.
Enter the following:
crm(live)configure# group g_nfs \
p_ip_nfs p_exportfs_home p_exportfs_public
6.8. NFS Kernel Server Resource
With this resource, Pacemaker ensures that the NFS server daemons are always available. In the crm shell, this is configured using the lsb resource class, as follows:
crm(live)configure# primitive p_lsb_nfsserver lsb:nfs \
op monitor interval="30s"
Note
The name of the lsb resource type must be identical to the filename of the NFS server init
script, installed under /etc/init.d. For example, if your distribution ships this script as /
etc/init.d/nfs, then the resource must use the lsb:nfs resource type.
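On RHEL 6 the nfs-utils package installs the script as /etc/init.d/nfs; you can confirm the exact name on your system with:
# ls /etc/init.d/nfs*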
Finally, commit the configuration:
crm(live)configure# commit
After you have committed your changes, Pacemaker should:
1. Mount the two DRBD-backed filesystems at /export/home and /export/public on the same node (confirm with mount or by looking at /proc/mounts, as shown below).
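For example, on the node currently holding the Master role:
# mount | grep /export
# grep drbd /proc/mounts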
Chapter 7. External Ping Resource
We want to ensure that the cluster can detect a failure and fail over in the event that the working node loses connectivity to the rest of the network. The ocf:pacemaker:ping resource agent is designed to continually ping a configured host (usually the gateway) and collect weighted connectivity statistics over time. Pacemaker assigns a score to the sampling and will fail the cluster over should the working node's score fall below a given threshold.
Enter the following:
crm(live)configure# primitive p_ping ocf:pacemaker:ping \
params host_list="128.46.19.1" multiplier="1000" name="p_ping" \
op monitor interval="30" timeout="60"
Since we want the ping statistics to be collected on both nodes, we add a clone directive for the ping primitive:
crm(live)configure# clone cl_ping p_ping
Commit your changes:
crm(live)configure# commit
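The connectivity score produced by this clone is consumed by a location constraint that keeps the filesystem group off any node whose ping score drops to zero. The constraint appears in the final configuration in the next chapter, and is reproduced here so the threshold mentioned above is visible in one place (follow it with another commit if you enter it at this point):
crm(live)configure# location g_fs_on_connected_node g_fs \
rule $id="g_fs_on_connected_node-rule" -inf: not_defined p_ping or p_ping lte 0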
Chapter 8. NFS Exports
Once you have established that your DRBD and Filesystem resources are working properly, you may
continue with the resources managing your NFS exports. To create highly available NFS export re-
sources, use the exportfs resource type.
To enable NFSv4 support, you must configure one and only one NFS export whose fsid option
is either 0 (as used in the example below) or the string root. This is the root of the virtual NFSv4
filesystem.
This resource does not hold any actual NFS-exported data, merely the empty directory (/export) that the other NFS exports are mounted into. Since there is no shared data involved here, this resource could safely be cloned; in this guide it is run as a single instance, colocated with the NFS server (see the final configuration).
crm(live)configure# primitive p_exportfs_root ocf:heartbeat:exportfs \
params fsid="0" directory="/export" options="rw,sync,crossmnt" \
clientspec="128.46.19.0/255.255.255.0" wait_for_leasetime_on_stop="false" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
Commit the configuration and test it by running exportfs -v.
[root@alice drbd.d]# exportfs -v
/ 128.46.19.0/255.255.255.0(rw,wdelay,crossmnt,root_squash,no_subtree_check,fsid=0)
8.1. Non-Root NFS Export
All NFS exports that do not represent an NFSv4 virtual filesystem root must set the fsid option to
either a unique positive integer (as used in the example), or a UUID string (32 hex digits with arbitrary
punctuation).
Create NFS exports with the following commands:
crm(live)configure# primitive p_exportfs_home ocf:heartbeat:exportfs \
params fsid="1" directory="/export/home" \
options="rw,sync,no_root_squash,mountpoint" \
clientspec="128.46.19.0/255.255.255.0" \
wait_for_leasetime_on_stop="false" \
op monitor interval="30s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s" \
meta is-managed="true"
crm(live)configure# primitive p_exportfs_public ocf:heartbeat:exportfs \
params fsid="2" directory="/export/public" \
options="rw,sync,no_root_squash,mountpoint" \
clientspec="128.46.19.0/255.255.255. 0" \
wait_for_leasetime_on_stop="false" \
op monitor interval="30s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s" \
meta is-managed="true"
Modify (or confirm) the g_nfs group so that it contains the virtual IP address and the new NFS export primitives:
crm(live)configure# group g_nfs \
p_ip_nfs p_exportfs_home p_exportfs_public
Finally, commit your changes:
crm(live)configure# commit
A df -h on the primary node should now look like this:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 985M 698M 238M 75% /
tmpfs 499M 23M 477M 5% /dev/shm
/dev/drbd0 388M 11M 358M 3% /export/home
/dev/drbd1 380M 11M 350M 3% /export/public
Your crm status output will look similar to the following:
# crm status
============
Last updated: Fri Jan 11 09:06:35 2013
Last change: Fri Jan 11 09:04:07 2013 via cibadmin on alice.mydomain.org
Stack: openais
Current DC: bob.mydomain.org - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
11 Resources configured.
============
Online: [ alice.mydomain.org bob.mydomain.org ]
p_exportfs_root (ocf::heartbeat:exportfs): Started alice.mydomain.org
p_lsb_nfsserver (lsb:nfs): Started alice.mydomain.org
Resource Group: g_fs
p_fs_home (ocf::heartbeat:Filesystem): Started alice.mydomain.org
p_fs_public (ocf::heartbeat:Filesystem): Started alice.mydomain.org
Resource Group: g_nfs
p_ip_nfs (ocf::heartbeat:IPaddr2): Started alice.mydomain.org
p_exportfs_home (ocf::heartbeat:exportfs): Started alice.mydomain.org
p_exportfs_public (ocf::heartbeat:exportfs): Started alice.mydomain.org
Master/Slave Set: ms_drbdr0_nfs [p_drbdr0_nfs]
Masters: [ alice.mydomain.org ]
Slaves: [ bob.mydomain.org ]
Master/Slave Set: ms_drbdr1_nfs [p_drbdr1_nfs]
Masters: [ alice.mydomain.org ]
Slaves: [ bob.mydomain.org ]
Clone Set: cl_ping [p_ping]
Started: [ bob.mydomain.org alice.mydomain.org ]
exportfs -v should now display the following:
# exportfs -v
/public 128.46.19.0/255.255.255.0(rw,wdelay,root_squash,no_subtree_check,fsid=2,mountpoint)
/ 128.46.19.0/255.255.255.0(rw,wdelay,crossmnt,root_squash,no_subtree_check,fsid=0)
/home 128.46.19.0/255.255.255.0(rw,wdelay,root_squash,no_subtree_check,fsid=1,mountpoint)
Note
There is no way to make your NFS exports bind to just this cluster IP address; the kernel NFS
server always binds to the wildcard address (0.0.0.0 for IPv4). However, your clients must
connect to the NFS exports through the floating IP address only, otherwise the clients will
suffer service interruptions on cluster failover.
Your final configuration will look like this:
node alice.mydomain.org \
attributes standby="on"
node bob.mydomain.org \
attributes standby="off"
primitive p_drbdr0_nfs ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="31s" role="Master" \
op monitor interval="29s" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
primitive p_drbdr1_nfs ocf:linbit:drbd \
params drbd_resource="r1" \
op monitor interval="31s" role="Master" \
op monitor interval="29s" role="Slave" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="120s"
primitive p_exportfs_home ocf:heartbeat:exportfs \
params fsid="1" directory="/export/home" \
options="rw,sync,no_root_squash,mountpoint" \
clientspec="128.46.19.0/255.255.255.0" wait_for_leasetime_on_stop="false" \
op monitor interval="30s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s" \
meta is-managed="true" target-role="Started"
primitive p_exportfs_root ocf:heartbeat:exportfs \
params fsid="0" directory="/export" \
options="rw,sync,crossmnt" \
clientspec="128.46.19.0/255.255.255.0" wait_for_leasetime_on_stop="false" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s" \
meta target-role="Started"
primitive p_exportfs_public ocf:heartbeat:exportfs \
params fsid="2" directory="/export/public" \
options="rw,sync,no_root_squash,mountpoint" \
clientspec="128.46.19.0/255.255.255.0" wait_for_leasetime_on_stop="false" \
op monitor interval="30s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s" \
meta is-managed="true" target-role="Started"
primitive p_fs_home ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/export/home" fstype="ext4" \
op monitor interval="10s" \
meta target-role="Started" is-managed="true"
primitive p_fs_public ocf:heartbeat:Filesystem \
params device="/dev/drbd1" directory="/export/public" fstype="ext4" \
op monitor interval="10s" \
meta target-role="Started"
primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
params ip="128.46.19.241" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive p_lsb_nfsserver lsb:nfs \
op monitor interval="30s" \
meta target-role="Started"
primitive p_ping ocf:pacemaker:ping \
params host_list="128.46.19.1" multiplier="1000" name="p_ping" \
op monitor interval="30" timeout="60"
group g_fs p_fs_home p_fs_public \
meta target-role="Started"
group g_nfs p_ip_nfs p_exportfs_home p_exportfs_public \
meta target-role="Started"
ms ms_drbdr0_nfs p_drbdr0_nfs \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" \
notify="true" target-role="Started" is- managed="true"
ms ms_drbdr1_nfs p_drbdr1_nfs \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" \
notify="true" target-role="Started" is- managed="true"
clone cl_ping p_ping
location g_fs_on_connected_node g_fs \
rule $id="g_fs_on_connected_node-rule" -inf: not_defined p_ping or p_ping lte 0
colocation c_filesystem_with_drbdr0master inf: g_fs ms_drbdr0_nfs:Master
colocation c_filesystem_with_drbdr1master inf: g_fs ms_drbdr1_nfs:Master
colocation c_rootexport_with_nfsserver inf: p_exportfs_root p_lsb_nfsserver
order o_drbdr0_before_filesystems inf: ms_drbdr0_nfs:promote g_fs:start
order o_drbdr1_before_filesystems inf: ms_drbdr1_nfs:promote g_fs:start
order o_filesystem_before_nfsserver inf: g_fs p_lsb_nfsserver
order o_nfsserver_before_rootexport inf: p_lsb_nfsserver p_exportfs_root
order o_rootexport_before_exports inf: p_exportfs_root g_nfs
property $id="cib-bootstrap-options" \
dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stop-all-resources="false" \
stonith-enabled="false" \
last-lrm-refresh="1360339179" \
no-quorum-policy="ignore" \
cluster-recheck-interval="2m"
rsc_defaults $id="rsc-options" \
resource-stickiness="200"
Chapter 9. Using The NFS Service
This section outlines how to use the highly available NFS service from an NFS client. It covers NFS clients
using NFS version 4.
9.1. Connecting With NFS 4
Important
Connecting to the NFS server with NFSv4 will not work unless you have configured an
NFSv4 virtual filesystem root.
NFSv4 clients must use the floating cluster IP address to connect to a highly available NFS service, rather than any of the cluster nodes' physical NIC addresses. NFS version 4 requires that you specify the NFS export path relative to the root of the virtual filesystem. Thus, to connect to the home export configured in this guide, you would use the following mount command (note the nfs4 filesystem type):
mount -t nfs4 128.46.19.241:/home /home
As with NFSv3, there are a multitude of mount options for NFSv4. For example, selecting larger-than-default maximum request sizes may be desirable:
mount -t nfs4 -o rsize=32768,wsize=32768 \
128.46.19.241:/home /home
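If the client should mount the export automatically at boot, an /etc/fstab entry along these lines can be used; the mount point /home matches the examples above and is an assumption you can change freely:
128.46.19.241:/home   /home   nfs4   rsize=32768,wsize=32768   0 0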
Chapter 10. Ensuring Smooth Cluster Failover
For exports used by NFSv4 clients, the NFS server hands out leases, which each client holds for the duration of file access and then relinquishes. If the NFS resource shuts down or migrates while one of its clients is holding a lease on it, the client never gets the chance to relinquish the lease; instead, the server considers the lease expired after a certain grace period. While any clients are still holding unexpired leases, the NFS server maintains an open file handle on the underlying filesystem, preventing it from being unmounted.

The exportfs resource agent handles this through the wait_for_leasetime_on_stop resource parameter: when set, it simply rides out the grace period before declaring the resource properly stopped. At that point, the cluster can proceed by unmounting the underlying filesystem (and subsequently bringing both the filesystem and the NFS export up on the other cluster node).

On some systems, this grace period is set to 90 seconds by default, which may cause a prolonged NFS lock-up for clients during an NFS service transition on the cluster. To allow for faster failover, you may decrease the grace period by issuing the following command on the cluster nodes (this example sets the grace period to 10 seconds):

echo 10 > /proc/fs/nfsd/nfsv4leasetime
Note
The nfsv4leasetime virtual file may be modified only when the NFS kernel server is
stopped, and any modification becomes active only after it restarts.
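A sketch of the whole sequence, assuming the NFS server is under Pacemaker control as configured in this guide; the value is not persistent across reboots, so repeat it (or script it) on each node that can host the server:
# crm resource stop p_lsb_nfsserver
# echo 10 > /proc/fs/nfsd/nfsv4leasetime
# crm resource start p_lsb_nfsserver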
Chapter 11. System Testing
Test: monitor filesystem through failover. From the client, execute the following while performing
a failover and back:
for i in `seq 1 2000`; do echo $i; ls /home; sleep 1; done
Test: write large file to NFS while performing failover.
dd if=/dev/zero of=/home/test.bin bs=1024 count=1000
Test: copy a CD image while failing over:
# cp /cd_image.iso /home
# md5sum /home/cd_image.iso
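A convenient way to trigger the failover (and the subsequent failback) during these tests is to put the currently active node into standby and then bring it back online; substitute whichever node currently holds the resources:
# crm node standby alice.mydomain.org
# crm node online alice.mydomain.org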
