Release 8.0
Copyright 1990 - 2012 EMC Corporation. All rights reserved. Published in the USA.
Published August, 2012
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the
EMC online support website.
CONTENTS
Revision History
Chapter 1
PREFACE
Chapter 2
Introduction
About the NetWorker product ......................................................................
NetWorker client .........................................................................................
NetWorker storage node..............................................................................
NetWorker server ........................................................................................
NetWorker Management Console server ......................................................
NetWorker datazone....................................................................................
NetWorker daemons....................................................................................
Supported storage devices..........................................................................
NetWorker cluster services ..........................................................................
Enabler codes .............................................................................................
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
REVISION HISTORY
This document has had three revisions: 01, 02, and 03.
PREFACE
As part of an effort to improve its product lines, EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not
be supported by all versions of the software or hardware currently in use. The product
release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not
function as described in this document.
This document was accurate at publication time. New versions of this document might be
released on the EMC online support website. Check the EMC online support website to
ensure that you are using the latest version of this document.
Purpose
This document describes how to install, update, and uninstall the NetWorker software in a
cluster environment.
Audience
This document is part of the NetWorker documentation set and is intended for use by
system administrators during the installation and setup of NetWorker software in a cluster
environment.
Related documentation
The following EMC publications provide additional information:
NOTICE is used to present information that is important or essential to software or
hardware operation.
Note: A note presents information that is important, but not hazard-related. Used in
tables.
Typographical conventions
EMC uses the following type style conventions in this document:
Normal
Bold
Italic
Courier
Used for:
System output, such as an error message or script
URLs, complete paths, filenames, prompts, and syntax when shown
outside of running text
Courier bold
Courier italic
<>
[]
{}
...
Technical support
For technical support, go to EMC online support and select Support.
On the Support page, you will see several options, including one to create a service
request. Note that to open a service request, you must have a valid support agreement.
Contact your EMC sales representative for details about obtaining a valid support
agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications. Send your opinions of this document to:
BSGdocumentation@emc.com
CHAPTER 5
Introduction
This chapter includes these sections:
NetWorker client
NetWorker server
The NetWorker software is distributed in these formats:
In a media kit that contains the software and electronic documentation for several
related NetWorker products.
As a downloadable archive file from the EMC Online Support website.
NetWorker client
The NetWorker client software communicates with the NetWorker server and provides
client-initiated backup and recovery functionality. The NetWorker client software is installed
on all machines that are backed up to the NetWorker server.
NetWorker storage node
A NetWorker storage node:
Off-loads most of the data movement involved in a backup or recovery operation from
the NetWorker server.
Improves performance.
Requires high I/O bandwidth to manage the transfer of data from local or network
clients to target devices.
NetWorker server
The NetWorker server provides services to back up and recover data for the NetWorker
client machines in a datazone. The NetWorker server can also act as a storage node and
control multiple remote storage nodes.
Index and media management operations are some of the primary processes of the
NetWorker server:
The client file index tracks the files that belong to a save set. There is one client file
index for each client.
Unlike the client file indexes, there is only one media database per server.
The client file indexes and the media database can grow prohibitively large over time,
which negatively impacts backup performance.
The NetWorker server schedules and queues all backup operations, tracks real-time
backup and restore activities, and handles all Console server communication.
This information is stored for a limited amount of time in the jobsdb database, which,
for real-time operations, has the most critical impact on backup server performance.
The data stored in the jobsdb database is not required for restore operations.
NetWorker datazone
A NetWorker datazone is a single NetWorker server and its client and storage node
machines.
NetWorker daemons
The NetWorker software requires processes on Windows or daemons on UNIX to run on the
system and facilitate NetWorker operations in the datazone.
Table 1 on page 15 lists the NetWorker daemons for each of the software components.
Table 1 NetWorker daemons
NetWorker packages
NetWorker daemons
NetWorker server
NetWorker client
nsrexecd
NetWorker Management
Console server
The nsrmmd process or daemon is present when one or more devices are enabled.
The nsrmmgd process or daemon is present on the NetWorker server when a library is
enabled.
The nsrlcpd process or daemon is present on a NetWorker server and storage nodes
with an attached library.
The nsrcpd process or daemon is present on the NetWorker server during a client push
software upgrade.
gstsnmptrapd is an optional process that is present on the Console server when SNMP
trap monitoring is configured for a Data Domain system.
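On UNIX, a quick way to see which of these daemons are currently running is ps; the sketch below is illustrative (the name pattern covers only a few of the daemons listed above):

```shell
# Count running NetWorker daemons; the bracketed first letter keeps
# grep from matching its own process entry.
running=$(ps -ef 2>/dev/null | grep -E '[n]srd|[n]srexecd|[n]srmmd|[n]srmmgd|[n]srlcpd' | wc -l)
echo "NetWorker daemons running: $running"
```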
Disk devices
The NetWorker 7.3 (and Later) Hardware Compatibility Guide provides the most up-to-date
list of supported devices.
Enabler codes
Enabler codes or licenses activate the functionality of the NetWorker software and are
generally sold separately. The NetWorker 8.0 License Guide provides more information.
Chapter 6
EMC AutoStart for Microsoft Windows Installation
Installation requirements
This section specifies the software and hardware required to install and configure the
NetWorker server or client software within an AutoStart cluster environment.
The following software must be installed on each node in the cluster:
Software requirements
Cluster server
This software must be installed on the cluster server:
The NetWorker 8.0 or later software. The EMC Information Protection Software
Compatibility Guide provides the most up-to-date information about software
requirements and supported Autostart versions for each supported operating system.
Dedicated shared disk to be used as the NetWorker storage disk (for the nsr folder)
connected to all the nodes within the cluster
Ensure that the most recent cluster patch for the operating system is installed.
Cluster client
This software must be installed on the private disk of each node in the cluster:
EMC AutoStart
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date information about software requirements and supported Autostart versions for
each supported operating system.
Hardware requirements
There are no hardware requirements for the cluster client. The following hardware
requirements must be met for server installation only:
Dedicated shared disk that is used as the NetWorker storage disk (for the /nsr
directory) is connected to all the nodes within the cluster.
Device with local affinity for the local bootstrap backup is connected to all the nodes
within the cluster.
Configuration options
The NetWorker Administration Guide provides information on how to configure:
The sample cluster configuration has two nodes, clus_phys1 (Node 1) and clus_phys2
(Node 2), each with a local disk, connected by a private network and a public network.
The NetWorker logical host, clus_log1, runs on Node 1; if Node 1 fails, clus_log1 fails
over to Node 2.
Required information
Example
s:\nsr
jupiter
Server netmask
255.255.255.0
galaxy
C:\Program Files\EMC\AutoStart\galaxy.
mars
saturn
For a new installation of the NetWorker software, these steps for upgrading do not apply.
"Install only the NetWorker client software in a cluster" on page 31 provides instructions
on installing the NetWorker software in an AutoStart cluster environment.
Task 4: Create the resource that will become the managed shared disk on page 25
Task 4: Create the resource that will become the managed shared disk
On one of the cluster nodes, create a folder to use later as the managed, shared disk. For
example, create the folder s:\nsr.
Do not share the folder at this point, or the installation will fail. If the AutoStart software is
already installed and a managed shared disk already exists, remove the share property
now.
AutoStart warning messages are generated when the nwinst.bat script runs. The
messages do not indicate that any AutoStart functionality is affected and can be safely
ignored.
This is an example of an AutoStart warning message:
Connecting to AutoStart domain autostar...Backbone warning on
primrose (pid 135) Wed Mar 31 01:52:34 2010 in
ISIS_MGT_INTERCL_MODULE .\cl_inter.c/intercl_accept(), line
1927 ID00005235 Intercl IO Queue NULL/IO_DEAD calling
resurrect. Process from=1/612 nd dest=1/1352. Backbone warning
on primrose (pid 1352) Wed Mar 31 01:52:34 2010 in
ISIS_MGT_INTERCL_MODULE .\cl_inter.c/intercl_accept(), line
1927 ID00005235 Intercl IO Queue NULL/IO_DEAD calling
resurrect. Process from=2/1868 and dest=1/1352.
b. Click OK.
4. Restrict the set of NetWorker servers that can back up a particular client. Edit the
%SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file and add the
NetWorker virtual host as well as each cluster node to the list of servers.
Consider the following:
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
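As a sketch, a servers file that follows these rules might look like the one below. The hostnames are illustrative, and the file is written to a scratch location rather than the real res directory:

```shell
# Illustrative servers file: the virtual host plus each cluster node,
# each as both short name and FQDN (all hostnames are assumptions).
servers_file=$(mktemp)
cat > "$servers_file" <<'EOF'
clus_vir1
clus_vir1.example.com
clus_phys1
clus_phys1.example.com
clus_phys2
clus_phys2.example.com
EOF
cat "$servers_file"
```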
If this test does not display the expected scheduled backups and index, create an
empty file named pathownerignore in the directory where the NetWorker savefs
command is installed. This allows a valid save set on a NetWorker client to be
scheduled for backup. For example:
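A minimal sketch of this step; the scratch directory stands in for the directory that actually contains the savefs command (an assumption about your install layout):

```shell
# Create an empty pathownerignore file next to (a stand-in for) savefs.
savefs_dir=$(mktemp -d)   # stand-in for the NetWorker bin directory
touch "$savefs_dir/pathownerignore"
ls "$savefs_dir"
```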
A NetWorker scheduled save might use a default rather than a specified client index
name. To override this default, run a manual save with the -c option:
save -c client_name
5. Restrict the set of NetWorker servers that can back up a particular client. Edit the
%SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file and add the
NetWorker virtual host as well as each cluster node to the list of servers.
Consider the following:
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
After the client configuration is complete:
The NetWorker cluster server uses the IP address of the NetWorker virtual host
regardless of which cluster node currently masters the NetWorker virtual server.
The NetWorker cluster server takes the identity of the NetWorker virtual server's
hostname regardless of which cluster node is currently running the NetWorker service.
The first time NetWorker software runs, it creates the client resource for the NetWorker
virtual host. Client resources must be created manually for any cluster node to be
backed up by the NetWorker virtual host.
6. Restart the NetWorker virtual server by taking it offline and then bringing it back
online.
7. Register the NetWorker software for permanent use. More information is available in
the NetWorker Licensing Guide.
Make sure the NetWorker client software is installed on every node to be backed up in the
cluster.
Ensure that the operating system is updated with the most recent cluster patch.
Install the NetWorker client software on the physical disk of each node in the cluster to
be backed up. More information is available in the NetWorker Licensing Guide.
If a physical Client is backed up to a NetWorker server outside the cluster, the name of
any virtual service that can run on the physical node must be added to the Remote
Access list of the physical Client resource.
2. Add the virtual hostnames to the hosts file on each cluster node (located in
%SystemRoot%\system32\drivers\etc).
3. Make each virtual client within the cluster a client of the NetWorker server. For each
virtual client in the cluster:
a. Create a new client.
b. For the Name attribute, type the name of the virtual client.
c. For the Remote Access attribute, add entries for each physical client within the
cluster. For example:
root@clus_phys1
5. On each node in the cluster, run the cluster configuration program lc_config.
a. Log in as administrator on each node.
b. Run the lc_config program:
lc_config
When prompted for the shared nsr directory, you can leave the field blank.
c. Type y to confirm the information.
6. For each physical node in the cluster, ensure that the AutoStart Console user account,
NT AUTHORITY\SYSTEM, is included in the valid user list with administrator access.
7. Uninstall the NetWorker Server software from each node. The NetWorker Installation
Guide provides instructions.
If the NetWorker software will be reinstalled in the same location, ensure that the required
files are deleted from the /bin subdirectory: NetWorker.clustersvr, lcmap.bat, and
nwinst.bat.
This is the recommended device configuration for a NetWorker virtual server, release 6.0
or later, that is running in a two-node AutoStart cluster:
Each cluster node must be defined as a storage node for the NetWorker virtual server.
The nsrserverhost must always be listed last in each client's storage node list.
NetWorker software does not allow the configuration of a storage node on a cluster node
that is running the NetWorker server binaries. Consequently, before configuring a cluster
node as a storage node, you must move the NetWorker virtual server to another node in
the cluster.
The error might be caused by the NetWorker Remote Exec service not running on the
storage node. If the service is not running, do the following to restart the service:
1. Select Services from the Control Panel on the storage node.
2. Restart the service.
This error might occur because the NetWorker Client resources for each of the physical
nodes in the cluster are missing. In such cases, perform the following to correct the error:
1. Create a Client resource for each physical node in the cluster that is allowed to own
the virtual cluster client.
2. Rerun the backup.
CHAPTER 7
EMC AutoStart for UNIX Installation
Installation requirements
This section specifies the software and hardware required to install and configure the
NetWorker server or client software within an AutoStart cluster environment.
Software requirements
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date information about software requirements and supported Autostart versions for
each supported operating system.
Hardware requirements
The following hardware requirements must be met for server installation only:
Dedicated shared disk that is used as the NetWorker storage disk (for the /nsr
directory) is connected to all the nodes within the cluster.
Device with local affinity for the local bootstrap backup is connected to all the nodes
within the cluster.
Configuration options
The EMC NetWorker 8.0 Administration Guide provides information on how to configure:
The sample cluster configuration has two nodes, Node A (clus_phys1) and Node B
(clus_phys2), each with a local disk, plus shared disk 3, all connected to the public
network. If Node A fails, clus_vir1 fails over to Node B.
Example
clus_vir1
192.168.1.10
/nsr_shared_mnt_pt
AIX: /dev/lv1
HP-UX: /dev/dsk/c5t2d0, /dev/vg03/1vol1 (for
Logical Volume Manager)
Linux: /dev/sdc1
Solaris: /dev/dsk/c1t3d0s0
Example
AIX: /usr/bin/nw_ux.lc
HP-UX: /opt/networker/bin/nw_ux.lc
Linux: /usr/sbin/nsr/nw_ux.lc
Solaris: /usr/sbin/nw_ux.lc
AIX: /nsr/res/hostids
HP-UX: /nsr/res/hostids
Linux: /nsr/res/hostids
Solaris: /nsr/res/hostids
AIX: /usr/bin/nw_ux.lc
HP-UX: /opt/networker/bin/nw_ux.lc
Linux: /usr/sbin/nsr/nw_ux.lc
Solaris: /usr/sbin/nw_ux.lc
Linux: nmc_shared_mnt_pt
Solaris: nmc_shared_mnt_pt
For a new installation of the NetWorker software, these steps for updating do not apply.
"Install only the NetWorker client software in a cluster" on page 48 provides instructions
on installing the NetWorker software in an AutoStart cluster.
Table 4 on page 40 lists the environment variables that you set from the Bourne shell:
Table 4 NetWorker server high availability environment variables
Operating
System
Command
Variable description
AIX
FT_DIR=/usr/lpp/LGTOaam51
FT_CONSOLE_DIR=$FT_DIR/console
FT_DOMAIN=domain_name
export FT_DIR FT_DOMAIN FT_CONSOLE_DIR
HP-UX
FT_DIR=/opt/LGTOaamxx
FT_CONSOLE_DIR=$FT_DIR/console
FT_DOMAIN=domain_name
export FT_DIR FT_DOMAIN FT_CONSOLE_DIR
Linux
FT_DIR=/opt/LGTOaamxx
FT_CONSOLE_DIR=$FT_DIR/console
FT_DOMAIN=domain_name
export FT_DIR FT_DOMAIN FT_CONSOLE_DIR
Solaris
FT_DIR=/opt/LGTOaamxx
FT_CONSOLE_DIR=$FT_DIR/console
FT_DOMAIN=domain_name
export FT_DIR FT_DOMAIN FT_CONSOLE_DIR
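For example, the Solaris row of Table 4 might be entered as follows, assuming AutoStart 5.1 (so xx becomes 51) and a placeholder domain name:

```shell
# Solaris example from Table 4; the domain name is a placeholder,
# and the AutoStart version suffix (51) is an assumption.
FT_DIR=/opt/LGTOaam51
FT_CONSOLE_DIR=$FT_DIR/console
FT_DOMAIN=my_autostart_domain
export FT_DIR FT_DOMAIN FT_CONSOLE_DIR
echo "$FT_CONSOLE_DIR"
```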
4. Perform the following from each node in the cluster that will run the NetWorker server
process:
a. Run the cluster configuration script, networker.cluster, from the appropriate
operating system directory. Table 5 on page 40 provides details:
Table 5 Operating system directory for networker.cluster script
Operating system
Directory
AIX
/usr/bin/
HP-UX
/opt/networker/bin/
Linux
/usr/sbin/
Solaris
/usr/sbin/
Any configuration errors can be undone by running networker.cluster with the -r option.
5. From one node in the cluster:
a. Log in as administrator.
b. Customize the necessary file from the appropriate operating system directory.
Table 6 on page 41 provides details. This file can be used to create the NetWorker
resource group and its dependent objects.
The file contains multiple instances of the NW Customize comment. Ensure that all of
these entries are replaced with the appropriate cluster configuration values.
Table 6 Operating system directory for cluster configuration files
Operating system
Directory
AIX
/usr/bin/nw_ux.lc.imp
HP-UX
/opt/networker/bin/nw_ux.lc.aam5.imp
Linux
/opt/lgtonmc/bin/nw_ux.lc.aam5.imp
Solaris
/opt/LGTOnmc/bin/nw_ux.lc.aam5.imp
c. Follow the instructions in the comments at the beginning of the file to customize
the default values listed in Table 7 on page 41 based on the cluster configuration.
Filename
Environmental variable
Value
AIX
/usr/bin/nw_ux.lc.imp
192.168.1.10
clus_phys1, clus_phys2
/nsr_shared_mnt_pt
/dev/1v1
HP-UX
/opt/networker/bin/nw_ux.lc.aa
m5.imp
192.168.1.10
clus_phys1, clus_phys2
/dev/vg03/lvol1, /vg_nsr
hfs
Linux
192.168.1.10
clus_phys1, clus_phys2
/nsr_shared_mnt_pt
/dev/dsk/c1t3d0s0
Solaris
192.168.1.10
clus_phys1, clus_phys2
/nsr_shared_mnt_pt
/dev/dsk/c1t3d0s0
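One way to apply such a customization is sed on a scratch copy of the .imp file; the attribute name and placeholder value below are illustrative, not the file's real contents:

```shell
# Replace a placeholder address with the virtual host's IP (sample value
# from Table 7). The line format here is an assumption for illustration.
imp=$(mktemp)
printf 'IPAddress = 0.0.0.0   # NW Customize\n' > "$imp"
sed -i.bak 's/0\.0\.0\.0/192.168.1.10/' "$imp"
cat "$imp"
```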
Command
AIX
$FT_DIR/bin/ftcli -c import /usr/bin/nw_ux.lc.aam5.imp
HP-UX
Linux
Solaris
e. Use the AutoStart Management Console to verify that the NetWorker resource
group was imported correctly.
f. Run the appropriate script from Table 9 on page 42.
Table 9 NetWorker server scripts
Operating System
Script
AIX
/usr/bin/nwinst.sh
HP-UX
/opt/networker/bin/nwinst.sh
Linux
/usr/bin/nwinst.sh
Solaris
/usr/sbin/nwinst.sh
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
2. On one node in the cluster, start the NetWorker service by using the cluster
management software:
a. Use the AutoStart Management Console to bring the NetWorker Resource Group
online.
b. Edit or create the /nsr/res/servers file:
Add the set of NetWorker servers, one per line, that require access to this client.
For each virtual NetWorker server, add an entry for each physical host and the
virtual NetWorker server. For example:
clus_vir1
clus_phys1
clus_phys2
3. If required, grant access to each NetWorker client that is outside of the cluster:
a. Shut down the NetWorker processes and verify that all NetWorker services have
stopped.
b. Edit or create the /nsr/res/servers file:
Add the set of NetWorker servers, one per line, that require access to this client.
For each virtual NetWorker server, add an entry for each physical host and the
virtual NetWorker server. For example:
clus_vir1
clus_phys1
clus_phys2
b. Click OK.
Linux
/nsr->/nsr.NetWorkerBackup.local
/nsr.NetWorkerBackup.local->/var/nsr
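The Linux link chain shown above can be sketched in a scratch directory:

```shell
# Build /nsr -> /nsr.NetWorkerBackup.local -> /var/nsr, rooted in a
# temporary directory so nothing real is touched.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/nsr"
ln -s "$tmp/var/nsr" "$tmp/nsr.NetWorkerBackup.local"
ln -s "$tmp/nsr.NetWorkerBackup.local" "$tmp/nsr"
readlink "$tmp/nsr"
```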
In order for their save sets to restart after a virtual client or NetWorker server failover,
savegroups must have the Autorestart attribute enabled and the Manual Restart
option disabled.
2. Make each physical client within the cluster a client of the NetWorker virtual server. For
each physical client in the cluster:
a. Create a new NetWorker client.
b. For the Name attribute, type the name of the physical client.
3. Make each virtual client within the cluster a client of the NetWorker virtual server. For
each virtual client in the cluster:
a. Create a new NetWorker client.
b. For the Name attribute, type the name of the virtual client.
c. For the Remote Access attribute, add entries for each physical client within the
cluster. For example:
root@clus_phys1
5. Run a test probe to verify that the Client and Group resources have been properly
configured.
Type this command on the node on which the NetWorker server resides:
savegrp -pv -c client_name group_name
5. Restart the server by taking the NetWorker virtual server offline and then putting it
back online.
6. From the NetWorker Administration window, note the host ID number for the
appropriate cluster license.
7. Register the NetWorker software.
A highly available NetWorker Console server is only supported on the Linux and Solaris
platforms.
3. Install the NetWorker Console server software on each node in the cluster.
Linux: lgtonmc
Solaris: LGTOnmc
The NetWorker Installation Guide provides detailed installation instructions.
where:
xx is set to 5.0 for AutoStart version 5.0, or to 5.1 for AutoStart version 5.1
domain_name is set to the AutoStart domain name
4. From each node in the cluster that will run the NetWorker Console server process:
a. Run the appropriate cluster configuration script as listed in Table 10 on page 47:
Table 10 Cluster configuration scripts
Under this operating system
Linux
/opt/lgtonmc/bin/gst_ha.cluster
Solaris
/opt/LGTOnmc/bin/gst_ha.cluster
Any changes to the configuration can be undone by running gst_ha.cluster with the -r
option.
5. From one node in the cluster, customize the appropriate file listed in Table 11 on
page 47.
Table 11 Cluster file customization
Under this operating system
Linux
/opt/lgtonmc/bin/gst_ha_ux.aam5.imp
Solaris
/opt/LGTOnmc/bin/gst_ha_ux.aam5.imp
This file is used to create the NetWorker Console resource group and its dependent
objects in one step.
In the gst_ha_ux.aam5.imp file, there are multiple instances of the NW Customize
comment. Ensure that all entries are replaced with the appropriate cluster
configuration values.
Follow the instructions listed in the comments at the beginning of the
gst_ha_ux.aam5.imp file to customize these NetWorker Console default values based
on the cluster configuration:
Virtual host's IP address: 192.168.1.10
Physical hostnames: clus_phys1, clus_phys2
Shared disk file system: /nmc_shared_mnt_pt
Device name: /dev/dsk/c1t3d0s0
Table 3 on page 37 lists the sample values.
6. Type the appropriate command from Table 12 on page 48. The NetWorker Console
resource group is automatically created.
Table 12 Cluster configuration scripts
Under this operating system
Linux
Solaris
7. Verify that the NetWorker Console resource group was imported by using the FullTime
AutoStart Console.
Install the NetWorker client software on every node in the cluster to be backed up.
AIX
/usr/bin/
HP-UX
/opt/networker/bin/
Linux
/usr/sbin/
Solaris
/usr/sbin/
Any configuration errors can be undone by running networker.cluster with the -r option.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To define the list of trusted NetWorker servers, perform the following steps on each node
in the cluster:
1. Shut down the NetWorker processes and verify that all NetWorker services have
stopped.
2. Edit or create the /nsr/res/servers file:
a. Add the set of NetWorker servers, one per line, that require access to this client.
b. For each virtual NetWorker server, add an entry for each physical host and the
virtual NetWorker server. For example:
clus_vir1
clus_phys1
clus_phys2
3. Check the NetWorker boot-time startup file to see whether nsrexecd is being run with
the -s option.
If the -s option exists, remove all occurrences of the following:
-s server_name
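A sketch of removing the -s occurrences with sed, run against a scratch file rather than the real boot-time startup script:

```shell
# Strip every " -s server_name" pair from a sample startup line.
f=$(mktemp)
printf 'nsrexecd -s old_server -s other_server\n' > "$f"
sed -i.bak 's/ -s [^ ]*//g' "$f"
cat "$f"
```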
If a physical Client is backed up to a NetWorker server outside the cluster, the name of
any virtual service that can run on the physical node must be added to the Remote
Access list of the physical Client property sheet.
2. Make each virtual client within the cluster a client of the NetWorker virtual server. For
each virtual client in the cluster:
a. Create a new client.
b. For the Name attribute, type the name of the virtual client.
c. For the Remote Access attribute, add entries for each physical client within the
cluster. For example:
root@clus_phys1
AIX
/usr/bin/networker.cluster -r
HP-UX
/opt/networker/bin/networker.cluster -r
Linux
/usr/sbin/networker.cluster -r
Solaris
/usr/sbin/networker.cluster -r
AIX version
To uninstall individual NetWorker software packages or all of the NetWorker packages at
the same time, use the SMIT utility:
1. Log in as root on the computer where the software is being removed.
2. Type this command to remove the NetWorker software:
smitty remove
Client software: LGTOnw.clnt.rte
Storage node: LGTOnw.node.rte
Server: LGTOnw.serv.rte
Man pages: LGTOnw.man.rte
License manager: LGTOnw.licm.rte
HP-UX version
To uninstall individual NetWorker software packages or all of the NetWorker packages
simultaneously:
Linux version
To uninstall a single NetWorker software package or all of the NetWorker packages
simultaneously, type:
rpm -e lgtoman-x.x.x lgtolicm-x.x.x lgtoserv-x.x.x lgtonode-x.x.x
lgtoclnt-x.x.x
Solaris version
To uninstall a single NetWorker software package or all of the NetWorker packages
simultaneously, type:
pkgrm LGTOman LGTOserv LGTOnode LGTOnmc LGTOlicm LGTOclnt
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
The IPOverride attribute does not add to the normal list of virtual client-owned paths, but
completely overrides them. In the previous example, if the virtual client also owns the
/share/web filesystem, set this path:
IPOverride=135.69.103.149=/dev/rdsk/c1t3d0s1, /share/web
CHAPTER 8
HACMP or PowerHA SystemMirror for AIX
Installation
Installation requirements
This section specifies the software and hardware required to install and configure the
NetWorker server or client software within an HACMP/PowerHA for AIX environment.
Software requirements
The EMC NetWorker Software Compatibility Guide on the EMC Online Support Site
provides information on the supported AIX and HACMP/PowerHA versions.
Hardware requirements
The following hardware requirements must be met:
A dedicated shared volume group and file system used as the NetWorker storage area
(for the /nsr directory) must be connected to all the nodes within the cluster (for
server installation only).
A device with local affinity for the local bootstrap backup connected to all the nodes
within the cluster (for server installation only).
For clusters configured with IP address takeover (IPAT), you must connect to a
computer through its boot address if a resource group is not defined. Service
addresses are associated with a resource group, not with physical nodes.
The output of the hostname command on a machine must correspond to an IP
address that can be pinged.
The machine hostname must also be set to the name equivalent to the address
that is used by the physical client's dedicated NIC. Configure this NIC as the
primary network adapter, for example, en0.
An extra NIC outside of the control of HACMP/PowerHA for AIX is not required to
enable a highly available NetWorker server.
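The hostname requirement above can be checked with a short sketch (GNU/Linux ping syntax; on a host without network access the check simply reports failure):

```shell
# Verify that this machine's hostname answers a ping.
h=$(hostname)
if ping -c 1 "$h" > /dev/null 2>&1; then
  msg="hostname $h answered ping"
else
  msg="hostname $h did not answer ping"
fi
echo "$msg"
```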
Configuration options
The EMC NetWorker 8.0 Administration Guide provides details about the following
optional configurations:
Node name: The HACMP/PowerHA for AIX defined name for a physical node.
Boot address: The address used by a node when it boots up, but before
HACMP/PowerHA for AIX starts.
Virtual client: The client associated with a highly available resource group. The file
system defined in a resource group belongs to a virtual client. The virtual client uses
the service address.
The HACMP/PowerHA for AIX resource group must contain an IP service label to be
considered a NetWorker virtual client.
Physical client: The client associated with a physical node. For example, the / and
/usr file systems belong to the physical client.
Physical host address (physical hostname): The address used by the physical client.
For HACMP for AIX 4.5, this is equivalent to a persistent IP address.
Example
clus_vir1, 191.168.1.10
191.168.1.20
191.168.1.30
/nsr_shared_mnt_pt
/usr/bin/nw_hacmp.lc
/usr/bin/nw_powerha.lc
/nsr/res/hostids
Installation requirements
[Figure 3: Sample cluster configuration with private network. Node 1 (node name: left; physical host: node_1) and Node 2 (node name: right; physical host: node_2), each with a local disk, connected by a private network and a public network, with shared disks attached to both nodes.]
Any configuration can be undone by running the networker.cluster script with the -r option.
4. Verify that these values are set:
NSR_SERVERHOST = virtual hostname (clus_vir1)
NSR_SHARED_DISK_DIR = shared nsr mount directory
(/nsr_shared_mnt_pt)
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
c. Check the NetWorker boot-time startup file to see whether nsrexecd is being run
with the -s option. If the -s option exists, remove all occurrences of this in the file:
-s server_name
2. On one node in the cluster, start the NetWorker service by using the cluster
management software:
a. Use the HACMP/PowerHA for AIX software to bring the NetWorker resource group
online.
b. Edit or create the /nsr/res/servers file:
Add the set of NetWorker servers, one per line, that require access to the client.
For each virtual NetWorker server, add an entry for each physical host and the
virtual NetWorker server. For example:
clus_vir1
clus_phys1
clus_phys2
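The servers file above can be generated with a short loop; a sketch using the example hostnames from this section (on a real client, write to /nsr/res/servers; /tmp is used here so the sketch is safe to run):

```shell
# Build a servers file granting access to the virtual server and both
# physical hosts. clus_vir1/clus_phys1/clus_phys2 are the example names
# from the text; substitute your own cluster hostnames.
SERVERS_FILE=/tmp/servers.example   # /nsr/res/servers on a real node
for h in clus_vir1 clus_phys1 clus_phys2; do
  echo "$h"
done > "$SERVERS_FILE"
cat "$SERVERS_FILE"
```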
3. If required, grant access to each NetWorker client that is outside of the cluster:
a. Shut down the NetWorker processes and verify that all NetWorker daemons have
stopped:
nsr_shutdown
ps -ef | grep nsr
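The verification above can be wrapped in a short check; a sketch (the bracket in the grep pattern keeps grep from matching its own entry in the process list):

```shell
# After nsr_shutdown, confirm that no NetWorker daemons remain.
if ps -ef | grep '[n]sr' > /tmp/nsr_procs.txt; then
  echo "NetWorker daemons still running:"
  cat /tmp/nsr_procs.txt
else
  echo "all NetWorker daemons stopped"
fi
```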
Click OK.
/nsr->/nsr.NetWorker.local
/nsr.NetWorker.local->/var/nsr
In order for their save sets to restart after a virtual client or NetWorker server failover,
savegroups must have the Autorestart attribute enabled and the Manual Restart
option disabled.
2. Make each physical client within the cluster a client of the NetWorker server. For each
physical client in the cluster:
a. Create a new NetWorker client.
b. Type the name of the physical client for the Name attribute.
c. Add the boot adaptor name in the Aliases list.
3. Make each virtual client within the cluster a client of the virtual NetWorker server. For
each virtual client in the cluster:
a. Create a new client.
b. Type the name of the service address for the Name attribute.
c. For the Remote Access attribute, the user must be the root user for the boot
address.
d. Add entries for each service, boot, and physical address defined within the cluster.
For example:
root@clus_phys1
4. Run a test probe to verify that the Client and Group resources have been properly
configured.
On the node on which the NetWorker server resides, type:
savegrp -pv -c client_name group_name
5. If the test probe does not display the scheduled backups and index, see Track
scheduled saves on page 68.
For example:
12345678:87654321
5. Restart the server by taking the NetWorker virtual server offline and then putting it
back online.
6. From the NetWorker Administration window, note the host ID number for the
appropriate cluster license.
7. Register the NetWorker software.
Ensure that the NetWorker client software is installed on every node in the cluster that
needs to be backed up.
If the configuration is not correct, it can be undone by running the
networker.cluster script with the -r option. For an example of the script, see Exit the SMIT program on
page 67.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To define the list of trusted NetWorker servers, perform these steps on each node in the
cluster:
1. Shut down the NetWorker processes and verify that all NetWorker daemons have
stopped.
2. Edit or create the /nsr/res/servers file:
a. Add the set of NetWorker servers, one per line, that require access to this client.
b. For each NetWorker virtual server, add an entry for each physical host and the
NetWorker virtual server. For example:
clus_vir1
clus_phys1
clus_phys2
3. Check the NetWorker boot-time startup file to see whether nsrexecd is being run with
the -s option. If the -s option exists, remove all occurrences of this in the file:
-s server_name
If a physical Client is backed up to a NetWorker server outside the cluster, the name of
any virtual service that can run on the physical node must be added to the Remote
Access list of the physical Client property sheet.
2. Make each virtual client within the cluster a client of the NetWorker server.
For each virtual client in the cluster:
a. Create a new client.
b. For the Name attribute, type the name of the service address.
c. For the Remote Access attribute, add entries for each service, boot, and physical
address defined within the cluster. For example:
root@clus_phys1
A list of NetWorker services appears and prompts you to continue with the
nsr_shutdown command.
3. Run this command:
/usr/bin/networker.cluster -r
4. Use SMIT to uninstall individual NetWorker software packages or all of the NetWorker
packages at the same time.
5. To uninstall the NetWorker software, log in as root on the computer where the software
is being removed.
6. To remove the NetWorker software, type:
smitty remove
Client software: LGTOnw.clnt.rte
Storage node: LGTOnw.node.rte
Server: LGTOnw.serv.rte
Man pages: LGTOnw.man.rte
License Manager: LGTOnw.licm.rte
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
In this example, the logical volume ha_rv, which belongs to the volume group havg, will
be used as a raw device:
chlv -traw ha_rv
lsvg -l havg
havg:
The NetWorker software does not support raw volumes used for concurrent access.
If the test probe does not display all the scheduled save sets:
Ensure that the save sets defined for the client are owned by that client. If necessary,
redistribute the client save sets to the appropriate Client resources.
Misconfiguration of the Cluster resources might cause scheduled save sets to be
dropped from the backup. The EMC NetWorker 8.0 Administration Guide provides
more information.
Type this command to override the default path ownership rules:
touch networker_bin_dir/pathownerignore
If the pathownerignore file was created, check that the NetWorker scheduled save
uses the correct client index. If the wrong index is used, the save sets can be forced to go
to the correct index:
1. From the NetWorker Administration window, select a client and edit its properties.
2. For the Backup Command attribute, type the name of a backup script that contains
save -c client_name.
The NetWorker administration guide provides details on the Backup Command
attribute.
CHAPTER 9
MC/ServiceGuard and MC/LockManager
Installation
Installation requirements
This section provides the software and hardware requirements to install the NetWorker
software within an HP-UX MC/ServiceGuard or HP-UX MC/LockManager cluster
environment:
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date information about software requirements.
NetWorker 8.0 and later does not support an MC/ServiceGuard NetWorker server running
on the PA-RISC architecture.
Hardware requirements
These hardware requirements must be met:
A dedicated shared disk used as the NetWorker storage area (for the /nsr directory)
must be connected to all the nodes within the cluster (for server installation only).
A device with local affinity for the local bootstrap backup (server installation only).
[Figure: Sample cluster configuration. Node A (clus_phys1) and Node B (clus_phys2), each with a local disk, connected by a private network and attached to shared disks.]
Example
clus_vir1,192.168.109.41
/vg011
/dev/vg01/lvol1
/etc/cmcluster/networker/legato.control
/nsr/res/hostids
/opt/networker/bin/legato.monitor
/etc/cmcluster/.nsr_cluster
Different operating systems use different terms for the same cluster concepts; HP-UX
MC/ServiceGuard and HP-UX MC/LockManager have their own equivalents.
Unzip and extract the NetWorker package - for package download and extraction
details.
Consider the installation directory - for details about installing the
NetWorker software to a non-default location and disk space requirements.
Task 5: Make the cluster nodes a client of the NetWorker virtual server on page 80
Ensure that the /etc/hosts file on each cluster node contains the name of the virtual
host. The virtual hostname can be published in the Domain Name System (DNS) or
Network Information Services (NIS).
The cluster services do not automatically start after a reboot if
AUTOSTART_CMCLD is set to 0. To configure the cluster services to start
automatically after a reboot, set AUTOSTART_CMCLD=1 in the
/etc/rc.config.d/cmcluster file.
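The setting can be changed with sed; a sketch against a temporary copy rather than the live /etc/rc.config.d/cmcluster file:

```shell
# Enable automatic cluster startup by flipping AUTOSTART_CMCLD to 1.
# A scratch copy is used here so the sketch is safe to run anywhere.
printf 'AUTOSTART_CMCLD=0\n' > /tmp/cmcluster.example
sed 's/^AUTOSTART_CMCLD=.*/AUTOSTART_CMCLD=1/' /tmp/cmcluster.example > /tmp/cmcluster.new
cat /tmp/cmcluster.new
```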
When the NetWorker software is configured for the cluster on a node, the cluster
configuration creates:
A symbolic link, /nsr.NetWorker.local, that points to the directory that contains the
NetWorker configuration files.
A link, /nsr, that points to /nsr.NetWorker.local.
For example, if the local NetWorker directory was created in /var/nsr, each client
member has the following links after the installation:
/nsr->/nsr.NetWorker.local
/nsr.NetWorker.local->/var/nsr
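The link layout above can be reproduced and inspected; a sketch that builds it in a scratch directory (the real links live at the root, e.g. /nsr -> /nsr.NetWorker.local):

```shell
# Recreate the /nsr indirection in /tmp so it can be examined safely.
rm -rf /tmp/nsrdemo
mkdir -p /tmp/nsrdemo/var/nsr        # stands in for the local /var/nsr
cd /tmp/nsrdemo
ln -s var/nsr nsr.NetWorker.local    # points at the local NetWorker directory
ln -s nsr.NetWorker.local nsr        # /nsr points at the indirection link
ls -l nsr nsr.NetWorker.local
```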
Support of multiple IPs in one package. Refer to the Support for multiple IP addresses
for one resource group in the NetWorker 8.0 Administration Guide for more
information.
Support for the lcmap caching mechanism. Refer to the NetWorker cluster
performance issues section in the NetWorker 8.0 Administration Guide for
information about adjusting the cluster cache timeout value.
Does not require the creation and configuration of the NetWorker.clucheck and
.nsr_cluster files.
If desired, the previous method of configuring the NetWorker software using the
NetWorker.clucheck and .nsr_cluster files can still be used.
IPv4 addresses can also be enclosed in square brackets, but it is not necessary.
b. Ensure that everyone has read permissions for the .nsr_cluster
file.
c. Additional paths, preceded by colons, can be added as required. The following is
an example of a typical .nsr_cluster file:
networker:192.168.109.41:/vg011
oracle:192.168.109.10:/vg021:/ora_data1:/ora_data2
If an HP-UX MC/ServiceGuard package does not contain a disk resource, it does
not require an entry in the .nsr_cluster file. However, if this diskless package is
online and the only package on that cluster node, there might be cmgetconf
messages generated in the /var/adm message file during the backup.
To avoid these messages, allocate a file system that is mounted to a mount point
and add this mount point along with the package name and IP address into the
.nsr_cluster file in the format shown above. This file system is not backed up, but
should be mountable on each cluster node that the diskless package might fail
over to.
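Entries in the .nsr_cluster format described above (pkgname:ip:path, with optional extra paths) can be checked with awk; a sketch using the example entries from this section:

```shell
# Validate .nsr_cluster entries: each line needs at least a package name,
# an IP address, and one owned path (three colon-separated fields).
cat > /tmp/nsr_cluster.example <<'EOF'
networker:192.168.109.41:/vg011
oracle:192.168.109.10:/vg021:/ora_data1:/ora_data2
EOF
awk -F: 'NF < 3 { print "bad entry on line " NR; bad = 1 }
         END { print (bad ? "check failed" : "entries OK") }' /tmp/nsr_cluster.example
```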
Use the Logical Volume Manager (LVM) to define the logical volumes or volume groups the
NetWorker software will use. The HP-UX documentation provides more information.
Legacy Mode
To define and configure the NetWorker software as highly available in legacy mode:
1. Run the networker.cluster script (located in the /opt/networker/bin directory) on one
of the nodes where the NetWorker software is installed.
2. Follow the prompts to configure the NetWorker package in Legacy mode:
a. Do you wish to continue? [Yes]? Press Enter.
b. Enter directory where local NetWorker database is installed [/nsr]? Press Enter.
c. Do you wish to use the updated NetWorker integration framework? Yes or No [Yes]?
If the previous method to configure the NetWorker software using the
NetWorker.clucheck and .nsr_cluster files was performed, enter No at this
prompt.
If the updated method will be used to configure the NetWorker software (Steps
1 and 2 were not performed), enter Yes at this prompt.
d. Select the type of package for the NetWorker Server(1-modular or 2-legacy) [2]?
Press Enter.
The networker.cluster script creates a pkg.conf file and a legato.control file on the
physical host on which it is running.
3. Copy the legato.control file to all the nodes in the cluster and ensure that execute
permissions are set.
4. After the legato.control file is copied to all the nodes in the cluster, run the
networker.cluster script on the remaining nodes in the cluster. When prompted to
generate a legato.control file, type n.
5. Check each node to determine if the nsrexecd daemon is running:
ps -ef | grep nsrexecd
6. Check each node to determine if the legato.control file was successfully installed on
each node:
cd /etc/cmcluster/networker
more legato.control
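Step 3 above can be scripted; a sketch that prints the copy command for each remaining node rather than running rcp directly (clus_phys2 is a hypothetical node name used for illustration):

```shell
# Generate the legato.control copy commands for the other cluster nodes.
# Review /tmp/copy_cmds.sh, then run it on the node that owns the file.
for node in clus_phys2; do
  echo "rcp -p /etc/cmcluster/networker/legato.control ${node}:/etc/cmcluster/networker/"
done > /tmp/copy_cmds.sh
cat /tmp/copy_cmds.sh
```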
Modular mode
To define and configure the NetWorker software as highly available in modular mode:
1. Run the networker.cluster script (located in the /opt/networker/bin directory) on one
of the nodes where the NetWorker software is installed.
2. Follow the prompts to configure the NetWorker package in Modular mode:
a. Do you wish to continue? [Yes]? Press Enter.
b. Enter directory where local NetWorker database is installed [/nsr]? Press Enter.
c. Do you wish to use the updated NetWorker integration framework? Yes or No [Yes]?
If the previous method to configure the NetWorker software using the
NetWorker.clucheck and .nsr_cluster files was performed, enter No at this
prompt.
If the updated method will be used to configure the NetWorker software (Steps
1 and 2 were not performed), enter Yes at this prompt.
d. Select the type of package for the NetWorker Server(1-modular or 2-legacy) [1]?
Press Enter.
e. Do you wish to generate a new package configuration file for NetWorker package
[Yes]? Press Enter.
f. Type the IP address to use to monitor this package? 10.5.163.89 (the IP address of
the specific cluster.)
g. Enter the IP subnet to monitor for this package? 10.5.163.0 (the subnet of the
above IP.)
h. Select the volume management method to be used for the disk resource (1-LVM or
2-VxVM) [1]? Press Enter.
i. Check each node to determine if the nsrexecd daemon is running:
ps -ef | grep nsrexecd
3. Check each node to determine if the legato.control file was successfully installed on
each node:
cd /etc/cmcluster/networker
more legato.control
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
The nsrexecd daemon named in the file should appear only as:
nsrexecd
3. On one node in the cluster, start the NetWorker service by using the cluster
management software.
4. Repeat steps 2-3 for the global /nsr/res/servers file located on the shared drive on
the active node of the cluster.
Task 5: Make the cluster nodes a client of the NetWorker virtual server
Make all cluster nodes a client of the NetWorker server:
1. While still connected to the NetWorker server in NMC:
2. Configure the Group resource to be used to backup the cluster clients, as required.
Ensure that the Autorestart attribute is enabled and the Manual Restart option is
disabled. This ensures that save set backups restart if a virtual client or
NetWorker server failover occurs during a backup. The NetWorker 8.0 Administration Guide
provides details on defining and configuring groups.
3. Create a new NetWorker client for each physical node within the cluster:
a. In the Administration window, click the Configuration button.
b. Create a new NetWorker client.
c. In the Name attribute, type the name of the physical client.
4. Modify the virtual cluster client resource automatically created for the virtual
NetWorker server:
a. In the Save set field, specify the shared disk resource.
b. In the Remote Access field under the Globals (1 of 2) tab, specify the root user
account for each physical node within the cluster.
For example:
root@<clus_phys1>
root@<clus_phys2>
b. If the test probe does not display the scheduled backups and index, see Track
scheduled saves on page 89 for more information.
4. Log in to the system that is running the NetWorker virtual server and create a file
named /nsr/res/hostids that contains the host IDs of all the cluster nodes.
Use the following syntax:
hostid1:hostid2:hostid3:...
For example:
12345678:87654321
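The colon-separated value above can be assembled with tr; a sketch using the example IDs from the text (on a real cluster, collect each node's `hostid` output instead):

```shell
# Join the per-node host IDs with colons, as /nsr/res/hostids expects.
# 12345678 and 87654321 are the example values from this section.
echo "12345678 87654321" | tr ' ' ':' > /tmp/hostids.example
cat /tmp/hostids.example   # would be written to /nsr/res/hostids
```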
5. Restart the server by taking the NetWorker virtual server offline and then putting it
back online.
6. From the NetWorker Administration window, note the host ID number for the
appropriate cluster license.
7. Register the NetWorker software as described in the NetWorker Licensing Guide.
In all examples, the physical nodes in the cluster are dbkpsr01 and dbkpsr02.
1. Mount the VxVM file system as /nsrdata on the dbkpsr01 node (active node) by
running the following:
mount /dev/vx/dsk/nsrdg/nsrvol /nsrdata
6. On dbkpsr01, run:
rcp -r -p /etc/cmcluster/.nsr_cluster dbkpsr02:/etc/cmcluster
This file may need to be modified prior to creating the new binary configuration file.
More information is provided in the Cluster Installation Guide.
9. On dbkpsr02, create the NetWorker directory in /etc/cmcluster.
10. On dbkpsr01, run:
rcp -r -p /etc/cmcluster/networker/legato.control
dbkpsr02:/etc/cmcluster/networker
This file might require modification before creating the new binary configuration file.
More information is provided in the Cluster Installation Guide.
14. On each node, check if nsrexecd is running:
ps -ef | grep nsrexecd
This will install the NetWorker server in the ServiceGuard cluster. When the NetWorker
server is not in the cluster, the following appears on both nodes:
nsr -> /nsr.NetWorker.local
When the NetWorker server is in the cluster (the NetWorker server service is running
on active node):
Active node:
nsr -> /nsrdata/nsr
Passive node:
nsr -> /nsr.NetWorker.local
18. Create a file called servers in this directory and enter dbkpsr as the text (this will be for
/nsr.NetWorker.local directory - not in cluster mode).
19. On dbkpsr02, run:
cd /nsr/res
20. Create a file called servers in this directory and enter dbkpsr as the text (this will be for
the /nsr.NetWorker.local directory - not in cluster mode).
21. On both nodes, edit /sbin/init.d/networker and search for nsrexecd. It should not
have any "-s" parameters.
22. On both nodes, run /sbin/init.d/networker stop.
23. On both nodes, run /sbin/init.d/networker start.
24. On dbkpsr01, start the NetWorker service using cluster management software. Run:
cmrunpkg networker
25. Create a file called servers in this directory and enter dbkpsr as the text (this will be for
the /nsrdata/nsr directory, in cluster mode).
26. On dbkpsr01, run:
rcp -r -p /etc/cmcluster/.nsr_cluster dbkpsr02:/etc/cmcluster
Does not require the creation and configuration of the NetWorker.clucheck and
.nsr_cluster files.
If desired, the previous method of configuring the NetWorker software using the
NetWorker.clucheck and .nsr_cluster files can still be used.
To configure the NetWorker client software in the cluster:
If the previous method of configuring the NetWorker client in a cluster will be used, create
the NetWorker.clucheck file and configure the .nsr_cluster file as detailed in Steps 1 and 2.
If the new method of configuring the NetWorker server will be used, proceed to step 3.
1. Create the NetWorker.clucheck file in the /etc/cmcluster directory.
2. Configure the .nsr_cluster file in the /etc/cmcluster directory to determine which
mount points an MC/ServiceGuard or MC/LockManager package owns. The
.nsr_cluster file should have an entry for the NetWorker shared mount point, which is
owned by the NetWorker package.
a. Add the name and path of each mount point to the file in the following format:
pkgname:published_ip_address:owned_path [:...]
IPv4 addresses can also be enclosed in square brackets, but it is not necessary.
b. Ensure that everyone has read permissions for the .nsr_cluster
file.
c. Additional paths, preceded by colons, can be added as required. The following is
an example of a typical .nsr_cluster file:
networker:192.168.109.41:/vg011
oracle:192.168.109.10:/vg021:/ora_data1:/ora_data2
If an HP-UX MC/ServiceGuard package does not contain a disk resource, it does
not require an entry in the .nsr_cluster file. However, if this diskless package is
online and the only package on that cluster node, there might be cmgetconf
messages generated in the /var/adm message file during the backup.
To avoid these messages, allocate a file system that is mounted to a mount point
and add this mount point along with the package name and IP address into the
.nsr_cluster file in the format shown above. This file system is not backed up, but
should be mountable on each cluster node that the diskless package might fail
over to.
3. Configure the NetWorker client software as cluster aware:
a. Run the networker.cluster script (located in the /opt/networker/bin directory) on
one of the nodes where the NetWorker software is installed.
b. Follow the prompts to configure the NetWorker package in Modular mode:
a. Do you wish to continue? [Yes]? Press Enter.
b. Enter directory where local NetWorker database is installed [/nsr]? Press Enter.
c. Do you wish to use the updated NetWorker integration framework? Yes or No
[Yes]?
If the previous method to configure the NetWorker software using the
NetWorker.clucheck and .nsr_cluster files was performed, enter No at this
prompt.
If the updated method will be used to configure the NetWorker software (Steps
1 and 2 were not performed), enter Yes at this prompt.
d. Do you wish to configure for both NetWorker server and client? Yes or No [Yes]?
Enter No.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To specify a list of NetWorker servers that can access the cluster, on each node in the
cluster:
a. Edit or create the local /nsr/res/servers file and specify the NetWorker servers, one
per line, that require access to the client. The first entry becomes the default
NetWorker server.
b. Check the NetWorker boot-time startup file for nsrexecd -s arguments and delete
any that exist. The nsrexecd -s argument supersedes the /nsr/res/servers file.
For example, delete the arguments for nsrexecd -s in the following file:
vi /etc/init.d/networker
nsrexecd -s venus -s mars
The nsrexecd daemon named in the file should appear only as:
nsrexecd
c. Use the NetWorker boot-time startup file to stop and restart the NetWorker
software, as follows:
/sbin/init.d/networker stop
/sbin/init.d/networker start
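The -s cleanup in step b can be done with sed; a sketch against a temporary copy rather than the live startup file:

```shell
# Strip every "-s server" argument from the nsrexecd startup line.
# A scratch copy with the example line from the text is used here.
printf 'nsrexecd -s venus -s mars\n' > /tmp/networker.rc
sed 's/ -s [^ ]*//g' /tmp/networker.rc > /tmp/networker.rc.new
cat /tmp/networker.rc.new
```

On a real node, run the substitution against /sbin/init.d/networker and verify the result before replacing the file.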
4. Run a test probe to verify that the Client and Group resources have been properly configured.
On the node on which the NetWorker server resides, run this command:
savegrp -pv -c <client_name> -G <group_name>
If the test probe does not display the scheduled backups and index, see Track
scheduled saves on page 89 for more information.
2. Shut down the client services on each node from which the NetWorker software is
being removed:
nsr_shutdown
c. If the previous method was used to install the NetWorker software, remove the
following file from the /etc/cmcluster directory:
NetWorker.clucheck
4. To avoid running the NetWorker server software in the cluster, remove the NetWorker
package software. The EMC NetWorker Installation Guide provides information on how
to uninstall the software.
If the test probe does not display all the scheduled save sets:
Ensure that the save sets defined for the client are owned by that client. If necessary,
redistribute the client save sets to the appropriate Client resources.
Misconfiguration of the Cluster resources might cause scheduled save sets to be
dropped from the backup. The EMC NetWorker administration guide provides more
information.
Type this command to override the default path ownership rules:
touch networker_bin_dir/pathownerignore
If the pathownerignore file was created, check that the NetWorker scheduled save
uses the correct client index. If the wrong index is used, the save sets can be forced to go
to the correct index:
1. From the NetWorker Administration window, select a client and edit its properties.
2. For the Backup Command attribute, type the name of a backup script that contains
save -c client_name.
The NetWorker administration guide provides details about the Backup Command
attribute.
CHAPTER 10
Microsoft Cluster Server (MSCS) Installation
This chapter includes these sections:
Cluster terminology................................................................................................. 92
Installation requirements ........................................................................................ 92
Update the NetWorker software............................................................................... 93
Install the NetWorker server .................................................................................... 94
Configure a backup device for the NetWorker virtual server.................................... 101
Move the NetWorker server to another node.......................................................... 103
Register NetWorker licenses for cluster failover...................................................... 103
Restart the NetWorker server services ................................................................... 104
Install only the NetWorker client software in a cluster ............................................ 104
Uninstall the NetWorker software from MSCS ........................................................ 105
Troubleshoot the NetWorker software in the MSCS environment ............................ 106
Cluster terminology
NetWorker documentation uses the following cluster terminology:
Private disk: A local disk on a cluster node. A private disk is not available to other nodes within the cluster.
Cluster client: A NetWorker client within a cluster; this can be either a virtual client, or a NetWorker Client resource that backs up the private data that belongs to one of the physical nodes.
Virtual client: A NetWorker Client resource that backs up data that belongs to a highly available service or application within a cluster. Virtual clients can fail over from one cluster node to another.
Stand-alone server: A NetWorker server that is running within a cluster, but not configured as a highly available application. A stand-alone server does not have failover capability.
Installation requirements
This section specifies the software and hardware required to install and configure the
NetWorker server and client software within an MSCS environment. Consider the following:
If nsrexecd is running, the NetWorker.clustersvr file cannot be modified.
3. Rename the <NetWorker_install_path>\bin\NetWorker.clustersvr file to
<NetWorker_install_path>\bin\NetWorker.nocluster.
4. Update the NetWorker software. The NetWorker Installation Guide provides
instructions.
5. Stop the NetWorker Backup and Recover Server service.
6. Open the NetWorker Backup and Recover Server Properties dialog box and change the
startup type from Automatic to Manual.
7. Rename the NetWorker.nocluster file to NetWorker.clustersvr.
8. On the second node of the cluster:
a. Rename the <NetWorker_install_path>\bin\NetWorker.clustersvr file to
<NetWorker_install_path>\bin\NetWorker.nocluster.
b. Update the NetWorker software.
c. Stop the NetWorker Backup and Recover Server service.
d. Open the NetWorker Backup and Recover Server Properties dialog box and change
the startup type from Automatic to Manual.
e. Rename the NetWorker.nocluster file back to NetWorker.clustersvr.
9. Bring the NetWorker server Cluster resource group back online.
If you install NetWorker software without rebooting, running the cluster administrator for
the first time can result in an error due to the cluster administrator extension DLL not being
reloaded. If this error occurs, close the cluster administrator interface and reload the
software by running this from the command line:
regsvr32 /u nsrdresex.dll
In this configuration, the NetWorker server software is usually installed on only one of the
nodes in the cluster, and the NetWorker client software is installed on each node in the
cluster.
The following rules apply:
The NetWorker server does not have to be configured as a resource managed by the
cluster.
This creates the NetWorker Server cluster resource type. It also registers the resource
extension module so the NetWorker Server resource type can be managed from this
node.
3. On the second cluster node, type:
regcnsrd -r
This registers the resource extension module so the NetWorker Server resource type
can be managed from this node.
The NetWorker server must have its own dedicated Shared Disk resource. The cluster
service quorum disk cannot be used for this.
8. Select the NetWorker Group (Disk Group) that was identified in the previous step and
create the following resources for the NetWorker Group:
NetWorker IP Address resource on page 97
NetWorker Network Name resource on page 97
NetWorker Server resource on page 98
Do not create multiple instances of the NetWorker Server resources. Creating more
than one instance of a NetWorker Server resource interferes with how the existing
NetWorker Server resources function.
For the Name attribute, type any descriptive name (without spaces). For the
Description attribute, type any descriptive text.
3. Click Next. In the Possible Owners dialog box, type the hostname of each required
node in the cluster. For example:
Possible Owners: uranus, pluto
4. Click Next. In the Dependencies dialog box, specify the shared disk you associated
with the NetWorker virtual server. For example:
Dependencies: Disk P:
5. Click Next. In the TCP/IP Address Parameters dialog box, type the IP address of the
NetWorker virtual server. For example, 10.0.0.4.
For the Name attribute, type any descriptive name (without spaces). For the
Description attribute, type any descriptive text.
3. Click Next.
4. In the Possible Owners dialog box, type the hostname of each required node in the
cluster. For example:
Possible Owners: pluto, uranus
5. Click Next. In the Dependencies dialog box, type the name of the NetWorker IP
Address resource. For example:
Dependencies: NSR_IP
6. Click Next. In the Network Name Parameters dialog box, type the name of the
NetWorker virtual server. For example:
Name: neptune
For the Name attribute, type any descriptive name (without spaces). For the
Description attribute, type any descriptive text. For the Resource Type attribute, type
the name of the resource type created in Task 2: Create and register the cluster
resource type on page 95.
3. Click Next. In the Possible Owners dialog box, type the hostname of each required
node in the cluster. For example:
Possible Owners: pluto, uranus
4. Click Next. In the Dependencies dialog box, type the name of the NetWorker Network
Name resource. For example:
Dependencies: NSR_NetworkName
5. Click Next. In the NetWorker Server Parameters dialog box, complete the following
attributes:
Server Name: (leave blank)
NsrDir: P:\nsr
Additional Arguments: (leave blank)
The directory path entered for the NsrDir attribute must reside on the NetWorker
server shared disk.
6. After completing these steps, a message appears indicating that the resource was
successfully created. Verify that a new NetWorker Server type resource was created in
the selected resource group.
The MSCS software provides an option to set up an application of the Generic
Application type within a Group resource. Do not create a Generic Application resource
for the NetWorker virtual server.
b. Click Ok.
4. Restrict the set of NetWorker servers that can back up a particular client. Edit the
%SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file and add the
NetWorker virtual host as well as each cluster node to the list of servers.
Consider the following:
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
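As a sketch, a servers file for this configuration lists the virtual host and each cluster node, one name per line, by both short name and FQDN (the hostnames and domain below are hypothetical, reusing the example node names from this chapter):

```
neptune
neptune.example.com
pluto
pluto.example.com
uranus
uranus.example.com
```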
The only save sets that restart after a virtual client failover are those that belong to a
group in which Autorestart is enabled and Manual Restart is disabled.
6. Run a test probe to verify that the Client and Group resources are properly configured.
On the node on which the NetWorker server resides, run this command:
savegrp -pv -c client_name group_name
If the expected scheduled backups and index do not appear, perform the following
workaround:
a. Create an empty file named pathownerignore in the directory where the NetWorker
savefs command is installed (the default location is C:\Program Files\EMC
NetWorker\nsr\bin). This allows a valid save set to be scheduled for backup for a
NetWorker client.
This file must be created on each NetWorker server and NetWorker client host in
the cluster.
b. To bypass the default ownership scheduling rules, run this command:
echo NUL: > NetWorker_install_path\bin\pathownerignore
This command allows any path to be scheduled for backup for a NetWorker client.
A NetWorker scheduled save might use a client index name other than the one you
want it to. To override this default, run a manual save with the -c option:
save -c client_name.
7. Restrict the set of NetWorker servers that can back up a particular client. Edit the
%SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file and add the
NetWorker virtual host as well as each cluster node to the list of servers.
Consider the following:
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
After the client configuration is complete:
The NetWorker cluster server uses the IP address of the NetWorker virtual host,
regardless of which cluster node currently masters the NetWorker virtual server.
The NetWorker cluster server takes the identity of the NetWorker virtual server's
hostname, regardless of which cluster node is currently running the NetWorker
service.
The first time NetWorker software runs, it creates the Client resource for the NetWorker
virtual host. Client resources must be created manually for any cluster node that is to
be backed up by the NetWorker virtual host.
MSCS does not support shared tapes. You cannot configure the NetWorker virtual server
with tape devices connected to a shared bus.
MSCS does support disk devices connected to a shared bus. However, it is strongly
recommended that you not use file type devices connected to a shared bus.
The NetWorker virtual server requires a local backup device to save the bootstrap and the
server indexes. Use either of these two methods to back up the bootstrap and the server
indexes on a clustered NetWorker server:
Configure a device that belongs to the NetWorker virtual server and is shared between
the physical nodes, so that the device is always available regardless of the node on
which the NetWorker virtual server is running.
Save the NetWorker virtual server bootstrap and indexes to a storage node. In this
method, the storage node device must be attached to the cluster node on which the
NetWorker virtual server is currently running. A NetWorker virtual server running on a
two-node cluster requires this minimum device configuration:
Configure each cluster node as a storage node for the NetWorker virtual server.
Instructions are available in Define a cluster node as a storage node on
page 102.
In the Preferences tab of the NetWorker virtual server Client resource, list the
storage nodes that are enabled to store client data in the Storage Nodes attribute,
in this order:
1. One cluster node
2. The other cluster node
3. The nsrserverhost
The nsrserverhost must be listed last in the client's storage node list.
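For example, with the two cluster nodes from the earlier examples (the hostnames are hypothetical), the ordered Storage Nodes attribute would read:

```
Storage Nodes: pluto
               uranus
               nsrserverhost
```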
Do not configure a storage node on a cluster node that is running the NetWorker server
software. Before configuring a storage node, move the NetWorker virtual server to another
node in the cluster.
The NetWorker Remote Exec service must be running on each storage node in the cluster.
where host_id represents the NetWorker server host ID values obtained in Task 1:
Install the NetWorker software on page 95.
5. Restart the NetWorker virtual server by taking it offline and then bringing it back
online.
The host ID now displayed in the NetWorker Administrator program (or by running the
lgtolic -i command) is the composite hostid. This composite host ID is required for
permanent registration in a clustered environment.
6. Register the NetWorker software for permanent use. More information is available in
the NetWorker Licensing Guide.
2. Log in to another node in the cluster that can host the NetWorker server.
3. Remove the node that you are uninstalling from the Possible Owners attribute in the
NetWorker Server resource.
To determine the possible owners of the NetWorker Server cluster resource, review the
properties of the NetWorker Server resource.
4. Close the Cluster Administrator program on the node where you plan to uninstall the
NetWorker software.
5. Uninstall the NetWorker software from the node.
The NetWorker Installation Guide provides detailed instructions.
The error might be caused by the NetWorker Remote Exec service not running on the
storage node.
If the NetWorker Remote Exec service is not running:
1. On the Storage Node, from Control Panel, select Services.
2. Right-click the NetWorker Remote Exec service name and select Start.
This error might occur because the NetWorker Client resources for each of the physical
nodes in the cluster are missing.
To correct the error:
1. Create a NetWorker Client resource for each physical node that is allowed to own the
virtual cluster client.
2. Rerun the backup.
CHAPTER 11
Microsoft Failover Cluster on Windows Server
2008
This chapter includes these sections:
Installation requirements
Ensure that the following software is installed on each node in the cluster:
Microsoft Windows Server 2008, Standard, Enterprise, and Datacenter Editions, 32-bit
or x64 version
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date information about the NetWorker software and hardware requirements.
Task 2: Create and register the NetWorker Server resource type on page 111
Task 4: Create NetWorker Server virtual service and required components resource
on page 112
Task 5: Set a dependency between the NetWorker server IP address and its shared
disk on page 113
Task 10: Verify that the Client and Group resources have been configured on
page 116
This creates the NetWorker server resource type. It also registers the NetWorker server
resource extension module so that the NetWorker Server resource type can be
managed from this node.
3. On the rest of the cluster nodes, type:
regcnsrd -r
This registers the NetWorker server resource extension module so the NetWorker
Server resource type can be managed from this node.
Task 4: Create NetWorker Server virtual service and required components resource
To create the NetWorker Server virtual service:
1. In the Failover Cluster Management program, select the cluster name.
2. From the Action menu, select Configure a Service or Application. The High Availability
Wizard is displayed.
3. On the Before You Begin page, click Next.
4. On the Select Service or Application page, select Other Server from the list, and then
click Next.
5. On the Client Access Point page, complete the Name and Address attributes, and then
click Next. For example:
Name: bu-ara
Networks: 10.5.162.0/23
Address: 10.5.163.219
6. On the Select Storage page, check the storage volume for the NetWorker server, and
then click Next. For example,
Cluster Disk 2
7. In the Select Resource Type list, select the NetWorker Server resource type that was
created in Task 2: Create and register the NetWorker Server resource type on
page 111, and click Next.
8. On the Confirmation page, confirm the resource configurations and click Next. The
High Availability Wizard creates the resource components and the group.
When the Summary page appears, ignore the following message that appears on the
page, because the steps in Task 5: Set a dependency between the NetWorker server
IP address and its shared disk on page 113 detail the required additional
configuration steps:
The group will not be brought online since the resources may need
additional configuration. Please finish configuration and bring the
group online.
Do not create a Generic Application resource for the NetWorker virtual server.
Task 5: Set a dependency between the NetWorker server IP address and its shared disk
To set a dependency between NetWorker server IP address and its shared disk:
1. In the Failover Cluster Management program, right-click the IP Address resource just
created in the NetWorker virtual service, and then click Properties.
2. On the Dependencies tab:
a. Click Insert to add an empty dependency line.
b. Select the shared disk associated with the NetWorker virtual server from the
Resource list. For example:
Cluster Disk 2
Note: The directory path entered for the NsrDir attribute must reside on the
NetWorker server shared disk.
b. Click Ok to close the Properties window of the New NetWorker Server resource.
Note: Do not create multiple instances of the NetWorker Server resources.
Creating more than one instance of a NetWorker Server resource interferes with
how the existing NetWorker Server resources function.
5. Restrict the set of NetWorker servers that can back up a particular client. Edit the
%SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file and add the
NetWorker virtual host as well as each cluster node to the list of servers.
Consider the following:
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
c. For each group, ensure that Manual Restart is not selected in the Options attribute
(under the Advanced tab).
The only save sets that restart after a virtual server failover are those that belong to a
group in which Autorestart is enabled and Manual Restart is disabled.
After the client configuration is complete:
The NetWorker virtual server uses the IP address of the NetWorker virtual host,
regardless of which cluster node currently masters the NetWorker virtual server.
The NetWorker virtual server takes the identity of the NetWorker virtual server's
hostname, regardless of which cluster node is currently running the NetWorker
service.
The first time NetWorker server runs, it creates the Client resource for the NetWorker
virtual host. Client resources must be created manually for any cluster node that is to
be backed up by the NetWorker virtual server.
Task 10: Verify that the Client and Group resources have been configured
To verify that the Client and Group resources are properly configured, run a test probe for
each client from the node where the NetWorker application is running:
savegrp -pv -c client_name group_name
If the test probe does not display all the scheduled save sets:
Ensure that the save sets defined for the client are owned by that client. If necessary,
redistribute the client save sets to the appropriate Client resources.
Misconfiguration of the Cluster resources might cause scheduled save sets to be
dropped from the backup. The NetWorker administration guide provides more
information.
To override the scheduled save ownership rules, create an empty text file named
pathownerignore in the directory that contains the savefs command. By default, the
NetWorker bin location is C:\Program Files\EMC NetWorker\nsr\bin.
When this file exists, it allows any path to be backed up for a client, whether it is
owned by the virtual client or physical node. The NetWorker 8.0 Administration Guide
provides more information.
If the pathownerignore file was used, check that the NetWorker scheduled save uses the
correct client index.
If the wrong index is used, the save sets can be forced to go to the correct index:
1. From the NetWorker Administration window, select a client and edit its properties.
2. For the Backup Command attribute, type the name of a backup script that contains:
save -c client_name
The NetWorker 8.0 Administration Guide provides details about the Backup Command
attribute.
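A minimal sketch of such a backup script, assuming a Windows batch file: the file name, the client name, and the argument forwarding are all assumptions, and NetWorker places restrictions on custom backup commands (for example, the script name must begin with save or nsr), so verify the details in the NetWorker 8.0 Administration Guide before relying on this:

```
@rem savevirt.bat - hypothetical wrapper named in the Backup Command attribute
@rem force the save sets into the desired client index, forwarding the
@rem arguments that savegrp supplies at run time
save -c client_name %*
```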
Task 4: Verify that the Client and Group resources have been configured on
page 118
Ensure the NetWorker client software is installed on every physical node in the cluster to
be backed up.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To define the list of trusted NetWorker servers, perform the following steps on each node
in the cluster:
1. Edit or create the %SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file.
2. Add the NetWorker servers, one per line, that require access to this client.
The first time the NetWorker application runs, it creates the Client resource for the
NetWorker virtual server.
4. To back up a NetWorker Windows client that is part of both an Active Directory
domain and a DNS domain, perform one of these steps:
Define the Active Directory domain name, which is the Full Computer Name, in the
NetWorker server's /etc/hosts file.
Define the AD domain, which is the Full Computer Name, in the Active Directory
DNS. Also, on the NetWorker server, open the Client resource and add the Full
Computer Name in the alias list.
5. If the NetWorker Remote Exec service is started before the cluster service is enabled
on the node, restart the NetWorker Remote Exec service on each cluster node.
6. Schedule backups by using the NetWorker application.
Task 4: Verify that the Client and Group resources have been configured
To verify that the Client and Group resources on the NetWorker server are properly
configured, run a test probe for each client from the node where the NetWorker server is
running:
savegrp -pv -c <client_name> <group_name>
If the test probe does not display all the scheduled save sets:
Ensure that the save sets defined for the client are owned by that client. If necessary,
redistribute the client save sets to the appropriate Client resources.
Misconfiguration of the Cluster resources might cause scheduled save sets to be
dropped from the backup. The NetWorker administration guide provides more
information.
This command allows any path to be backed up for a client, whether it is owned by the
virtual client or physical node. The NetWorker administration guide provides more
information.
If the pathownerignore file was used, check that the NetWorker scheduled save uses the
correct client index. If the wrong index is used, the save sets can be forced to go to the
correct index:
1. From the NetWorker Administration window, select a client and edit its properties.
2. For the Backup Command attribute, type the name of a backup script that contains:
save -c client_name
The NetWorker 8.0 Administration Guide provides details about the Backup Command
attribute.
Before uninstalling NetWorker software from a cluster node, close the Failover Cluster
Management program.
2. Log in to another node in the cluster that can host the NetWorker server.
3. Remove the node that you are uninstalling from the Possible Owners attribute in the
NetWorker Server resource.
To determine the possible owners of the NetWorker Server cluster resource, review the
properties of the NetWorker Server resource.
4. Close the Failover Cluster Management program on the node where you plan to
uninstall the NetWorker software.
5. Uninstall the NetWorker software from the node. The NetWorker 8.0 Installation Guide
provides detailed instructions.
CHAPTER 12
Sun Cluster Version 3.2 Installation
This chapter includes these sections:
Installation requirements
This section specifies the software and hardware required for installing and setting up the
NetWorker server or client software within a Sun Cluster environment.
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date information about software and hardware requirements.
Software requirements
The following software must be installed on each node in the cluster:
Volume Manager software, for example, Solstice DiskSuite or Solaris Volume
Manager.
Highly available storage nodes are not supported.
Hardware requirements
The following hardware requirements must be met:
A multihosted disk is used as a mount point for the global file systems. These contain
the shared /nsr area.
A device with local affinity for the local bootstrap backup that is connected to all the
nodes within the cluster.
Configuration options
Refer to the NetWorker Administration Guide for information about:
Example configuration values used in this chapter: virtual hostname clus_vir1;
IP address 192.168.1.10; globally mounted directory /global/nw; cluster
configuration script /usr/sbin/networker.cluster; hostids file /nsr/res/hostids.
Figure 5 Sample cluster configuration. The figure shows two nodes, clus_phys1
(Node 1) and clus_phys2 (Node 2), each with a local disk, connected to each other
by a private network and attached to the public network. The logical host
clus_log1 runs on Node 1; if Node 1 fails, clus_log1 fails over to Node 2.
2. If the NetWorker server is configured in a Sun Cluster, determine the location of the
NetWorker server global /nsr directory on the shared storage.
For example, on cluster_node1 the global /nsr directory is /global/nw/nsr:
cluster_node1: ls -l /nsr
lrwxrwxrwx
1 root root 14 May 30 15:06 /nsr -> /global/nw/nsr
3. If the target machine is a NetWorker server, perform a bootstrap and index backup.
For example:
savegrp -vvv -O group
The specified group contains all of the NetWorker clients in the datazone. This ensures
that all client file indexes are backed up.
If a group that contains all of the clients does not exist, run the savegrp command
more than once, specifying a different group each time, until all client indexes are
backed up.
Ensure the media pool associated with the group has appendable media available.
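The repeated savegrp runs described above can be scripted; a minimal Bourne shell sketch, assuming hypothetical group names that together cover every client in the datazone:

```
# back up the bootstrap and client file indexes for each group in turn
# (the group names below are placeholders for the groups in this datazone)
for group in Default AppGroup
do
    savegrp -vvv -O "$group"
done
```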
4. Remove the NetWorker software cluster configuration files and uninstall the NetWorker
software. Do not remove the global and local /nsr directories.
Uninstall the NetWorker software on page 148 provides instructions for uninstalling
the NetWorker software.
5. If required, upgrade the Sun Cluster software. For instructions, refer to the Sun Cluster
documentation.
Do not relocate the NetWorker software. By default, the NetWorker software is
installed in the /usr directory.
7. Ensure that:
You specify the same local /nsr and global /nsr directories.
The NetWorker Client Type resource properties for Owned_paths and Clientname
are the same as before the upgrade.
The NetWorker Config_dir resource contains the same values as before the
upgrade.
The Network_resources_used property contains the same value that the
Resource_dependencies property had before the upgrade.
Do not relocate the NetWorker software. By default, the NetWorker software is installed
in the /usr directory.
Task 3: Create an Instance of the NetWorker server resource group on page 128
Task 5: Grant access to the highly available NetWorker server on page 131
Task 8: Create instances of the NetWorker Client resource type on page 134
Task 9: Register licenses for the highly available NetWorker server on page 135
Ensure that the following apply:
- The operating environment and Sun Cluster 3.2 software are already installed on all
nodes in the cluster and that those nodes boot in cluster mode.
- PATH environment variable includes /usr/sbin and /usr/cluster/bin.
To install the NetWorker software on nodes that will be running the NetWorker resource
group:
1. Access the NetWorker software from the distribution media. The NetWorker
Installation Guide provides installation instructions.
2. Keep a copy of the current configuration. The NetWorker software installation script
modifies the /etc/rpc and /etc/syslog.conf files during the installation process. Type
these commands:
cp /etc/rpc /etc/rpc.old
cp /etc/syslog.conf /etc/syslog.conf.old
4. Press Enter to install all of the packages on the server. Start the NetWorker daemons
only after the last NetWorker package is installed.
Install selected software packages in this order:
a. LGTOclnt (client software package)
b. LGTOnode (storage node software package)
c. LGTOserv (server software package)
d. LGTOman (optional man pages)
5. Start the NetWorker daemons:
/etc/init.d/networker start
6. Type q to exit.
Do not relocate the NetWorker software. By default, the NetWorker software is
installed in the /usr directory.
2. Ensure that the /etc/hosts file on each cluster node contains the name of the logical
host. The logical hostname can be published in the Domain Name System (DNS) or
Network Information Services (NIS).
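For example, an /etc/hosts entry publishing the logical host, using the sample hostname and address from this chapter:

```
# /etc/hosts on each cluster node
192.168.1.10    clus_vir1
```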
3. From each node in the cluster that will run the NetWorker server process:
a. Run the cluster configuration script networker.cluster located in /usr/sbin.
This script defines the LGTOserv and the LGTOclnt resource types that the
NetWorker software requires.
b. Type the information specified for each system prompt:
Enter directory where local NetWorker database is installed
[/nsr]?
Type the location of the local NetWorker database directory provided during the
installation procedure. For example: /space/nsr
Do you wish to configure for both NetWorker server and client?
Yes or No [Yes]?
Type Yes to configure the server software; this also configures the client
software by default. Type No to configure only the client software.
Do you wish to add now the site-specific values for:
NSR_SHARED_DISK_DIR and NSR_SERVICE_ID
Yes or No [Yes]?
Type the pathname of the globally mounted /nsr directory that will contain the
configuration information for the highly available NetWorker server. For
example: /global/nw
For more information, see System information requirements on page 123.
To undo any changes to the configuration, run the networker.cluster -r script and
then run the networker.cluster script again.
Logical hostname
LGTO.serv resource
LGTO.clnt resource
HAStoragePlus (optional)
To create an instance of the NetWorker Server resource group, perform these steps on one
node in the cluster:
1. Create a resource group:
clresourcegroup create networker
For more information on the SUNW.HAStoragePlus resource and the setup for
locally mounted global systems, refer to the Sun Cluster 3.2 documentation.
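As a hedged sketch, registering the SUNW.HAStoragePlus resource type and creating an instance in the networker resource group might look as follows; the mount point and the resource name networker_storage are assumptions, so verify the property names against the Sun Cluster 3.2 documentation:

```
# register the resource type once per cluster
clresourcetype register SUNW.HAStoragePlus
# create an instance that manages the globally mounted /global/nw
clresource create -g networker -t SUNW.HAStoragePlus \
    -x FileSystemMountPoints=/global/nw networker_storage
```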
If the logical host resource name is different from the hostname it specifies, use this
command to do the following:
a. Set the client name to the virtual hostname.
b. Set the optional network_resource property to the logical host resource name.
For example:
clresource create -g networker -t LGTO.clnt \
-x clientname=virtual_hostname -x network_resource=clus_vir1 \
-x owned_paths=/global/clus_vir1/nw,/global/clus_vir1/space
client
If the logical host resource name is different from the hostname it specifies, set the
optional servername property to the virtual hostname:
clresource create -g networker -t LGTO.serv \
-y Resource_dependencies=clus_vir1 \
-x servername=virtual_hostname -x config_dir=/global/clus_vir1/nw \
server
If you are using a HAStoragePlus resource, set the resource_dependencies property to
the HAStoragePlus resource name.
6. Start the NetWorker resource group:
clresourcegroup online networker
Example 1 A highly available NetWorker server
In this example, a highly available NetWorker server uses the logical hostname
backup_server. The highly available NetWorker server uses /global/networker (globally
mounted file system) as its configuration directory.
1. Create a resource group with the name backups:
clresourcegroup create backups
2. Add the logical hostname resource type to the resource group created in the previous
step:
clreslogicalhostname create -g backups backup_server
3. Create an instance of the LGTO.serv resource type with the name networker_server.
This resource belongs to the resource group backups and has a dependency on the
logical host created in the previous step.
Specify the configuration directory on the globally mounted file system
/global/networker:
clresource create -g backups -t LGTO.serv \
-y Resource_dependencies=backup_server \
-x config_dir=/global/networker networker_server
4. The NetWorker logical host is also a client of the highly available NetWorker server.
Create an instance of the LGTO.clnt resource type for the logical host backup_server
within the resource group backups. The name of this resource is networker_client:
clresource create -g backups -t LGTO.clnt \
-x clientname=backup_server -x owned_paths=/global/networker \
networker_client
5. Start the highly available service associated with the resource group backups.
clresourcegroup online backups
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
b. Edit or create the servers file in the globally mounted /nsr area. For example,
/global/nw/res/servers:
Add the set of NetWorker servers, one per line, to be granted access to this
client.
Add an entry for the NetWorker logical hostname first.
Add entries for each physical host that can run the NetWorker resource group.
For example:
clus_vir1
clus_phys1
clus_phys2
b. Check the NetWorker boot-time startup file to see whether nsrexecd is being run
with the -s option.
If the -s option exists, remove all occurrences of -s servername in the file.
3. On one node in the cluster, start the NetWorker daemon by using the cluster
management software, as follows:
clresourcegroup online networker
4. If required, grant access to the NetWorker virtual server on clients outside of the
cluster:
On each client that is outside of the cluster:
a. To shut down the NetWorker processes, type:
nsr_shutdown
b. Click Ok.
/nsr -> /nsr.NetWorker.local
/nsr.NetWorker.local -> /var/nsr
In order for their save sets to restart after a virtual client or NetWorker server failover,
save groups must have the Autorestart attribute enabled and the Manual Restart
option disabled.
2. Make each physical client within the cluster a client of the NetWorker server. For each
physical client in the cluster:
a. Create a new client.
b. Type the name of the physical client for the Name attribute.
3. Make each virtual client within the cluster a client of the virtual NetWorker server. For
each virtual client in the cluster:
a. Create a new NetWorker client.
b. For the Name attribute, type the name of the virtual client.
c. In the Remote Access attribute, add entries for each physical client within the
cluster. For example:
root@clus_phys1
4. Run a test probe to verify that the Client resource and the Group resource have been
properly configured.
Type this command on the node on which the NetWorker server resides:
savegrp -pv -c client_name group_name
If the test probe does not display the correct scheduled backups and index, refer to
the NetWorker 8.0 Administration Guide.
All globally mounted file systems (except the /global/.devices/... file systems) must be
owned by a resource group and defined in a NetWorker Client resource type. If the file
systems are not properly configured, multiple copies will be backed up for each cluster
node.
To back up the data for a virtual client:
1. Create an instance of the NetWorker Client resource as part of an existing resource
group that contains a logical host or shared addresses. For example:
clresource create -g resource_group_name -t LGTO.clnt \
-x clientname=virtual_hostname -x owned_paths=pathname_1,
pathname_2[,...] resource_name
If the test probe does not display the scheduled backups and index, refer to the
NetWorker Administration Guide.
In this example, the Informix database server is configured to use the DNS registered
hostname informix_lhrs.
An existing failover resource group named informix_rg contains a:
This SUNW.informix database server can access data on a global file system under
/global/informix/config and /global/informix/db.
To add a NetWorker virtual client to the existing resource group informix_rg, type:
clresource create -g informix_rg -t LGTO.clnt \
-x clientname=informix_lhrs \
-x owned_paths=/global/informix/config,/global/informix/db \
informix_clntrs
Example 3 A scalable Apache web server
In this example, an Apache web server is configured to use the DNS registered hostname
apache_sars.
An existing scalable resource group named apache_rg contains a:
This Apache web server accesses data on a global file system under /global/web/config
and /global/web/data.
To add a NetWorker virtual client to the existing resource group apache_rg, type:
clresource create -g apache_rg -t LGTO.clnt \
-x clientname=apache_sars \
-x owned_paths=/global/web/config,/global/web/data \
apache_clntrs
4. Ensure that the highly available NetWorker server is defined as a part of the cluster.
5. Run the following command, and capture the output, on each node that is currently
running the NetWorker server resource group:
hostid
For example:
12345678:87654321
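The per-node values are combined into a composite host ID recorded in the hostids file under the globally mounted /nsr area; a sketch using the sample values above (the colon-separated format is inferred from the example output, so confirm it against the NetWorker Licensing Guide):

```
# /nsr/res/hostids on the shared /nsr area
12345678:87654321
```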
7. Type the following commands to restart the server by taking the highly available
NetWorker server offline and then putting it back online:
clresourcegroup offline networker
clresourcegroup online networker
Do not change the logical hostname for the highly available NetWorker server. If you
change it after you update the software, you must permanently license and authorize the
highly available NetWorker server.
Task 2: Define the NetWorker Management server as highly available on page 138
When running the gst_ha.cluster script, ensure that you use the same values for the
logical hostname and for the globally mounted path on all nodes in the cluster.
./gst_ha.cluster
NMC Console Server is in the process of being made a Highly
Available application within Sun Cluster
3.2.0,REV=2003.03.24.14.50.
To complete this task, the following are required:
A Logical host or virtual IP for the Console Server.
A globally mounted dir for the LGTOnmc database.
A GST_HA.serv resource type will be created via this process and
will be needed to configure NMC as a Highly Available Application
within Sun Cluster 3.2.0,REV=2003.03.24.14.50.
Do you wish to continue? [Yes]?
Restarting syslog daemon...
Please enter Logical Hostname to be used by NMC server? hunt
Is the Logical Hostname entered correct (y/n)? y
The lgto_gstdb database should be on a globally mounted
filesystem which can be accessible by the cluster nodes which
will host the Highly Available NMC server.
Please enter the globally mounted shared directory for the
lgto_gstdb database (/global/logicalhost)? /global/hunt/data1
Is the shared directory path entered for the lgto_gstdb database
correct (y/n)? y
Moving /bigspace/lgto_gstdb local gstdb directory to globally
mounted /global/hunt/data1/lgto_gstdb
Resource type GST_HA.serv is not registered
Defining GST_HA.serv resource type with RGM.
NMC has been successfully cluster-configured.
To undo any changes to the configuration, run the gst_ha.cluster -r script and then
run the gst_ha.cluster script again.
3. From one node in the cluster:
a. Create a resource group named "nmc":
clresourcegroup create nmc
Task 3: Create instances of the NetWorker Client resource type on page 141
Ensure that the NetWorker client software is installed on each node in the cluster. Do not
relocate the NetWorker software. By default, the NetWorker software is installed in the /usr
directory.
Do not press the Enter key to accept the default response, All. Accepting the default
installs the server package.
3. Type the appropriate option number to install the client package (LGTOclnt).
4. (Optional) Type the appropriate option number to install the man pages (LGTOman).
5. Type this command to start the NetWorker daemons:
/etc/init.d/networker start
6. When all the applicable packages have been added, and the prompt appears, type q
to exit.
Type the location of the local NetWorker database directory provided during
the installation procedure. For example: /space/nsr.
Do you wish to configure for both NetWorker server and client? Yes
or No [Yes]?
Any changes to the configuration can be undone by running the networker.cluster
-r option and then running the networker.cluster script again.
For information, see System information requirements on page 123.
All globally mounted file systems (except the /global/.devices/... file systems) must be
owned by a logical host and defined in a NetWorker Client resource type. If the file systems
are not properly configured, multiple copies will be backed up for each cluster node.
To back up the data for a virtual client, from any node in the cluster, create an instance of
the NetWorker Client resource as part of an existing resource group that contains a logical
host or shared address.
For example:
clresource create -g resource_group_name -t LGTO.clnt \
-x clientname=virtual_hostname -x owned_paths=pathname_1,
pathname_2[,...] resource_name
The virtual_hostname variable is the name of the resource used by the Sun Cluster logical
hostname (SUNW.LogicalHostname) or shared address (SUNW.SharedAddress) that you
want to configure as a virtual hostname.
Example 4 A highly available Informix database server
In this example, the Informix database server is configured to use the DNS registered
hostname informix_lhrs.
An existing failover resource group named informix_rg contains a:
This SUNW.informix database server can access data on a global file system under
/global/informix/config and /global/informix/db.
To add a NetWorker virtual client to the existing resource group informix_rg, type:
clresource create -g informix_rg -t LGTO.clnt \
-x clientname=informix_lhrs \
-x owned_paths=/global/informix/config,/global/informix/db \
informix_clntrs
To help understand this example, study the following output, which was created by running
the scstat -g command after running the scrgadm command. The scstat -g command
output displays the informix_rg group and its resources, assuming that the informix_rg
group is the only resource group configured in the cluster:
-- Resource Groups and Resources --

Group Name           Resources
----------           ---------
Group: informix_rg   informix_res informix_lhrs informix_clntrs

-- Resource Groups --

Group Name           Node Name   State     Suspended
----------           ---------   -----     ---------
Group: informix_rg   phynode-1   Offline   No
Group: informix_rg   phynode-2   Offline   No

-- Resources --

Resource Name               Node Name   State     Status Message
-------------               ---------   -----     --------------
Resource: informix_res      phynode-1   Offline   Offline
Resource: informix_res      phynode-2   Offline   Offline
Resource: informix_lhrs     phynode-1   Offline   Offline - LogicalHostname offline.
Resource: informix_lhrs     phynode-2   Offline   Offline - LogicalHostname offline.
Resource: informix_clntrs   phynode-1   Offline   Offline
Resource: informix_clntrs   phynode-2   Offline   Offline
In this example, an Apache web server is configured to use the DNS registered hostname
apache_sars. An existing scalable resource group named apache_rg contains a:
This Apache web server accesses data on a global file system under /global/web/config
and /global/web/data.
To add a NetWorker virtual client to the existing resource group apache_rg, type:
clresource create -g apache_rg -t LGTO.clnt \
-x clientname=apache_sars \
-x owned_paths=/global/web/config,/global/web/data \
apache_clntrs
To help understand this example, study the following output, which was created by running
the scstat -g command after running the scrgadm command. The scstat -g command
output displays the apache_rg group and its resources, assuming that the apache_rg
group is the only resource group configured in the cluster:
-- Resource Groups and Resources --

Group Name          Resources
----------          ---------
Group: apache_rg    apache_res apache_sars apache_clntrs

-- Resource Groups --

Group Name          Node Name   State     Suspended
----------          ---------   -----     ---------
Group: apache_rg    phynode-1   Offline   No
Group: apache_rg    phynode-2   Offline   No

-- Resources --

Resource Name             Node Name   State     Status Message
-------------             ---------   -----     --------------
Resource: apache_res      phynode-1   Offline   Offline
Resource: apache_res      phynode-2   Offline   Offline
Resource: apache_sars     phynode-1   Offline   Offline - SharedAddress offline.
Resource: apache_sars     phynode-2   Offline   Offline - SharedAddress offline.
Resource: apache_clntrs   phynode-1   Offline   Offline
Resource: apache_clntrs   phynode-2   Offline   Offline
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To define the list of trusted NetWorker servers, perform these steps on each node in the
cluster:
1. Shut down the NetWorker processes and verify that all NetWorker daemons have
stopped:
nsr_shutdown
ps -ef |grep nsr
2. Edit or create the /nsr/res/servers file and add the set of NetWorker servers, one per
line, that require access to this client.
3. Check the NetWorker boot-time startup file to see whether nsrexecd is being run with
the -s option. If the -s option exists, remove all occurrences of this from the file:
-s servername
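The edit in step 3 can be scripted. The following sketch (a hypothetical helper, not part of the product) strips every `-s servername` pair from a startup file passed as an argument; back up the boot-time startup file before running anything like this against it:

```shell
#!/bin/sh
# strip_s_option FILE : remove every "-s <servername>" pair from FILE.
# Sketch only; the real boot-time startup file path varies by platform.
strip_s_option() {
    file="$1"
    # delete "-s" followed by one whitespace-separated argument
    sed 's/-s  *[^ ][^ ]*//g' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```

Run it once per node against the file that starts nsrexecd, then restart the daemons.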
If a physical client is backed up to a NetWorker server outside the cluster, the name of
any virtual service that can run on the physical node must be added to the Remote
Access list of the physical Client resource.
2. Make each virtual client within the cluster a client of the NetWorker server.
For each virtual client in the cluster:
a. Create a new client.
b. For the Name attribute, type the name of the NetWorker server.
c. For the Remote Access attribute, add entries for each physical client in the cluster.
For example:
root@clus_phys1
whole root
sparse root
Obtain the appropriate documentation from the Sun website for configuration information
on Solaris zone clusters.
Task 2: Configure the NetWorker client software as cluster aware on page 146
Task 3: Create instances of the NetWorker Client resource type on page 146
Type the location of the NetWorker database directory provided during the
installation. For example: /space/nsr.
Task 3: Create instances of the NetWorker Client resource type on page 147
The NetWorker 8.0 Installation Guide describes how to install the NetWorker software.
Type the location of the NetWorker database directory provided during the
installation. For example: /space/nsr.
-- Resource Groups --

Group Name         Node Name       State     Suspended
----------         ---------       -----     ---------
Group: networker   cluster_node1   Offline   No
Group: networker   cluster_node2   Offline   No

-- Resources --

Resource Name             Node Name       State     Status Message
-------------             ---------       -----     --------------
Resource: hastorageplus   cluster_node1   Offline   Offline
Resource: hastorageplus   cluster_node2   Offline   Offline
e. Use the scstat -g command to confirm that the hastorageplus resource is removed.
7. Make a copy of the cluster startup files. Perform this step on all cluster nodes.
cp /usr/lib/nsr/networker* /tmp/
Remove the NetWorker software packages in this order:
- LGTOserv
- LGTOnode
- LGTOnmc
- LGTOclnt
The man pages (LGTOman) and language packages do not have any file dependencies
and can be removed at any time.
For example, to remove all of the packages, type:
pkgrm LGTOserv LGTOnode LGTOnmc LGTOclnt LGTOlic LGTOman
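If you remove the packages one at a time, stopping at the first failure preserves the dependency order listed above. This is a sketch, not a product script; `remove_in_order` is a hypothetical helper, and the remover command is passed in as a parameter so the logic stays generic:

```shell
#!/bin/sh
# remove_in_order CMD PKG... : run CMD on each package in the given order,
# stopping at the first failure so dependent packages are not left orphaned.
remove_in_order() {
    cmd="$1"; shift
    for pkg in "$@"; do
        "$cmd" "$pkg" || { echo "stopped at $pkg" >&2; return 1; }
    done
}
# on a live node (assuming pkgrm is on PATH):
# remove_in_order pkgrm LGTOserv LGTOnode LGTOnmc LGTOclnt
```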
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
In this example, a highly available web server uses the shared address web_server.
The web server services are placed under the control of the apache_rg resource group. The
services access files that are kept in two separate globally mounted file systems:
/global/web/config and /global/web/data. The web server also accesses the raw
partition /dev/md/hunt/rdsk/d30.
To create the NetWorker Client resource named apache_nw, type:
clresource create -g apache_rg -t LGTO.clnt
-x clientname=web_server \
-x owned_paths=/global/web/config,/global/web/data,
/dev/md/hunt/rdsk/d30 apache_nw
For information on backing up raw partitions, refer to the rawasm command as described
in the uasm(1m) man page.
CHAPTER 13
VERITAS Cluster Server Installation
This chapter includes these sections:
Cluster configuration............................................................................. 154
Software requirements.......................................................................... 155
Install the NetWorker server in a cluster ................................................ 155
Install only the NetWorker client software in a cluster ............................ 168
Tracking scheduled saves...................................................................... 172
Uninstall the NetWorker software .......................................................... 173
Cluster configuration
Figure 6 on page 154 displays a general cluster configuration consisting of two or more
nodes and at least one virtual server. In this illustration:
In this example, the virtual server, clus_log1, can fail over between Node 1 and Node 2.
However, the server runs only on one node at a time.
The NetWorker client software runs on all the physical nodes within the cluster. This allows
the backup of the physical client to proceed, as long as the node is running. Only one
instance of the client software (nsrexecd) runs on each physical node within the cluster.
The NetWorker client software is designed to recognize more than one client (physical
client plus virtual client) that might be associated with a single physical node.
[Figure 6 shows two cluster nodes, clus_phys1 (Node 1) and clus_phys2 (Node 2), each with a local disk, joined by a private network and a public network. The virtual server clus_log1 fails over from Node 1 to Node 2 if Node 1 fails.]
Figure 6 General cluster configuration
Software requirements
Ensure that the following software is installed on each node in the cluster:
Any Veritas Cluster Server (VCS) required patches for your operating environment
The EMC Information Protection Software Compatibility Guide provides the most
up-to-date software and hardware requirements.
Task 2: Define the NetWorker server as a highly available application on page 156
Task 5: Grant access to the highly available NetWorker server on page 163
Task 7: Configure clients under the NetWorker virtual server on page 165
Task 8: Register licenses for the highly available NetWorker server on page 166
3. Ensure that the PATH environment variable includes /usr/sbin and $VCS_HOME/bin.
The default directory is /opt/VRTSvcs/bin.
To install the NetWorker software on the computer designated as the NetWorker client:
1. Access the NetWorker software from the distribution media.
2. Install the NetWorker client software on the physical disk of each node in the cluster to
be backed up.
The NetWorker Installation Guide provides detailed installation instructions.
On Windows
To install the NetWorker client software on each node in the cluster:
1. Ensure that the operating system is updated with the most recent cluster patch.
2. Install the NetWorker client software on the physical disk of each node in the cluster to
be backed up.
Enter the location of the local NetWorker database directory provided during
the installation procedure. For example: /space/nsr
Do you wish to configure for both NetWorker server and client?
Yes or No [Yes]?
Enter Yes to configure the server software. This also installs the client software
by default.
Do you wish to add now the site-specific values for:
NSR_SHARED_DISK_DIR and NSR_SERVICE_ID Yes or No [Yes]?
Enter the pathname of the globally mounted /nsr directory that will contain the
configuration information for the highly available NetWorker server. For
example: /global/nw.
Enter the Logical Hostname to be used for NetWorker?
To undo any changes to the configuration, run the networker.cluster -r script and then run
the networker.cluster script again.
On Windows
1. Log in as Administrator on each node where the NetWorker software is being installed.
2. Ensure that the C:\WINDOWS\system32\drivers\etc\hosts file on each cluster node
contains the names of the virtual hosts. The virtual hostnames can be published in the
Domain Name System (DNS) or Network Information Services (NIS).
3. For each node in the cluster:
a. Run the cluster configuration binary, <NSR_BIN>\lc_config.exe.
This script creates NWClient resource types that may need to be added later (see
Task 4 (Optional): Create NetWorker Client resource instances) to the VERITAS
Cluster Server configuration.
b. In response to the following prompts:
Do you want to configure NetWorker virtual server?[y/n]
Enter y. This configures both the server software and the client software.
Enter shared nsr dir:
Enter the pathname of the shared nsr directory that will contain the
configuration information for the highly available NetWorker server. For
example: S:\nsr.
Enter the directory in which your Veritas Cluster Server software is installed
(typically something like C:\Program Files\Veritas\cluster server):
Enter the location where the Veritas Cluster Server is installed.
Any changes to the configuration can be undone by running the lc_config.exe -r option and
then running the lc_config.exe again.
)
Process NW_1 (
    Enabled = 0
    StartProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe start"
    StopProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop"
    CleanProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop_force"
    MonitorProgram = "D:\\program files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe monitor"
    UserName = "bureng\\administrator"
    Password = BHFlGHdNDpGNkNNnF
)
VMDg NWdg_1 (
    DiskGroupName = "32dg1"
)

NWip1 requires NWmount1
NWmount1 requires NWdg_1
NW_1 requires NWip1

// resource dependency tree
//
// group networker
// {
//     Process NW_1
//     {
//         IP NWip1
//         {
//             MountV NWmount1
//             {
//                 VMDg NWdg_1
//             }
//         }
//     }
// }
On Windows, you must set the CleanProgramTimeout attribute of the NetWorker
server process to a minimum value of 180, and the StopProgramTimeout attribute to a
minimum value of 120.
Contain storage resources that are not automatically detected, such as:
Storage resources that are defined in dependent groups.
Storage resources that are not of the type Mount or CFSmount.
Creating an instance of NWClient resource type is optional if the following conditions exist:
The failover VERITAS Cluster service group has only one IP type resource.
The owned filesystems on the shared devices are instances of the mount type
resource contained in the same service group.
Check the VERITAS Cluster Server configuration to determine which, if any, service groups
require one or more NWClient resources. If no such groups require NWClient resources,
proceed to Task 5: Grant access to the highly available NetWorker server on page 163.
Attribute       Definition
---------       ----------
IPAddress       string, scalar
Owned_paths     string, vector
2. Stop the VERITAS Cluster Server software on all nodes and leave the resources
available:
hastop -all -force
4. Copy the NWClient resource definition file that is in the VERITAS Cluster Server
configuration directory:
cp /etc/VRTSvcs/conf/NWClient.cf /etc/VRTSvcs/conf/config/NWClient.cf
5. Add the NWClient resource type and the NWClient resource type instances by editing
the main.cf file:
a. Add the NWClient resource type definition by adding an include statement to the
main.cf file:
include "NWClient.cf"
8. Log in on the remaining nodes in the cluster and start the VERITAS Cluster Server
engine:
hastart
10. Add an NWClient resource instance for service groups that require it.
On Windows
To register the resource type and create resource instances:
1. Save the existing VERITAS Cluster Server configuration and prevent further changes
while main.cf is modified:
haconf -dump -makero
2. Stop VERITAS Cluster Server on all nodes and leave the resources available:
hastop -all -force
4. Copy the NWClient resource definition file, which is located in the VERITAS Cluster
Server configuration directory:
cp "C:\Program Files\Veritas\cluster server\conf\NWClient.cf" "C:\Program Files\Veritas\cluster server\conf\config\NWClient.cf"
5. Add this include statement to the main.cf file and then close and save the file:
include "NWClient.cf"
8. Log on to the remaining nodes in the cluster and start the VERITAS Cluster Server
engine:
hastart
10. Add the NWClient resource instance for service groups that require it.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
b. Check the NetWorker boot-time startup file to see whether nsrexecd is being run
with the -s option.
If the -s option exists, remove all occurrences of -s servername in the file.
3. On one node in the cluster, start the NetWorker daemon by using the cluster
management software.
4. If required, grant access to the NetWorker virtual server on clients outside of the
cluster on each client:
a. Shut down the NetWorker processes.
b. Verify that all NetWorker daemons have stopped.
c. Edit or create the /nsr/res/servers file:
Add the set of NetWorker servers, one per line, that require access to this client.
Add an entry for the NetWorker logical hostname first.
Add entries for each physical host that can run the NetWorker resource group.
For example:
clus_vir1
clus_phys1
clus_phys2
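The servers file can be assembled idempotently, so rerunning the configuration never produces duplicate entries. This is a sketch only; `add_servers` is a hypothetical helper, and the file path and hostnames are the examples from above, not fixed values:

```shell
#!/bin/sh
# add_servers FILE NAME... : append each NetWorker server name to FILE,
# one per line, skipping names that are already present. Sketch only.
add_servers() {
    file="$1"; shift
    touch "$file"
    for name in "$@"; do
        grep -qx "$name" "$file" || echo "$name" >> "$file"
    done
}
# e.g. add_servers /nsr/res/servers clus_vir1 clus_phys1 clus_phys2
```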
A second link named nsr.NetWorker.local that points to the local NetWorker directory.
For example, if the local NetWorker directory was created in /var/nsr, each client member
has these links:
/nsr->/nsr.NetWorker.local
/nsr.NetWorker.local->/var/nsr
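The two-link chain above can be created and verified like this. This is a sketch under the assumptions stated in the comments: `link_nsr` is a hypothetical helper, and the base and local directory names are illustrative:

```shell
#!/bin/sh
# link_nsr BASE LOCALDIR : create BASE/nsr -> nsr.NetWorker.local -> LOCALDIR,
# mirroring the link layout described above. Sketch only; on a real client
# BASE would be / and LOCALDIR something like /var/nsr.
link_nsr() {
    base="$1"; localdir="$2"
    ln -s "$localdir" "$base/nsr.NetWorker.local"
    ln -s nsr.NetWorker.local "$base/nsr"
}
```

Because the first link is relative, `BASE/nsr` resolves through `nsr.NetWorker.local` to the local NetWorker directory.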
For save sets to restart after a NetWorker server failover, save groups must
have the Autorestart attribute enabled and the Manual Restart option disabled.
2. Make each physical client within the cluster a client of the NetWorker virtual server. For
each physical client in the cluster:
a. Create a new client.
b. Type the name of the physical client for the Name attribute.
3. Make each virtual client within the cluster a client of the virtual NetWorker server. For
each virtual client in the cluster:
a. Create a new NetWorker client.
b. For the Name attribute, type the name of the virtual client.
c. In the Remote Access attribute, add entries for each physical client within the
cluster. For example, on Solaris and Linux:
root@clus_phys1
4. Run a test probe to verify that the Client resource and the Group resource have been
properly configured.
Type this command on the node on which the NetWorker server resides:
savegrp -pv -c client_name group_name
If the test probe does not display the correct scheduled backups and index, refer to
the EMC NetWorker 8.0 Administration Guide.
4. Ensure that the highly available NetWorker server is defined as a part of the cluster.
5. Run the following command, and capture the output, on each node that is currently
running the NetWorker server resource group:
hostid
For example:
12345678:87654321
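A colon-joined composite like the example above can be assembled in one pass. This is a sketch: `join_ids` is a hypothetical helper, and the ssh invocation in the comment assumes passwordless access and the node names shown:

```shell
#!/bin/sh
# join_ids ID... : join hostid values with ":" in the order given, producing
# the composite format shown above (e.g. 12345678:87654321).
join_ids() {
    out=""
    for id in "$@"; do
        out="${out:+$out:}$id"
    done
    printf '%s\n' "$out"
}
# on a live cluster (hostnames are assumptions):
# join_ids "$(ssh node1 hostid)" "$(ssh node2 hostid)"
```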
7. Restart the server by taking the highly available NetWorker server offline and then
putting it back online.
Do not change the logical hostname for the highly available NetWorker server. If you
change it after you update the software, you must permanently license and authorize the
highly available NetWorker server.
Ensure that the NetWorker client software is installed on each node to be backed up in the
cluster.
On Windows
To install the NetWorker client software on each node in the cluster:
1. Ensure that the operating system is updated with the most recent cluster patch.
2. Install the NetWorker client software on the physical disk of each node in the cluster to
be backed up.
This script creates NWClient resource types that might need to be added later
(see Task 4 (Optional): Create NetWorker Client resource instances) to the
VERITAS Cluster Server configuration.
b. Type the appropriate information in response to the prompts:
Enter directory where local NetWorker database is installed
[/nsr]?
Type the location of the local NetWorker database directory provided
during the installation procedure. For example:
/space/nsr
Do you wish to configure for both NetWorker server and client?
Yes or No [Yes]?
Any changes to the configuration can be undone by running the networker.cluster -r
option and then running the networker.cluster script again.
On Windows
To define and configure a NetWorker client as cluster aware, on each node in the cluster:
1. Log in as administrator.
2. Ensure that the C:\WINDOWS\system32\drivers\etc\hosts file contains the names of
the virtual hosts.
The virtual hostnames can be published in the Domain Name System (DNS) or
Network Information Services (NIS).
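The check in step 2 can be automated. This sketch uses a hypothetical `check_hosts` helper; the file path and hostnames in the comment are examples only:

```shell
#!/bin/sh
# check_hosts FILE NAME... : report any virtual hostname missing from FILE.
# Returns nonzero if any name is absent. Sketch only.
check_hosts() {
    file="$1"; shift
    rc=0
    for name in "$@"; do
        grep -qw "$name" "$file" || { echo "missing from $file: $name" >&2; rc=1; }
    done
    return $rc
}
# e.g. check_hosts /c/WINDOWS/system32/drivers/etc/hosts clus_vir1 clus_phys1
```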
3. Run the cluster configuration program, NetWorker_installation_dir\lc_config.exe.
This script creates the NWClient resource type that may need to be added later (see
Task 4 (Optional): Create NetWorker Client resource instances) to the VERITAS
Cluster Server configuration.
4. Enter the appropriate information in response to the prompts:
Do you want to configure NetWorker virtual server? [y/n]
Any changes to the configuration can be undone by running the lc_config.exe -r option
and then running the lc_config.exe again.
If no servers are specified, any NetWorker server can back up this client.
If no servers are specified, any NetWorker server can perform a directed recovery to
the client.
When adding NetWorker servers, specify both the short name and FQDN for each
NetWorker server.
To define the list of trusted NetWorker servers, perform the following steps on each node
in the cluster:
2. Edit or create the /nsr/res/servers file and add the set of NetWorker servers, one per
line, that require access to this client.
3. Check the NetWorker boot-time startup file to see whether nsrexecd is being run with
the -s option. If the -s option exists, remove all occurrences. For example:
-s servername
On Windows
1. Edit or create the %SystemDrive%\Program Files\EMC NetWorker\nsr\res\servers file.
2. Add the NetWorker servers, one per line, that require access to this client.
If the test probe does not display all the scheduled save sets:
Ensure that the save sets defined for the client are owned by that client. If necessary,
redistribute the client save sets to the appropriate Client resources.
Misconfiguration of the Cluster resources might cause scheduled save sets to be
dropped from the backup. The NetWorker Administration Guide provides more
information.
On Windows:
echo NUL: > networker_bin_dir\pathownerignore
This command allows any path to be backed up for a client, whether it is owned by the
virtual client or physical node. The NetWorker Administration Guide provides more
information.
If the pathownerignore file was used, check that the NetWorker scheduled save uses the
correct client index. If the wrong index is used, the save sets can be forced to go to the
correct index:
1. From the NetWorker Administration window, select a client and edit its properties.
2. For the Backup Command attribute, type the name of a backup script that contains:
save -c client_name
The EMC NetWorker Administration Guide provides details about the Backup Command
attribute.
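A minimal wrapper of the kind step 2 describes can be generated as follows. This is a sketch, not the product's script: `make_backup_script` is a hypothetical helper, `virt_client` in the test is a placeholder client name, and the script file name must begin with "save" or "nsr" for NetWorker to accept it as a Backup Command:

```shell
#!/bin/sh
# make_backup_script PATH CLIENT : write a minimal Backup Command wrapper
# at PATH that forces save to use CLIENT's index, forwarding all arguments
# NetWorker supplies. Name PATH so its basename begins with "save" or "nsr".
make_backup_script() {
    path="$1"; client="$2"
    printf '#!/bin/sh\nexec save -c %s "$@"\n' "$client" > "$path"
    chmod +x "$path"
}
```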
A list of NetWorker daemons to be shut down appears, and you are prompted
whether to continue.
3. Uninstall the NetWorker client software from each node. The NetWorker installation
guide provides instructions.