
Procedure for Veritas Cluster Implementation

Prepared by:

1.0 Before You start

2.0 Preparing to Install VCS 3.5

2.1 Setting Up the Private Network

2.2 Using Network Switches

2.3 Disabling the Abort Sequence on SPARC Systems

2.4 Enabling Communication Between Systems

2.5 Obtaining License Keys for VCS

2.6 Starting the Installation Utility

3.0 Administering and Troubleshooting VCS

3.1 Installing a VCS License

3.2 Starting VCS

3.3 Stopping VCS

3.4 Querying VCS

3.5 Querying Resources

4.0 Basic Administrative Operations

11/26/2017

Page 1
Confidential Strictly for Internal use
1.0 Before you start
Before starting the installation, the admin workstation and the terminal
concentrator must be functional.

1) Set up all the hardware (CPUs, memory, etc.) for the Admin Workstation.
Follow the safety rules for static-charge-sensitive devices.

2) For the private network you will require two crossover cables. Now connect
one private link to qfe0 and the other to qfe1.

3) For the public network, connect one cable to hme0 and the other to ce1.

4) Connect the public Ethernet ports to the switch/hub provided for them.

5) Install the latest recommended patches, including kernel patches, for the
OS. Enable remote root login by editing /etc/default/login.
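As a sketch of that edit (the sed pattern and scratch path are illustrative, not part of the official procedure), the change amounts to commenting out the CONSOLE line:

```shell
# Hedged sketch: on Solaris, the CONSOLE=/dev/console line in
# /etc/default/login restricts root logins to the console; commenting it
# out permits remote root logins. Demonstrated on a scratch copy so it is
# safe to run anywhere -- on a real node, edit /etc/default/login itself.
LOGIN=/tmp/login.demo                       # stand-in for /etc/default/login
printf 'CONSOLE=/dev/console\n' > "$LOGIN"  # sample content
sed 's|^CONSOLE=|#CONSOLE=|' "$LOGIN" > "$LOGIN.new"
grep '^#CONSOLE=' "$LOGIN.new"
```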

This completes the Admin Workstation configuration. The next steps configure
the terminal concentrator.

Terminal Concentrator Equipment Configuration:

Connect the DB25 end of the TCE cable to the serial port of the Admin
Workstation and the RJ45 end to the TCE.
Power on the TCE.
On the Ultra5 workstation type:
# tip -9600 /dev/ttya
Press the Test button on the TCE; a monitor: prompt appears. Make the
entries given below.
monitor:: addr
Enter Internet address: 10.100.1.2 (IP address of the TCE)
Subnet mask: 255.255.255.0
Load host Internet address: 0.0.0.0 (not applicable)
Broadcast address: 0.0.0.0
Select type of IP packet encapsulation: ethernet
Load Broadcast [N]: (press Enter)
monitor:: seq
Enter interface sequence: self
monitor:: image
Image name: (press Enter)
TFTP load directory: (press Enter)
TFTP dump path/filename: (press Enter)
monitor:: ~. (type ~. to exit the tip session)
Power the TCE off and on again.

On the Admin Workstation:
# telnet 10.1.200.2 (the TCE)
Enter the following as shown below.
Enter Annex port: cli (type cli to enter the menu)
annex: su
Password: (same as the IP address, i.e. 10.1.200.2)
annex# admin
admin: set port = all mode slave
admin: set port = all type dial-in
admin: set port = all imask_7bits y
admin: set port = all input_flow_control start/stop
admin: set port = all output_flow_control none
admin: reset annex all
admin: reset 1-8
admin: quit
annex# boot (reboots the TCE; its configuration is now complete)

6) Boot both nodes, and from the AWS run:

# telnet tc 5002 (console of node 0)
# telnet tc 5003 (console of node 1)

Alternatively, run # telnet tc without a port number; the TC then prompts
for the port to which it should establish a connection.

Now we are all set to access the servers.

Power on both cluster nodes, and access their consoles by typing
# telnet tc <specific_port_no> on the AWS terminal. Once the system
controller completes its POST, it presents a menu.

The POST covers the system boards, the other controller boards, and the I/O
boards, and then takes you to the OBP (ok) prompt.

Here ends the first stage of the installation.

2 Veritas Cluster 3.5 Installation and Configuration

2.0 Preparing to Install VCS 3.5


1. $ PATH=/sbin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvcs/bin:$PATH; export PATH

2.1 Setting Up the Private Network


1. Install the required Ethernet network interface cards.

2. Connect the VCS private Ethernet controllers on each system. Use crossover
Ethernet cables (supported only between two systems), or independent hubs, for
each VCS communication network. Ensure hubs are powered from separate sources.
On each system, use two independent network cards to provide redundancy.
While setting up heartbeat connections, note that a chance for data corruption
exists if a failure removes all communication between the systems yet leaves
them running and able to access shared storage.

(Figure: private network setups for two-node and four-node clusters)

3. Configure the Ethernet devices used for the private network so that the
auto-negotiation protocol is not used. This helps ensure a more stable
configuration with crossover cables.

You can do this in one of two ways: edit the /etc/system file to disable
auto-negotiation on all Ethernet devices system-wide, or create a qfe.conf
file in the /kernel/drv directory to disable auto-negotiation for the
individual devices used for the private network. Refer to the Sun Ethernet
driver product documentation for information on these methods of configuring
device driver parameters.
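As an illustration only (parameter names per the Sun qfe driver documentation; verify them against your driver release before use), a /kernel/drv/qfe.conf entry that disables auto-negotiation and forces 100 Mbit full duplex might look like:

```
# Hypothetical /kernel/drv/qfe.conf fragment -- assumed parameter names;
# check against your qfe driver version before deploying:
adv_autoneg_cap=0 adv_100fdx_cap=1 adv_100hdx_cap=0
adv_10fdx_cap=0 adv_10hdx_cap=0;
```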

4. Test the network connections by temporarily assigning network addresses,
and use telnet or ping to verify communications.

LLT uses its own protocol and does not use TCP/IP. Therefore, to ensure that
the private network connections are used only for LLT communication and not
for TCP/IP traffic, unplumb and unconfigure the temporary addresses after
testing.
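The test-and-cleanup cycle might look like this on node 0 (Solaris ifconfig syntax; the 192.168.10.x test subnet is an assumption, substitute your own):

```
# Run as root on node 0; node 1 mirrors this with 192.168.10.2.
ifconfig qfe0 plumb
ifconfig qfe0 192.168.10.1 netmask 255.255.255.0 up
ping 192.168.10.2                 # verify the peer answers
ifconfig qfe0 down
ifconfig qfe0 unplumb             # leave the NIC unconfigured for LLT
```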

2.2 Using Network Switches

Network switches may be used in place of hubs. However, by default, Sun
systems assign the same MAC address to all interfaces. Thus, connecting two or
more interfaces to a network switch can cause problems. For example, if IP is
configured on one interface and LLT on another, and both interfaces are
connected to a switch (assuming separate VLANs), the duplicate MAC address on
the two switch ports can cause the switch to incorrectly redirect IP traffic
to the LLT interface and vice versa. To avoid this, configure the system to
assign unique MAC addresses by setting the eeprom(1M) parameter
local-mac-address? to true.
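On each node this is a one-line change, persisted in NVRAM (standard Sun practice; shown in both the shell and OBP forms):

```
# As root from a shell:
eeprom "local-mac-address?=true"
# Or from the OBP ok prompt:
#   setenv local-mac-address? true
# Verify the setting:
eeprom local-mac-address?
```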

2.3 Disabling the Abort Sequence on SPARC Systems

Sun SPARC systems provide the following console-abort sequences that enable
you to halt and continue the processor:

press L1-A or STOP-A on the keyboard, or
press BREAK on the serial console input device.

Each can then be followed by a response of go at the ok prompt to let the
system continue.

VCS does not support continuing operations after the processor has been
stopped by the abort sequence, because data corruption may result.
Specifically, when a system is halted with the abort sequence it stops
producing heartbeats. The other systems in the cluster then consider the
system failed and take over its services. If the system is later resumed with
go, it continues writing to shared storage as before, even though its
applications have been restarted on other systems.

In Solaris 2.6, Sun introduced support for disabling the abort sequence. We
recommend disabling the keyboard-abort sequence on systems running Solaris 2.6
or greater. To do this:
1. Add the following line to the /etc/default/kbd file (create the file if it
does not exist):
KEYBOARD_ABORT=disable
2. Reboot.
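The two steps can be scripted; the sketch below works on a scratch file so it can be tried anywhere (on a real node, point KBD at /etc/default/kbd and reboot afterwards):

```shell
# Hedged sketch: idempotently ensure KEYBOARD_ABORT=disable is present,
# creating the file if it does not exist.
KBD=/tmp/kbd.demo                  # stand-in for /etc/default/kbd
touch "$KBD"
grep -q '^KEYBOARD_ABORT=' "$KBD" || echo 'KEYBOARD_ABORT=disable' >> "$KBD"
grep '^KEYBOARD_ABORT' "$KBD"
```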

2.4 Enabling Communication Between Systems

When VCS is installed using the installvcs utility, communication between
systems is required to install and configure the entire cluster at one time.
Permissions must be granted for the system on which installvcs is run to issue
ssh or rsh commands as root on all systems in the cluster. If ssh is used to
communicate between systems, it must be configured so that it operates without
requests for passwords or passphrases.
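For rsh, the usual installation-time arrangement (insecure; remove once installvcs completes) is a /.rhosts file on every node listing each cluster node. Using this document's example host names:

```
# Hypothetical /.rhosts on both INBLRPPS140 and INBLRPPS141 --
# grants passwordless root rsh between the nodes:
INBLRPPS140 root
INBLRPPS141 root
```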

2.5 Obtaining License Keys for VCS

VCS is a licensed software product. The installvcs utility prompts you for a
license key for each system. You cannot use your VERITAS software product
until you have completed the licensing process. Use either method described in
the following two sections to obtain a valid license key.

Using the VERITAS vLicenseTM Web Site to Obtain License Key


You can obtain your license key most efficiently through the VERITAS vLicense
web site. The License Key Request Form has all the information needed to
establish a user account on vLicense and generate your license key. The form
is a one-page insert included with the CD in your product package. You must
have this form to obtain a software license key for your VERITAS product.

Note: Do not discard the License Key Request Form. If you have lost or do not
have the form for any reason, email license@veritas.com.

Follow the appropriate instructions on the vLicense web site to obtain your
license key, depending on whether you are a new or returning user of vLicense:
1. Access the web site at http://vlicense.veritas.com.
2. Log in or create a new login, as necessary.
3. Follow the instructions on the pages as they are displayed.
When you receive the generated license key, you can proceed with installation.

Patches Required for Java Run Time Environment from Sun


The GUI modules for VCS use the Java Run Time Environment from Sun
Microsystems. You need to obtain and install the latest Solaris patches to
enable the modules to function properly. You can obtain the patches from:
http://java.sun.com/j2se/1.4.2/download.html

2.6 Starting the Installation Utility

1. Log in as root on a system connected by the network to the systems where
VCS is to be installed. The system from which VCS is installed need not be
part of the cluster.

2. Insert the CD with the VCS software into a drive connected to the system.
- If you are running Solaris volume-management software, it automatically
mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must mount
the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
where, in this example, /dev/dsk/c0t6d0s2 is the device name of the CD drive.
# cd /cdrom/cluster_server

3. Start the VCS installation utility by entering:
# ./installvcs

4. The utility starts by prompting for the names of the systems in the cluster:
Enter the names of the systems on which VCS is to be installed
separated by spaces (example: system1 system2):
INBLRPPS140 INBLRPPS141

Verifying System Communication

5. The utility verifies that the systems you specify can communicate via ssh
or rsh. If ssh binaries are found, the program confirms that ssh is set up to
operate without requests for passwords or passphrases.
Checking for ssh on north ........................ .. not found
Checking OS version on north ........................ SunOS 5.8
Verifying communication with south ............ ping successful
Attempting rsh with south ...................... rsh successful
Checking OS version on south ........................ SunOS 5.8
Creating /tmp subdirectory on south.. /tmp subdirectory created
Using /usr/bin/rsh and /usr/bin/rcp to communicate with south
Communication check completed successfully

Licensing VCS
6. The installation utility verifies the license status of each system. If a
VCS license is found on the system, you are prompted whether to use that
license or enter a new one. If no license is found, or if you would like to
enter a new license, enter a license key when prompted.
VCS licensing verification:
Checking INBLRPPS140 ........................ No license key found
Enter the license key for INBLRPPS140: XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Valid VCS Single Node Permanent key entered for INBLRPPS140
Registering key XXXX-XXXX-XXXX-XXXX-XXXX-XXX on INBLRPPS140 .. Done
Checking INBLRPPS141 ........................ No license key found
Enter the license key for INBLRPPS141: XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Valid VCS Single Node Permanent key entered for INBLRPPS141
Registering key XXXX-XXXX-XXXX-XXXX-XXXX-XXX on INBLRPPS141 .. Done
Proceed when VCS licensing completes successfully.

Starting Installation

8. When prompted, you can begin the installation of the software.
Are you ready to start the Cluster installation now? (Y)

9. For each system, the installation program checks whether any of the
packages to be installed are already present.
Checking current installation on INBLRPPS140:
Checking VRTSvcsw ..........................Not installed
Checking VRTSweb ...........................Not installed

Checking current installation on INBLRPPS141:


Checking VRTSvcsw ..........................Not installed

Installation check completed successfully

10. For each system, the installation utility checks for the necessary file
system space. If the required space is not available, the installation
terminates. The utility checks for the existence of separate volumes or
partitions for /opt and /usr. If they do not exist, or if the space in them is
insufficient, the utility checks for space in /, using the space there if it
is available.

Checking file system space on INBLRPPS140:


Checking /opt ................ No /opt volume or partition
Checking /usr ................ No /usr volume or partition
Checking /var ................... required space available
Checking / ...................... required space available

Checking file system space on INBLRPPS141:


Checking /tmp.................... required space available
Checking /opt ................ No /opt volume or partition
Checking /usr ................ No /usr volume or partition
Checking /var ................... required space available
Checking / ...................... required space available
File system verification completed successfully

11. For each system, the installation utility checks that VCS processes are
not running. Any running VCS processes are stopped or killed by installvcs.
Note: If you are running VxFS version 3.4 and have not installed VxFS 3.4
Patch 2, GAB or LLT may not stop properly, and you may have to reboot your
system to install VCS successfully.
Stopping VCS processes on INBLRPPS140:
Checking VCS .................................... not running
Checking hashadow ............................... not running
Checking CmdServer .............................. not running
Checking Web GUI processes....................... not running
Checking notifier processes ..................... not running
Checking GAB .................................... not running
Checking LLT .................................... not running
Stopping VCS processes on INBLRPPS141:
Checking VCS .................................... not running
Checking hashadow ............................... not running
Checking CmdServer .............................. not running
Checking Web GUI processes....................... not running
Checking notifier processes ..................... not running
Checking GAB .................................... not running
Checking LLT .................................... not running
VCS processes are stopped

Configuring the Cluster

12. The program lists the information required to start installation:
To configure VCS the following is required:
A unique cluster name
A unique cluster ID number between 0 and 255
Two NIC cards on each system, used for the private network links
Are you ready to configure VCS on these systems? (Y)
Enter Y to start installing and configuring VCS on the systems. If you enter
N, the VCS packages are installed on the systems without configuration.

13. The installation program prompts you for the cluster name and identification:
Enter the unique Cluster Name: vcs_cluster2
Enter the unique Cluster ID number between 0-255: 7

14. After you enter the cluster ID number, the program discovers and lists
all NICs on the first system:
Discovering NICs on north: ..... discovered hme0 qfe0 qfe1 qfe2 qfe3
Enter the NIC for the first private network heartbeat link on
north: (hme0 qfe0 qfe1 qfe2 qfe3) qfe0
Enter the NIC for the second private network heartbeat link on
north: (hme0 qfe1 qfe2 qfe3) qfe1
Are you using the same NICs for private heartbeat links on all
systems? (Y)
If you answer N, the program prompts for the NICs of each system.

15. The installation program asks you to verify the cluster information:
Cluster information verification:
Cluster Name: vcs_cluster2
Cluster ID Number: 7
Private Network Heartbeat Links for north: link1=qfe0 link2=qfe1
Private Network Heartbeat Links for south: link1=qfe0 link2=qfe1
Is this information correct? (Y)
If you enter N, the program returns to the screen describing the installation
requirements (step 12).
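For reference, the heartbeat answers above are written to /etc/llttab on each node. A plausible generated file for node north in cluster 7 would be (a sketch only; verify against the file installvcs actually writes):

```
# Hypothetical /etc/llttab for node "north" in cluster ID 7:
set-node north
set-cluster 7
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
```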

Installing VCS

26. After you have verified that the information you entered is correct, the
installation program begins installing the packages on the first system:
Installing VCS on INBLRPPS140:
Installing VRTSvlic package ........................... Done
Installing VRTSperl package ........................... Done
Installing VRTSllt package ............................ Done
Installing VRTSgab package ............................ Done
Installing VRTSvcs package ............................ Done
Installing VRTSvcsmg package .......................... Done
Installing VRTSvcsag package .......................... Done
Installing VRTSvcsdc package .......................... Done
Installing VRTSvcsmn package .......................... Done
Installing VRTSweb package ............................ Done
Installing VRTSvcsw package ........................... Done
The same packages are installed on each machine in the cluster:
Installing VCS on INBLRPPS141:

Copying VRTSvlic binaries ............................. Done
Installing VRTSvlic package ........................... Done
Copying VRTSperl binaries ............................. Done
Installing VRTSperl package ........................... Done
.
.
Package installation completed successfully

Configuring VCS
27. The installation program continues by creating configuration files and
copying them to each system:
Configuring VCS ...................................... Done
Copying VCS configuration files to north.............. Done
Copying VCS configuration files to south.............. Done
Configuration files copied successfully

Starting VCS
28. You can now start VCS and its components on each system:
Do you want to start the cluster components now? (Y)
Starting VCS on INBLRPPS140
Starting LLT ...................................... Started
Starting GAB ...................................... Started
Starting VCS ...................................... Started
Starting VCS on INBLRPPS141
Starting LLT ...................................... Started
Starting GAB ...................................... Started
Starting VCS ...................................... Started
VCS Startup completed successfully

29. When the VCS installation completes, the installation program reports on:
- The backup configuration files, named with the extension
init.name_of_cluster. For example, the file /etc/llttab in the installation
example has a backup file named /etc/llttab.init.vcs_cluster2.
- The URL and the login information for Cluster Manager (Web Console), if you
chose to configure it. Typically this resembles http://10.180.88.199:8181/vcs.
You can access the Web Console using the user name admin and the password
password. Use the /opt/VRTSvcs/bin/hauser command to add new users.
- The location of a VCS installation report, which is named
/var/VRTSvcs/installvcsReport.name_of_cluster. For example:
/var/VRTSvcs/installvcsReport.vcs_cluster2
- Additional instructions that may be necessary to complete startup or
installation on other systems.

VCS installation completed successfully

Once the installation of Veritas Cluster is complete, the service groups are
created per the requirement, as given below.

3.0 Administering and Troubleshooting VCS

3.1 Installing a VCS License

The halic utility installs a new permanent license, or updates a demo
license, while HAD is running. You must have root privileges to use this
utility, and it must be run on each system in the cluster: it cannot install
or update a license on remote nodes.

_ To install a new license
# halic key

The variable key represents the license key to be installed on the local
system.

Note: The halic utility must be run on each system in the cluster.

3.2 Starting VCS


The command to start VCS is invoked from the file /etc/rc3.d/S99vcs or
/sbin/rc3.d/S99vcs. When VCS is started on a system, it checks the state of
its local configuration file and registers with GAB for cluster membership.
If the local configuration is valid, and if no other system is running VCS,
it builds its state from the local configuration file and enters the RUNNING
state.

Type the following command to start VCS:
# hastart [-stale|-force]

Note that -stale and -force are optional. The -stale option instructs the
engine to treat the local configuration as stale even if it is valid. The
-force option instructs the engine to treat a stale, but otherwise valid,
local configuration as valid.

If all systems are in ADMIN_WAIT, enter the following command from any system
in the cluster to force VCS to use the configuration file from the system
specified by the variable system:
# hasys -force system

Starting VCS on a Single Node


Type the following command to start an instance of VCS that does not require
the GAB and LLT packages. Do not use this command on a multisystem cluster.
# hastart -onenode

3.3 Stopping VCS


The hastop command stops HAD and related processes. This command includes the
following options:
hastop -local [-force | -evacuate]
hastop -sys system [-force | -evacuate]
hastop -all [-force]

Stopping VCS Without the -force Option

When VCS is stopped on a system without using the -force option to hastop, it
enters the LEAVING state and waits for all groups to go offline on the
system. Use the output of the command hasys -display system to verify that
the values of the SysState and OnGrpCnt attributes are non-zero. VCS
continues to wait for the service groups to go offline before it shuts down.
See Troubleshooting Resources on page 349 for more information.

Stopping VCS with Options Other Than -force

When VCS is stopped with options other than -force on a system with online
service groups, the groups running on the system are taken offline and remain
offline. VCS indicates this by setting the attribute IntentOnline to 0. Using
the -force option enables service groups to continue running while HAD is
brought down and restarted (IntentOnline remains unchanged).

3.4 Querying VCS

VCS enables you to query various cluster objects, including resources,
service groups, systems, resource types, agents, and clusters. You may enter
query commands from any system in the cluster. Commands that display
information on the VCS configuration or system states can be executed by all
users: you do not need root privileges.

Querying Service Groups

_ To display the state of a service group on a system
# hagrp -state [service_group] -sys [system]

_ For a list of a service group's resources
# hagrp -resources [service_group]

_ For a list of a service group's dependencies
# hagrp -dep [service_group]

_ To display a service group on a system
# hagrp -display [service_group] -sys [system]
If service_group is not specified, information regarding all service groups
is displayed.

_ To display the attributes of a system
# hagrp -display [service_group] [-attribute attribute] [-sys system]

Note: System names are case-sensitive.

3.5 Querying Resources


_ For a list of a resource's dependencies
# hares -dep [resource]

_ For information on a resource
# hares -display [resource]
If resource is not specified, information regarding all resources is
displayed.

_ To display details of an attribute
# hares -display -attribute [attribute]

_ To confirm that an attribute's values are the same on all systems
# hares -global resource attribute value ... | key ... | {key value} ...

_ To display the resources of a service group
# hares -display -group [service_group]

_ To display the resources of a resource type
# hares -display -type [resource_type]

_ To display the attributes of a system
# hares -display -sys [system]

Querying Resource Types


_ For a list of resource types
# hatype -list

_ For a list of all resources of a particular type
# hatype -resources resource_type

_ For information about a resource type
# hatype -display [resource_type]
If resource_type is not specified, information regarding all types is
displayed.

Querying Resource Type Agents

_ For an agent's run-time status
# haagent -display [agent]
If agent is not specified, information regarding all agents is displayed.

Run-time status definitions:
Faults - the number of agent faults and the time the faults began.
Messages - various messages regarding agent status.
Running - the agent is operating.
Started - the file is executed by the VCS engine (HAD).

Querying Systems

_ For a list of systems in the cluster
# hasys -list

_ For information about each system
# hasys -display [system]

Querying Clusters

_ For the value of a specific cluster attribute
# haclus -value attribute

_ For information about the cluster
# haclus -display

Querying Status

_ For the status of all service groups in the cluster, including resources
# hastatus

_ For the status of a particular service group, including its resources
# hastatus -group service_group [-group service_group] ...

_ To display, in tabular format, the status of all systems and of a specific
group and its resources (or of all service groups if no group is specified)
# hastatus [-sound] [-group service_group]
The -sound option enables a bell to ring each time a resource faults.

_ For the status of cluster faults, including faulted service groups,
resources, systems, links, and agents
# hastatus -summary

Note: Unless executed with the -summary option, hastatus continues to produce
output on state transitions until you interrupt it with CTRL+C.

4.0 Basic Administrative Operations


Administering Service Groups

_ To start a service group and bring its resources online
# hagrp -online service_group -sys system

_ To start a service group on a system (System 1) and bring online only the
resources already online on another system (System 2)
# hagrp -online service_group -sys system -checkpartial other_system
If the service group does not have resources online on the other system, the
service group is brought online on the original system and the -checkpartial
option is ignored.
Note that the -checkpartial option is used by the Preonline trigger during
failover. When a service group configured with Preonline = 1 fails on one
system (system 1) and fails over to another system (system 2), the only
resources brought online on system 2 are those that were online on system 1
prior to the failover.

_ To stop a service group and take its resources offline
# hagrp -offline service_group -sys system

_ To stop a service group only if all resources are probed on the system
# hagrp -offline [-ifprobed] service_group -sys system

_ To switch a service group from one system to another
# hagrp -switch service_group -to system
The -switch option is valid for failover groups only. A service group can be
switched only if it is fully or partially online.

_ To freeze a service group (disable onlining, offlining, and failover)
# hagrp -freeze service_group [-persistent]
The -persistent option enables the freeze to be remembered when the cluster
is rebooted.

_ To thaw a service group (re-enable onlining, offlining, and failover)
# hagrp -unfreeze service_group [-persistent]

_ To enable a service group
# hagrp -enable service_group [-sys system]
A group can be brought online only if it is enabled.

_ To disable a service group
# hagrp -disable service_group [-sys system]
A group cannot be brought online or switched if it is disabled.

_ To enable all resources in a service group
# hagrp -enableresources service_group

_ To disable all resources in a service group
# hagrp -disableresources service_group
Agents do not monitor group resources if the resources are disabled.

_ To clear faulted, non-persistent resources in a service group
# hagrp -clear service_group [-sys system]
Clearing a resource automatically initiates the online process previously
blocked while waiting for the resource to become clear.
- If system is specified, all faulted, non-persistent resources are cleared
from that system only.
- If system is not specified, the service group is cleared on all systems in
the group's SystemList on which at least one non-persistent resource has
faulted.

Administering Resources

_ To bring a resource online
# hares -online resource -sys system

_ To take a resource offline
# hares -offline [-ignoreparent] resource -sys system
The -ignoreparent option enables a resource to be taken offline even if its
parent resources in the service group are online. This option does not work
for resources in a child group with firm dependencies when the parent group
is online. When a resource with some online parent resources is taken offline
using the -ignoreparent option, the offlined resource is brought online again
if its service group fails over or is switched to another system.
_ To take a resource offline and propagate the command to its children
# hares -offprop [-ignoreparent] resource -sys system
Similar to the service group -offline command, this command signals that its
children should be taken offline. This action continues to the leaves of the
resources
subtree. The option -ignoreparent enables a resource to be taken offline even if its
parent resources in the service group are online. This option does not work for
resources in a child group with firm dependencies when the parent group is
online.
When a resource with some online parent resources is taken offline using the
-ignoreparent option, the offlined resource is brought online again if its service
group fails over or is switched to another system.

_ To prompt a resources agent to immediately monitor the resource on a


particular
system
# hares -probe resource -sys system
Though the command may return immediately, the monitoring process may not
be
completed by the time the command returns.
Basic Administrative Operations
102 VERITAS Cluster Server Users Guide, 3.5

_ To clear a resource
Initiate a state change from RESOURCE_FAULTED to RESOURCE_OFFLINE:
# hares -clear resource [-sys system]
Clearing a resource automatically initiates the online process previously blocked
while waiting for the resource to become clear. If system is not specified, the fault
is cleared on each system in the service group's SystemList attribute. (See also the
service group command for clearing faulted, non-persistent resources,
hagrp -clear.)
This command clears the resource's parents automatically. Persistent resources
whose static attribute Operations is defined as None cannot be cleared with this
command and must be physically attended to, for example by replacing a raw
disk. The agent then updates the status automatically.
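A minimal sketch, reusing the hypothetical IP1/SystemA names: after the underlying fault is repaired, clear the resource so its service group can come online again.

```shell
# A faulted resource blocks its service group on that system; once the
# underlying problem is fixed, clear the fault on SystemA only
# (resource and system names are hypothetical):
hares -clear IP1 -sys SystemA

# Or omit -sys to clear the fault on every system in the group's SystemList:
hares -clear IP1
```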

Administering Systems

_ To force a system to start while in ADMIN_WAIT
# hasys -force system
This command overwrites the configuration on systems running in the cluster.
Before using it, verify that the current VCS configuration is valid.

_ To modify a system's attributes
# hasys -modify modify_options
Some attributes are internal to VCS and cannot be modified. For details on system
attributes, see The -modify Option on page 104.

_ To display the value of a system's node ID as defined in the file /etc/llttab
# hasys -nodeid node_ID

_ To freeze a system (prevent groups from being brought online or switched on
the system)
# hasys -freeze [-persistent] [-evacuate] system
The -persistent option enables the freeze to be remembered when the cluster is
rebooted. Note that the cluster configuration must be in read/write mode and
must be saved to disk (dumped) for the freeze to be remembered.
The -evacuate option fails over the system's active service groups to another
system in the cluster before the freeze is enabled.
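The options above combine naturally in a maintenance window. A sketch, assuming a hypothetical system SystemA and that the configuration is currently read-only:

```shell
# Freeze SystemA for maintenance: evacuate its active service groups first,
# and keep the freeze across reboots (system name is hypothetical).
haconf -makerw                        # freeze persistence needs a writable config
hasys -freeze -persistent -evacuate SystemA
haconf -dump -makero                  # dump to disk so the freeze survives a reboot

# ... perform maintenance on SystemA ...

haconf -makerw
hasys -unfreeze -persistent SystemA   # re-enable onlining and switching
haconf -dump -makero
```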

_ To thaw (unfreeze) a frozen system (re-enable onlining and switching of
service groups)
# hasys -unfreeze [-persistent] system

Administering Clusters
_ To modify a cluster attribute
# haclus -modify attribute value
(Run haclus -help -modify to display the full syntax for modifiable cluster
attributes.)

Basic Configuration Operations
Commands listed in the following sections permanently affect the configuration
of the cluster. If the cluster is brought down with the command hastop -all or
made read-only, the main.cf file and other configuration files written to disk
reflect the updates.

Specifying Values Preceded by a Dash (-)
When specifying values in a command-line syntax, you must prefix values
beginning with a dash (-) with a percent sign (%). If a value begins with a percent
sign, you must prefix it with another percent sign. (The initial percent sign is
stripped by HAD and does not appear in the configuration file.)
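For illustration, assuming a hypothetical resource resourceA with a hypothetical string attribute Options, the escaping rule looks like this:

```shell
# Pass the literal value "-y": without the % prefix the CLI would parse
# -y as an option. HAD strips the leading % before storing the value,
# so the configuration file records just "-y".
# (resourceA and Options are hypothetical names.)
hares -modify resourceA Options %-y

# A value that itself begins with "%" needs a second "%":
hares -modify resourceA Options %%special
```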

The -modify Option
Most configuration changes are made using the -modify options of the commands
haclus, hagrp, hares, hasys, and hatype. Specifically, the -modify option of these
commands changes the attribute values stored in the VCS configuration file. By
default, all attributes are global, meaning that the value of the attribute is the
same for all systems.
Note VCS must be in read/write mode before you can change the configuration.
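A typical edit session therefore brackets the -modify commands with haconf, as in this sketch (groupx is a hypothetical service group):

```shell
# Open the configuration for writing, make the change, then save and
# return to read-only mode:
haconf -makerw
hagrp -modify groupx FailOverPolicy Priority   # groupx is hypothetical
haconf -dump -makero                           # write main.cf to disk, close for writing
```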

Adding Service Groups
_ To add a service group to your cluster
# hagrp -add service_group
The variable service_group must be unique among all service groups defined in
the cluster.
This command initializes a service group that is ready to contain various
resources. To employ the group properly, you must populate its SystemList
attribute to define the systems on which the group may be brought online and
taken offline. (A system list is an association of names and integers that represent
priority values.)

Modifying Service Group Attributes
_ To modify a service group attribute
# hagrp -modify service_group attribute value
The variable value represents:
system_name1 priority1 system_name2 priority2
During a failover (with the attribute FailOverPolicy set to Priority), faulted
applications fail over to the system with the lowest number designated in the
SystemList association. Populating the system list is a way to give hints to HAD
regarding which machine in a balanced cluster is best equipped to handle a
failover.
For example, to populate the system list of service group groupx with Systems A
and B, type:
# hagrp -modify groupx SystemList -add SystemA 1 SystemB 2
Similarly, to populate the AutoStartList attribute of a service group, type:
# hagrp -modify groupx AutoStartList SystemA SystemB

You may also define a service group as parallel. To set the Parallel attribute to 1,
type the following command. (Note that the default for this attribute is 0, which
designates the service group as a failover group.)
# hagrp -modify groupx Parallel 1
This attribute cannot be modified if resources have already been added to the
service group.

Additional Considerations for Modifying Service Group Attributes
You can modify the attributes SystemList, AutoStartList, and Parallel only by
using the command hagrp -modify. You cannot modify attributes created by the
system, such as the state of the service group. If you are modifying a service
group from the command line, the VCS server immediately updates the
configuration of the group's resources accordingly.
For example, suppose you originally defined the SystemList of service group
groupx as SystemA and SystemB. Then, after the cluster was brought up, you
added a new system to the list:
# hagrp -modify groupx SystemList -add SystemC 3
The SystemList for groupx changes to SystemA, SystemB, SystemC, and an entry
for SystemC is created in the group's resource attributes, which are stored on a
per-system basis. These attributes include information regarding the state of the
resource on a particular system.
Next, suppose you made the following modification:
# hagrp -modify groupx SystemList SystemA 1 SystemC 3 SystemD 4
Using the -modify option without other options erases the existing data and
replaces it with new data. Therefore, after the change above, the new SystemList
becomes SystemA=1, SystemC=3, SystemD=4. SystemB is deleted from the system
list, and each entry for SystemB in local attributes is removed.
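To change part of the list without this erase-and-replace behavior, the keyed options can be used instead. A sketch, continuing the groupx example from the original SystemList:

```shell
# Change SystemA's priority in place rather than replacing the whole list:
hagrp -modify groupx SystemList -update SystemA 2

# Remove a single system from the list without touching the others:
hagrp -modify groupx SystemList -delete SystemB
```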

More About Modifying the SystemList Attribute
You can modify the SystemList attribute only with the options -modify, -add,
-update, -delete, or -delete -keys. If you modify SystemList using the command
hagrp -modify without other options (such as -add or -update), the service groups
must first be taken offline on the systems being modified. The modification fails
if a service group is not completely offline.
If you modify SystemList using the command hagrp -modify with the options
-delete or -delete -keys, any system to be deleted that is not offline is not
removed, but deleting or modifying the offline systems proceeds normally.
If you modify SystemList to add a system that has not been defined by the
command hasys -add, the system is not added, but adding other valid systems
proceeds normally.

Adding Resources
_ To add a resource
# hares -add resource resource_type service_group
This command creates a new resource, resource, whose name must be unique
throughout the cluster, regardless of where it resides physically or in which
service group it is placed. The resource type is resource_type, which must be
defined in the configuration language. The resource belongs to the group
service_group.
When new resources are created, all non-static attributes of the resource's type,
plus their default values, are copied to the new resource. Three attributes are also
created by the system and added to the resource:
_ Critical (default = 1). If the resource or any of its children faults while online,
the entire service group is marked faulted and failover occurs.
_ AutoStart (default = 1). If the resource is set to AutoStart, it is brought online in
response to a service group command. All resources designated as AutoStart=1
must be online for the service group to be considered online. (This attribute is
unrelated to the AutoStart attribute for service groups.)
_ Enabled. If the resource is set to Enabled, the agent for the resource's type
manages the resource. The default is 1 for resources defined in the configuration
file main.cf, and 0 for resources added on the command line.
Note Adding resources on the command line requires several steps, and the agent
must be prevented from managing the resource until the steps are completed. For
resources defined in the configuration file, the steps are completed before the
agent is started.
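The steps the note refers to can be sketched as follows. This assumes a hypothetical IP resource IP1 in group groupx; the attribute values shown (Device, Address) follow the IP agent, but the specific device and address are placeholders. Enabled defaults to 0 for command-line resources, so the agent stays away until the final step:

```shell
haconf -makerw
hares -add IP1 IP groupx               # created with Enabled=0: agent not yet managing it
hares -modify IP1 Device hme0          # hypothetical interface
hares -modify IP1 Address 10.100.1.50  # hypothetical address
hares -modify IP1 Enabled 1            # only now does the agent begin managing IP1
haconf -dump -makero
```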
Modifying Attributes of a New Resource
_ To modify a new resource
# hares -modify resource attribute value
The variable value depends on the type of attribute being created.
_ To set a new resource's Enabled attribute to 1
# hares -modify resourceA Enabled 1
The resource's agent is started on a system when its Enabled attribute is set to 1
on that system. Specifically, the VCS engine begins to monitor the resource for
faults. Agent monitoring is disabled if the Enabled attribute is reset to 0.

Additional Considerations for Modifying Attributes
Resource names must be unique throughout the cluster, and you cannot modify
resource attributes defined by the system, such as the resource state.

Linking Resources
_ To specify a dependency relationship, or link, between two resources
# hares -link parent_resource child_resource
The variable parent_resource depends on child_resource being online before
going online itself. Conversely, parent_resource must take itself offline before
child_resource goes offline.
For example, before an IP address can be configured, its associated NIC must be
available, so for resources IP1 of type IP and NIC1 of type NIC, specify the
dependency as:
# hares -link IP1 NIC1

Additional Considerations for Linking Resources
A resource can have an unlimited number of parents and children. When linking
resources, the parent cannot be a resource whose Operations attribute is equal to
None or OnOnly. Specifically, these are resources that cannot be brought online or
taken offline by an agent (None), or can only be brought online by an agent
(OnOnly).
Loop cycles are automatically prohibited by the VCS engine. You cannot specify a
resource link between resources of different service groups.

Deleting and Unlinking Service Groups and Resources
_ To delete a service group
# hagrp -delete service_group
_ To unlink service groups
# hagrp -unlink parent_group child_group
_ To delete a resource
# hares -delete resource
Note that deleting a resource does not take offline the object being monitored by
the resource. The object remains online, outside the control and monitoring of
VCS.
_ To unlink resources
# hares -unlink parent_resource child_resource
Note You can unlink service groups and resources at any time. You cannot delete
a service group until all of its resources are deleted.
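Because a group cannot be deleted while it still contains resources, a teardown runs leaf-to-root. A sketch, reusing the hypothetical groupx/IP1/NIC1 names from the linking example:

```shell
hagrp -offline groupx -sys SystemA   # take the group offline first (SystemA hypothetical)
haconf -makerw
hares -unlink IP1 NIC1               # remove the dependency
hares -delete IP1                    # delete the resources...
hares -delete NIC1
hagrp -delete groupx                 # ...then the now-empty group
haconf -dump -makero
```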

Adding, Deleting, and Modifying Resource Types
After creating a resource type, use the command haattr to add its attributes. By
default, resource type information is stored in the types.cf configuration file.
_ To add a resource type
# hatype -add resource_type
_ To delete a resource type
# hatype -delete resource_type
You must delete all resources of the type before deleting the resource type.
_ To add or modify resource types in main.cf without shutting down VCS
# hatype -modify resource_type SourceFile "./resource_type.cf"
The information regarding resource_type is stored in the file
config/resource_type.cf, and an include line for resource_type.cf is added to the
main.cf file.

_ To set the value of static resource type attributes
# hatype -modify ...

Adding, Deleting, and Modifying Resource and Static Resource Attributes
_ To add a resource attribute
# haattr -add resource_type attribute [value] [dimension] [default ...]
The variable value is a -string (default) or -integer.
The variable dimension is -scalar (default), -keylist, -association, or -vector.
The variable default is the default value of the attribute and must be compatible
with the value and dimension. Note that this may include more than one item, as
indicated by the ellipsis (...).
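A few hypothetical examples of the forms above, for an invented resource type MyType:

```shell
# Scalar string attribute with no default (both are the defaults, so the
# flags could be omitted):
haattr -add MyType BackupServer -string -scalar

# Scalar integer attribute with a default value of 3:
haattr -add MyType RetryLimit -integer -scalar 3

# Association (name/value pairs) of integers, no default:
haattr -add MyType NodeWeights -integer -association
```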

_ To delete a resource attribute
# haattr -delete resource_type attribute

_ To add a static resource attribute
# haattr -add -static resource_type attribute [value] [dimension] [default ...]
_ To delete a static resource attribute
# haattr -delete -static resource_type attribute

_ To modify the default value of a resource attribute
# haattr -default resource_type attribute new_value ...
The variable new_value refers to the attribute's new default value.

Starting and Stopping VCS Agents Manually
_ To start and stop agents manually
# haagent -start agent -sys system
# haagent -stop agent -sys system
Note Under normal conditions, VCS agents are started and stopped
automatically. After issuing the commands above, a message is displayed
instructing the user to look for messages in the log file.
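A sketch of restarting an agent and checking the log the note mentions. The agent name is hypothetical; the log path shown is the default VCS engine log location on this release, but verify it on your installation:

```shell
# Restart the (hypothetical) Oracle agent on SystemA:
haagent -stop Oracle -sys SystemA
haagent -start Oracle -sys SystemA

# Follow the engine log for the confirmation messages:
tail /var/VRTSvcs/log/engine_A.log
```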
