
SERVER CONFIGURATION REQUIRED AFTER THE FIRST BOOT OF THE INSTALLED SYSTEM.

1. Convert the 32-bit kernel to the 64-bit kernel with JFS as the default file system

Steps: -
Create the link in the root directory:
#ln -fs /usr/lib/boot/unix_64 /unix

Create the link at the unix kernel location:

#ln -fs /usr/lib/boot/unix_64 /usr/lib/boot/unix

Write the boot image to the device from which the system boots, i.e. at CRIS this is hdisk3.
#bosboot -ad /dev/ipldevice
bosboot creates the boot image.
-d device Specifies the boot device.
-a Creates a complete boot image and device.
#shutdown -Fr (or reboot)
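To confirm the switch after the reboot, a quick check (bootinfo -K reports the kernel width, and /unix should point at unix_64):

#bootinfo -K
64
#ls -l /unix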

2. Assign an IP address and hostname to the local machine
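This can be done through smit mktcpip or directly from the command line. A minimal sketch, reusing the first server's values from this document and assuming the interface is en0:

#mktcpip -h utscstm1 -a 192.168.100.11 -m 255.255.255.0 -i en0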

3. Increasing Paging space

Steps: -

#smit lvm -> Paging Space -> Change / Show Characteristics of a Paging Space -> hd6. Just enter the
number of additional LPs.
EX: if the current paging size is 512 MB and you want to increase it to 32 GB, then you have to calculate
how many additional PPs are required to reach 32 GB.
The PP size for rootvg is 128 MB; since 1 LP equals 1 PP, each LP is also 128 MB.

LPs required = desired paging size / PP size

32000 / 128 = 250 LPs

Since the current paging space is 512 MB, it already uses 4 PPs of 128 MB each.

Therefore the additional LPs required to create 32 GB of paging space:

250 - 4 = 246 (the 4 PPs correspond to the current 512 MB of paging)
246 additional LPs are required to reach 32 GB of paging space.
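Instead of smit, the same increase can be done from the command line; a minimal sketch using the figure calculated above (chps -s adds LPs to an existing paging space, lsps -a verifies the result):

#chps -s 246 hd6
#lsps -a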
4. Teaming the network interfaces for redundancy OR Creating an EtherChannel

For this we create an EtherChannel and add two network interfaces to it; an EtherChannel groups
several network adapters into a single logical interface with a single IP address.

Step a: -

Change the network adapter setting to 10/100/1000 Mbps Full/Half duplex. Verify with the
network administrator which setting should be kept.

The steps below convert the adapter from Auto-Negotiation mode to 100 Mbps Full Duplex; you
can use your own setting as well.

If the network adapter you have chosen for the EtherChannel is up and running, bring it down and
remove it from the system, or else use adapters that are not currently in use.
#ifconfig -a
en4:
flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,PSEG,CHAIN>
inet 192.168.100.12 netmask 0xffffff00 broadcast 192.168.100.255
inet 10.129.1.12 netmask 0xffffff00 broadcast 10.129.1.255

UP means network adapter en4 is up and running.

#ifconfig <network adapter name, i.e. en0, en2> down

or
#ifconfig en2 down

#ifconfig <network adapter name, i.e. en0, en2> detach

Smitty Mode of operation: -


smit ethernet -> Adapters -> Change / Show Characteristics of an Ethernet Adapter ->
select ent0, ent1, or ent3, whichever you want to change.
Select media_speed=100_Full_Duplex
and press Enter.

Command Mode of operation: -


The available network media speed options can be checked using the command below.
bash-3.00$ lsattr -E -l ent2 -a media_speed -R
10_Half_Duplex
10_Full_Duplex
100_Half_Duplex
100_Full_Duplex
Auto_Negotiation

To change to 100 Mbps Full Duplex, use the command below.


#chdev -l ent2 -a media_speed=100_Full_Duplex
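To confirm that the change took effect, lsattr shows the current attribute value; output should be along these lines:

#lsattr -El ent2 -a media_speed
media_speed 100_Full_Duplex Media Speed True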

Step b: -

smit devices -> Communication -> EtherChannel / IEEE 802.3ad Link Aggregation ->
Add An EtherChannel / Link Aggregation

Or use the smitty fast path:
#smit etherchannel

Select ent0 and ent2, i.e. one port from the network PCI card and one port from the onboard network
interface. In case of an error, i.e. if you are currently working over ent0, you can't use it for the
EtherChannel since it is in use.
In that case choose the second port from the PCI card and any port from the onboard interface.

Select ent1 and ent2


> ent1
> ent2

You will get this screen


EtherChannel / Link Aggregation Adapters ent1,ent2

Just press Enter; it will report that ent4 has been created.

Step c: -

Now start the EtherChannel. It will be named ent4 if available, otherwise ent5 or another free number.
Our EtherChannel logical name is ent4.

First check the name of your etherchannel.

#smit etherchannel -> List All EtherChannels / Link Aggregations


EtherChannel / Link Aggregation: ent4
Status: Available
Attributes:
adapter_names ent0,ent2 EtherChannel Adapters

From the above output you know that your EtherChannel is ent4, consisting of the two
adapters ent0 and ent2.
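You can also confirm the member adapters from the command line; a minimal check, assuming the EtherChannel device is ent4:

#lsattr -El ent4 -a adapter_names
adapter_names ent0,ent2 EtherChannel Adapters True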
Now configure the newly created EtherChannel adapter ent4.

smit mktcpip -> select en4 -> Enter the information below, as given by your network
administrator.

* HOSTNAME [utscstm2]
* Internet ADDRESS (dotted decimal) [192.168.100.12]
Network MASK (dotted decimal) [255.255.255.0]
* Network INTERFACE en4
NAMESERVER
Internet ADDRESS (dotted decimal) []
DOMAIN Name []
Default Gateway
Address (dotted decimal or symbolic name) []
Cost [0]
Do Active Dead Gateway Detection? no
Your CABLE Type N/A
START Now Yes

Press Enter and apply the same settings on the primary/secondary server.

Step d: -

Go to /etc/hosts.
The entry 192.168.100.11 utscstm1 will be there; change utscstm1 to utscstm1_boot.
Internet Address Hostname # Comments
192.168.100.11 utscstm1_boot
192.168.100.12 utscstm2_boot
10.129.1.11 utscstm1_svc utscstm1
10.129.1.12 utscstm2_svc utscstm2

Put these entries in .rhosts, which is in /:


utscstm1
utscstm2
utscstm1_boot
utscstm2_boot
utscstm1_svc
utscstm2_svc

Do the same as above on the secondary (other) server.

Test both servers by removing the cables one by one to verify failover.
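While pulling the cables, one way to watch which adapter is carrying the traffic is entstat; with -d on the EtherChannel device it prints per-adapter statistics and the channel state (ent4 assumed):

#entstat -d ent4 | more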

The EtherChannel is created; now move on to the installation and configuration of the cluster software.
CLUSTER PRE-REQUISITES

Before installing the cluster software, use installp to preview the cluster CD content; if any prerequisite
filesets are required, install them first from the IBM AIX O.S. CD.
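A minimal preview sketch, assuming the cluster CD is mounted as /dev/cd0 (-p previews without installing, -g includes requisites, -X expands filesystems if needed):

#installp -apgX -d /dev/cd0 all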

Install the HACMP software on both servers. This cluster requires the following filesets:
*.adt.libm 5.1.0.0, *.adt.syscalls 5.1.0.0, *.adt.data 5.1.0.0, *.rsct.compat.client.hacmp 2.3.1.0,
*.rsct.compat.basic.hacmp 2.3.1.0, *.rsct.compat.client.hacmp 2.2.1.30

1. First create the disk-based / non-IP-based heartbeat.

STEP I

Ensure that the disks to be used for disk heartbeating are assigned and configured on
each cluster node.

Enter: -

lspv -> ensure that a PVID is assigned to the disk on each cluster node.
If a PVID is not assigned, run the following command:
chdev -l hdisk8 -a pv=yes
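lspv shows whether a PVID is present; a disk without one shows none in the PVID column (the PVID value below is illustrative):

#lspv
hdisk8          00c1234567890abc                    None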

STEP II

Create the heartbeat VG: smit vg -> Add an original VG -> VG Name = heartbeatvg,

PP size = 8 MB, PV name = hdisk8, create VG concurrent-capable = yes

First verify the major number of heartbeatvg:

#ls -l /dev/heartbeatvg
You will get the major number (e.g. 45, 46, ...).
If it is 46, execute this command on the secondary server:
# importvg -V 46 -y heartbeatvg hdisk8
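If that major number is already taken on the secondary server, lvlstmajor lists the major numbers still free there; a quick check before the import (output is illustrative):

#lvlstmajor
47...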

OR

Create an enhanced concurrent mode volume group on the disk or disks in question using
SMIT. Enter: smitty hacmp -> System Management (C-SPOC) -> HACMP Concurrent
Logical Volume Management -> Concurrent Volume Groups -> Create a Concurrent Volume
Group (with Datapath Devices, if applicable) ->
Press F7 to select each cluster node. Select the PVID of the disk to be added to the Volume
Group. Enter the Volume Group Name, Desired Physical Partition Size, and major number.
Enhanced Concurrent Mode should be set to True.

STEP III (Skip this step if already done).

Put the entries for the boot IPs and the service IPs and their labels in /etc/hosts:
192.168.100.11 utscstm1_boot
192.168.100.12 utscstm2_boot
10.129.1.11 utscstm1_svc utscstm1
10.129.1.12 utscstm2_svc utscstm2

Put these entries in .rhosts:


utscstm1
utscstm2
utscstm1_boot
utscstm2_boot
utscstm1_svc
utscstm2_svc

STEP V

Copy the .rhosts file from / to the location below:


#cp /.rhosts /usr/es/sbin/cluster/etc/rhosts
Do the same on the secondary server; alternatively you can simply import the VG that was created
on the primary server.

Create a diskhb network. Enter: -

smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP Networks -> Add a Network to the HACMP cluster ->
when prompted to select, choose diskhb.
Enter the network name or accept the default.
Network Name net_diskhb_01
* Network Type [diskhb]

STEP VI

Add each disk-node pair to the diskhb network. Enter:

smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP Communication Interfaces/Devices -> Add Communication
Interfaces/Devices -> Add Pre-Defined Communication Interfaces and Devices ->
Communication Devices -> Choose your diskhb Network Name.

For Device Name, enter a unique name; for Device Path, enter /dev/vpath# or /dev/hdisk#;
for Node Name, enter the node on which this device resides.

Repeat step VI for the second node in the cluster.

Device name :- utscstm1_hb
Network type :- diskhb
Network name :- net_diskhb_01
Device path :- /dev/hdisk8
Node Name :- utscstm1

STEP VII

Verify communication of heartbeat from both the servers.


Run the following command on the first node to put it in Receive Mode:
#/usr/sbin/rsct/bin/dhb_read -p hdisk8 -r (replace hdisk# with rvpath# if using SDD)
The following should be displayed:
Receive Mode:
Waiting for Response . . .
Run the following command on a different node to put it in Transmit Mode:
#/usr/sbin/rsct/bin/dhb_read -p hdisk8 -t (replace hdisk# with rvpath if using SDD)
If communication is successful, the following should be displayed:
Link operating normally.

Or

#/usr/sbin/rsct/bin/dhb_read -p hdisk8 -r (on the main server)

#/usr/sbin/rsct/bin/dhb_read -p hdisk8 -t (on the secondary server)

========================= DISK BASED HEART BEAT CREATION DONE =================


GENERAL CONFIGURATION OF CLUSTER/HACMP

STEP I

- Also check that the following daemon is running:


# lssrc -g clcomdES
clcomdES clcomdES 16310 active
Otherwise start it with startsrc -s clcomdES

- The godm TCP/IP service must be active (refresh -s inetd on all nodes)


- Volume Groups must be imported on all nodes and must not be automatically activated at boot time.
On AIX 5.2 they must be concurrent-capable (bos.clvm.enh fileset installed).
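A minimal sketch for switching off automatic activation on an already imported VG (chvg -an disables auto-varyon at boot; the VG name is illustrative):

#chvg -an utscstm1_vg1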

STEP II

1) Configure Cluster
# smit hacmp
-> Extended Configuration
-> Extended Topology Configuration
-> Configure an HACMP Cluster
-> Add/Change/Show an HACMP Cluster
Assign a unique Cluster Name (< 32 characters)

* Cluster Name [utscstm]

NOTE: HACMP must be RESTARTED on all nodes in order for change to take effect

2) Configure Nodes
# smit hacmp
-> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Nodes
-> Add a Node to the HACMP Cluster

Give the Node Name; each node has a unique name (< 32 characters).
Communication Path to Node: press F4 and select an IP label (e.g. the boot address)
Node Name utscstm1
Communication Path to Node [192.168.100.11]

After this, create the other node as well from the same primary server; we have utscstm2 as the
secondary cluster node.

Node Name utscstm2


Communication Path to Node [192.168.100.12]
Repeat this step for all nodes.
3) Configure IP Networks
# smit hacmp
-> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Networks
-> Add a Network to the HACMP Cluster
Go to # Pre-defined IP-based Network Types

select the network type (e.g. ether) and check that the Netmask is correct

* Network Name [net_ether_01]


* Network Type ether
* Netmask [255.255.255.0]
* Enable IP Address Takeover via IP Aliases [Yes]
IP Address Offset for Heartbeating over IP Aliases []

It is mandatory to modify the following field:


* Enable IP Address Takeover via IP Aliases [No]
Repeat this step for all IP networks (interconnect, administration).

4) Then Configure Communication Interfaces on these IP networks


# smit hacmp
-> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Communication Interfaces/Devices
-> Add Communication Interfaces/Devices
-> Add Pre-defined Communication Interfaces and Devices
-> Communication Interfaces

Select a previously defined network, then give the IP Label and Node Name

* IP Label/Address [utscstm1_boot]
* Network Type ether
* Network Name net_ether_01
* Node Name [utscstm1]
Network Interface []

Do for all other boot IP available: -

* IP Label/Address [utscstm2_boot]
* Network Type ether
* Network Name net_ether_01
* Node Name [utscstm2]
Network Interface []
Repeat the above for the secondary server as well.
Repeat this step for all "boot" and "standby" addresses of all IP networks and of all nodes. E.g.:
node1_boot, node1_stby, node2_boot, node2_stby

5) Configure persistent IP addresses for network(s) where an initial "boot" address will be
replaced by a "service" address

# smit hacmp
-> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Persistent Node IP Label / Addresses
-> Add a Persistent Node IP Label / Address
-> Select a Node and give:

* Network Name [] Press F4


* Node IP Label / Address [ ] Should usually correspond to node hostname
or Press F4 to select from /etc/hosts

OR

* Node Name utscstm1


* Network Name [net_ether_01]
* Node IP Label/Address [utscstm1_svc]

Repeat this step for the other interface(s) on other node(s) …

6) Synchronize the configuration. This will also set up the persistent addresses

# smit hacmp
-> Extended Configuration
-> Extended Verification and Synchronization

# netstat -i shows the persistent IP addresses on the same interfaces as the boot addresses.

RESOURCE GROUP CONFIGURATION


From one node, you will execute the following steps:
# smit hacmp
-> Extended Configuration
-> Extended Resource Configuration

Things required for configuring resource groups


1. Configure HACMP Service IP Labels/Addresses (srv address)
2. Define Application Servers
3. Define Resource Groups
4. Configure Resources for each Resource Group Customization
5. Synchronize Cluster Resources
6. Start HACMP on nodes

1) Configure HACMP Service IP Labels/Addresses


# smit hacmp
-> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resources Configuration
-> Configure HACMP Service IP Labels/Addresses
-> Add a Service IP Label/Address

Select: Configurable on Multiple Nodes


Select a network name, i.e. net_ether_01 (10.129.1.0/24, 192.168.100.0/24)

* IP Label/Address [utscstm1_svc]
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address []

Press enter

Do same for second service IP.


* IP Label/Address [utscstm2_svc]
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address []

Give the interface service IP label or address (on the same subnet as the "boot" address).
You shouldn't specify an Alternate HW Address to accompany the IP Label/Address,
because AIX 5L does a "gratuitous ARP" update.
Don't specify an Alternate HW Address for Gigabit Ethernet adapters.
Repeat this step for ALL service IP labels.


Do the above on the primary server and the same on the secondary server.
2) Define Application Servers
# smit hacmp
-> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resources Configuration
-> Configure HACMP Application Servers
-> Add an Application Server

Create an application server named app_server1, which will run the start script route_add_def.sh.
This script contains:

bash-3.00# vi /usr/es/sbin/cluster/route_add_def.sh
route delete default
route add default 10.129.1.5
Save it.

Give execute permission to the file above (see the sketch after this step).

route_del_def.sh doesn't contain anything; it is blank.
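A minimal sketch for creating the blank stop script and making both scripts executable (paths as used above):

#touch /usr/es/sbin/cluster/route_del_def.sh
#chmod +x /usr/es/sbin/cluster/route_add_def.sh /usr/es/sbin/cluster/route_del_def.sh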

Server Name app_server1


New Server Name [app_server1]
Start Script [/usr/es/sbin/cluster/route_add_def.sh]
Stop Script [/usr/es/sbin/cluster/route_del_def.sh]
Press enter

Server Name utscstm1


New Server Name [utscstm1]
Start Script [/usr/sbin/cluster/events/RUNHA_utscstm1.sh monitor]
Stop Script [/usr/sbin/cluster/events/RUNHA_utscstm1.sh failover]
Press Enter

Server Name utscstm2


New Server Name [utscstm2]
Start Script [/usr/sbin/cluster/events/RUNHA_utscstm2.sh monitor]
Stop Script [/usr/sbin/cluster/events/RUNHA_utscstm2.sh failover]
Press Enter

Give:
- Server name (symbolic name for the resource)
- Start and Stop scripts full pathnames (must exist on ALL NODES, in non-shared filesystems).

Repeat this step for ALL Application Servers.

3) Define Resource Groups


# smit hacmp
-> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resource Group Configuration
-> Add a Resource Group

Select the Resource Group Management Policy: cascading, rotating, concurrent, or custom.
Give the Resource Group Name and the
Inter-Site Management Policy (leave the default, ignore).
Give the list of Participating Node Names: for cascading, the order defines the priority.

Resource Group for the primary server resources


* Resource Group Name [rsg1]
* Inter-Site Management Policy [ignore]
* Participating Node Names / Default Node Priority [utscstm1 utscstm2]

Resource Group for the secondary server resources


Resource Group Name rsg2
Resource Group Management Policy cascading
Inter-Site Management Policy ignore
Participating Node Names / Default Node Priority [utscstm2 utscstm1 ]

Resource Group for the concurrent heartbeat.

When prompted to select between cascading, rotating, and concurrent,
select concurrent.

Resource Group Name rsg3


Resource Group Management Policy concurrent
Inter-Site Management Policy ignore
Participating Node Names / Default Node Priority [utscstm1 utscstm2]

Note: -
Toggle between Ignore/Cascading/Concurrent/Rotating.
CASCADING resources are resources which may be assigned to be taken over by multiple sites in a
prioritized manner. When a site fails, the active site with the highest priority acquires the resource.
When the failed site rejoins, the site with the highest priority acquires the resource.

ROTATING resources are resources which may be acquired by any site in its resource chain. When a
site fails, the resource will be acquired by the highest priority standby site. When the failed node
rejoins, the resource remains with its new owner. Ignore should be used if sites and replicated
resources are not defined or being used.

Repeat this step for all Resource Groups

4) Configure Resources for each Resource Group


# smit hacmp
-> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resource Group Configuration
-> Change/Show Resources and Attributes for a Resource Group

Select a Resource Group name, i.e. rsg1, rsg2, or rsg3 as created above.

Define the resources belonging to the resource group (separated with spaces):
Inactive Takeover Applied (true if you want the first starting node to take all resources)
Cascading Without Fallback Enabled (true to decide when fallback will occur; recommended if
HACMP Cluster Services is started from /etc/inittab)

Application Servers
Service IP Labels / Addresses (give ALL Service IP labels separated by spaces, in case of several networks)
Volume Groups (give the name(s) separated by spaces)
Use forced varyon of volume groups, if necessary (true for AIX mirrored VGs)

Filesystems (empty is All): leave empty

Filesystems Consistency Check: fsck
Filesystems Recovery Method: leave sequential if there are nested/recursive mount
points; otherwise parallel when there are many.

Filesystems mounted before IP configured (TRUE for NFS (cross)-mount)


Filesystems/Directories to Export (must belong to Volume Groups listed before)
Filesystems/Directories to NFS mount (must belong to previous Export list)
Network for NFS mount

Repeat this step for all Resource Groups

Example of rsg1: -

Resource Group Name rsg1


Resource Group Management Policy cascading
Inter-site Management Policy ignore
Participating Node Names / Default Node Priority utscstm1 utscstm2
Dynamic Node Priority (Overrides default) []
Inactive Takeover Applied false
Cascading Without Fallback Enabled false

Application Servers [app_server1 utscstm1]


Service IP Labels/Addresses [utscstm1_svc]
Volume Groups [utscstm1_vg1 utscstm1_vg2]
Use forced varyon of volume groups, if necessary false
Automatically Import Volume Groups false

Filesystems (empty is ALL for VGs specified) []


Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems mounted before IP configured false
Filesystems/Directories to Export []
Filesystems/Directories to NFS Mount []
Network For NFS Mount []

Tape Resources []
Raw Disk PVIDs []

Fast Connect Services []


Communication Links []

Primary Workload Manager Class []


Secondary Workload Manager Class []

Miscellaneous Data []

Example of rsg2 : -

Resource Group Name rsg2


Resource Group Management Policy cascading
Inter-site Management Policy ignore
Participating Node Names / Default Node Priority utscstm2 utscstm1
Dynamic Node Priority (Overrides default) []
Inactive Takeover Applied false
Cascading Without Fallback Enabled false

Application Servers [app_server1 utscstm2]


Service IP Labels/Addresses [utscstm2_svc]

Volume Groups [utscstm2_vg1 utscstm2_vg2]


Use forced varyon of volume groups, if necessary false
Automatically Import Volume Groups false

Filesystems (empty is ALL for VGs specified) []


Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems mounted before IP configured false
Filesystems/Directories to Export []
Filesystems/Directories to NFS Mount []
Network For NFS Mount []

Tape Resources []
Raw Disk PVIDs []

Fast Connect Services []


Communication Links []

Primary Workload Manager Class []


Secondary Workload Manager Class []

Miscellaneous Data []

Example of rsg3 i.e heartbeatvg

Resource Group Name rsg3


Resource Group Management Policy concurrent
Inter-site Management Policy ignore
Participating Node Names / Default Node Priority utscstm1 utscstm2

Concurrent Volume Groups [heartbeatvg]


Use forced varyon of volume groups, if necessary false
Automatically Import Volume Groups false

Application Servers []

Tape Resources []
Raw Disk PVIDs []
Disk Fencing Activated false

Fast Connect Services []


Communication Links []
Workload Manager Class []

Miscellaneous Data []
5) Synchronize Cluster Resources (every time you change the configuration)

# smit hacmp
-> Extended Configuration
-> Extended Verification and Synchronization
which allows you to:
* Verify, Synchronize or Both [Both]
Force synchronization if verification fails? [No]
* Verify changes only? [No]
* Logging [Standard / Verbose]
In case of problem, select Verbose Logging and look to log files:
/var/hacmp/clverify/clverify.log or /var/hacmp/clverify/…

6) Now you can start HACMP on all nodes (several nodes at the same time)
# smit clstart
* Start now, on system restart or both [now]
Start Cluster Services on these nodes [node1] (you can specify several nodes)
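Once started, one way to confirm the cluster is up on each node (lssrc shows the HACMP subsystems; clstat is the cluster status utility shipped with HACMP):

#lssrc -g cluster
#/usr/es/sbin/cluster/clstat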

================== END OF HACMP CONFIGURATION ============

Creation of file systems and volume groups.


Create internalvg, utscstm1_vg1 and utscstm1_vg2 on one server and internalvg,
utscstm2_vg1 and utscstm2_vg2 on the other server.

Use the exportvg and importvg commands to recreate the VG and file systems on the other
server. First unmount all the file systems currently created.

#smit fs -> Unmount a File System


Unmount ALL mounted file systems? Yes (except /, /tmp, /usr)

Exit.

Now export the VG and import it on the other server.

#umount
#varyoffvg internalvg
#exportvg internalvg
# ls -l /dev/inter*
crw-rw---- 1 root system 49, 0 Aug 06 12:28 /dev/internalvg

Now this 49 is your major number; you can create the same VG with the same
characteristics on the other server by using the command below with the major number of this
disk.

On the other server:
#importvg -y internalvg -V 49 hdisk0
This means you are creating internalvg on hdisk0 of the m2 server.

#chvg -an -Qn internalvg (-an: do not activate automatically at boot; -Qn: turn quorum off)
