
Solaris Cluster: How to Change Cluster Interconnect IP Addresses (Doc ID 1012078.1)

APPLIES TO:

Solaris Cluster - Version 3.0 and later


Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)

GOAL

This document describes how to change the cluster interconnect IP addresses in Solaris Cluster 3.x and 4.x.
Since Solaris Cluster 3.2 there has been an official procedure for making this change via clsetup.
For Sun Cluster 3.0 and 3.1, an unsupported procedure is available.

SOLUTION

Changing Solaris Cluster 4.x Interconnect IP addresses

The easiest way is to use clsetup, which lets you specify the maximum number of nodes, private networks, and zone clusters
you want the netmask to permit. The clsetup command then calculates and displays two possible netmasks to choose
from: one for exactly the numbers you specify and a second for double those numbers (to accommodate possible future
growth).
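
Before running clsetup, it can help to record the current settings so you can compare them after the change. A minimal check (a sketch assuming the default install path; private_netaddr, private_netmask, max_nodes, and max_privatenets are the private-network properties the framework manages):

# /usr/cluster/bin/cluster show-netprops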

Follow the procedure "How to Change the Private Network Address or Address Range of an Existing Cluster" in Chapter 7,
"Administering Cluster Interconnects and Public Networks", of the Oracle Solaris Cluster System Administration Guide 4.2.

"Ensure that remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser is enabled to all cluste

If this is not available you will hit Bug 15382211 (SUNBT6531268): "cluster set-netprops doesn't work due to remote access".
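
If root ssh access is disabled, one possible way to enable it temporarily on each node is sketched below (assuming the stock Solaris ssh service; revert the setting once the change is complete):

# grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin no
(edit the file and set PermitRootLogin yes, then restart the service)
# svcadm restart svc:/network/ssh:default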

Changing Solaris Cluster 3.2 and 3.3 Interconnect IP addresses

The procedure is nearly the same for SC 3.2 and 3.3:

Follow the procedure "How to Change the Private Network Address or Address Range of an Existing Cluster" in Chapter 7,
"Administering Cluster Interconnects and Public Networks", of the Oracle Solaris Cluster System Administration Guide 3.3 3/13 (Update 2).
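
The clsetup dialogue drives the cluster set-netprops command mentioned above. A sketch of the equivalent non-interactive change, run from one node while all nodes are booted in noncluster mode (the address values here are placeholders only):

# /usr/cluster/bin/cluster set-netprops \
      -p private_netaddr=192.168.16.0 -p private_netmask=255.255.248.0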

From discussions on a technical alias:


In general, the cluster requires an entire range of bits to be reserved for its own allocation, and the minimum is 5 bits: 2 for
the host IDs (binary 01 and 10) and 3 for the private subnets, giving 6 usable subnets. If we had allowed only 2 bits for the
subnets, it would not be enough: we need one subnet for e1000g3, one for e1000g7, and one for clprivnet, but 2 bits leave
only 2 usable patterns once the unusable all-0's (00) and all-1's (11) patterns are excluded. So at least 3 bits are required
for the subnets.

For a discussion of the addressing requirements, in the context of what is minimally possible, we do not consider
zone clusters. The following factors are in play.

1) Number of nodes (N). This needs to account for possible future expansion if desired. Each node's ID is used as the
least significant bits of its IP addresses. So for instance, N == 64 would require 64 slots, plus 2, since the all-0's and
all-1's patterns cannot be used. Hence, at least 7 bits for N == 64.


2) Number of private networks (P). One subnet is allocated per private network so that packets can be segregated
appropriately. Hence, if two NICs per node are used for the interconnect, P == 2. Note also that P should account for
future expansion if desired.

So some number of bits are for the node ID, and some number of bits are prepended for the subnets. How many bits
must we prepend, based on P?

The cluster requires that 3/4 of all available subnets be reserved for P. The default is P == 10, so we need 16 subnets, since
(3/4 * 16) - 2 = 10 (the "- 2" is because the all-0's and all-1's patterns cannot be used). To get 16 subnets, we need 4 bits.
(The remaining 1/4 is used for clprivnet addresses and local zones.)

Hence considering both N and P, we have a default of (4 + 7) = 11 bits required for the cluster interconnect.
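
To make the arithmetic concrete, the following small shell snippet (illustrative only; subnet_bits is our own helper, not a cluster utility) finds the smallest number of subnet bits s such that (3/4 * 2^s) - 2 >= P:

subnet_bits() {
    P=$1
    s=1
    # grow s until the usable subnet count (3/4 of 2^s, minus the
    # all-0's and all-1's patterns) covers P private networks
    while [ $(( (3 * (1 << s) / 4) - 2 )) -lt "$P" ]; do
        s=$((s + 1))
    done
    echo "$s"
}
subnet_bits 10   # default P == 10 -> prints 4
subnet_bits 2    # P == 2          -> prints 3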

An example:

nxge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255

Focusing only on the least significant 11 bits:

             subnet ID   node ID
             ---------   -------
e1000g1:     0001        0000001
nxge1:       0010        0000001
clprivnet0:  1000        0000001

Minimum setup:
For a two-node cluster with two private networks and no chance of zone clusters:
N = 2, requiring 2 bits (4 slots minus the all-0's and all-1's patterns)
P = 2, requiring 3 bits, so that (3/4 * 8) - 2 = 4 (which is > P)
So a minimum of 5 bits is required. Given these 5 bits, the real maximum node count supported is 2, but up to 4
private networks are still possible.
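
Purely as an illustration (the values are placeholders, and the set-netprops property names should be verified against your release), those 5 bits correspond to a /27 netmask, which could be applied in noncluster mode with:

# /usr/cluster/bin/cluster set-netprops -p private_netaddr=172.16.0.0 \
      -p private_netmask=255.255.255.224 -p max_nodes=2 -p max_privatenets=2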

Changing Sun Cluster 3.0 and 3.1 Interconnect IP addresses

NOTE: This is not a supported procedure. The supported way to make this change is to reinstall and change these IPs at
install time.

Background: It may sometimes be necessary for a customer to change the private interconnect addresses on Sun Cluster 3.0
or 3.1. The IP address range used for the interconnects is set aside in RFC 1918 as address space reserved for private
internets. The reserved range is 172.16/12, which encompasses 172.16.0.0 through 172.31.255.255 and may be used on
internal networks, particularly those that employ NAT for their public network access. In most cases this change should
be made at install time through scinstall.

However, when systems are moved to new networks, an address conflict can sometimes develop. It is for this purpose that
this document has been written. It is intended that this change only be made by trained Oracle personnel, and a backup is
strongly recommended in case there is a problem.

1) Back up /etc/cluster/ccr via tar, cpio, or the system's backup software.
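
For example, a simple tar backup (the archive location is arbitrary):

# cd /etc/cluster
# tar cvf /var/tmp/ccr.backup.tar ccr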

2) Shut down both nodes of the cluster:

# /usr/cluster/bin/scshutdown -y

3) Boot both nodes outside of cluster:

obp> boot -x

4) On both nodes edit the /etc/cluster/ccr/infrastructure file and change the IP addresses assigned to the interconnects as
well as the network address and netmask if necessary. It is also possible to edit this file on one node and copy it to the
other as it should be the same on both nodes. The following are the lines that may need to be changed:

cluster.properties.private_net_number 172.16.0.0
cluster.properties.private_netmask 255.255.0.0
cluster.nodes.1.adapters.1.properties.netmask 255.255.255.128
cluster.nodes.1.adapters.1.properties.ip_address 172.16.0.129
cluster.nodes.1.adapters.2.properties.netmask 255.255.255.128
cluster.nodes.1.adapters.2.properties.ip_address 172.16.1.1
cluster.nodes.2.adapters.1.properties.netmask 255.255.255.128
cluster.nodes.2.adapters.1.properties.ip_address 172.16.0.130
cluster.nodes.2.adapters.2.properties.netmask 255.255.255.128
cluster.nodes.2.adapters.2.properties.ip_address 172.16.1.2
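
If every 172.16.x address is to move to a new range, a sketch of a bulk edit follows (the 192.168 target is only an example; review the result by hand before proceeding):

# cd /etc/cluster/ccr
# cp infrastructure infrastructure.orig
# sed 's/172\.16\./192.168./g' infrastructure.orig > infrastructure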

5) On both nodes, re-compute the checksum and update the new infrastructure file with it:

node1# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure -o
node2# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure

The -o option is used on the master node, which needs to be booted into the cluster first.
Note: There is a new syntax for ccradm in SC 3.2u2 and higher, which is not covered here
because this procedure is for 3.0 and 3.1.

6) Shut down both nodes and boot them back into the cluster, booting the master (node1, where the -o option was used) first:

node1# shutdown -g0 -y -i0
obp> boot
node2# shutdown -g0 -y -i0
obp> boot

7) At this point the IPs should be changed. Verify the change:

# ifconfig -a
qfe2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 192.168.1.1 netmask ffffff80 broadcast 192.168.1.127
        ether 8:0:20:88:c1:81
qfe0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 192.168.0.129 netmask ffffff80 broadcast 192.168.0.255
        ether 8:0:20:88:c1:81
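
The interconnect health can also be confirmed via the transport-path status (scstat is the status command of the SC 3.0/3.1 era):

# /usr/cluster/bin/scstat -W

All configured transport paths should be reported as online.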
