
Adding a 3rd node to a RAC 11g


by Marcio Ribeiro

Ok fellows, now that we have learned how to create a RAC 11g, I will show you how to add a
3rd node to the cluster.

The prerequisite for this article is a working two-node RAC 11g, prepared as described in the article below:

Two-Nodes RAC 11g without graphical interface

The main steps to add a node to a RAC are:

1. Install and configure OS and hardware for new node


2. Add Oracle Clusterware to the new node

3. Configure ONS for the new node

4. Add Database software to the new node

5. Add ASM home to new node

6. Add RAC home to new node

7. Add listener to new node

8. Add database instance to new node

Chapter I – Creating the new VM


Create a new VM with these parameters:

Definition

Memory: 512MB
Processors: 1
Hard Disk: 8GB
Ethernet
Ethernet 2

Linux Software

Version: Oracle Enterprise Linux 4
Kernel: 2.6.9-42.0.0.0.1.EL

Partitioning

swap: 1024MB
/ (ext3): 7161MB

Network configuration

Servername: mrn3

Interface eth0
IP Address: 192.168.0.103
Netmask: 255.255.255.0

Interface eth1
IP Address: 10.0.0.3
Netmask: 255.255.255.0

Using the article as a reference, configure the kernel parameters, install the additional
packages, and create the oracle user and its groups.

Chapter II – Configuring Hosts and OCFS2

Network configuration

On all three nodes, edit the /etc/hosts file and add the interfaces for the new node. The file
should have this configuration:

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

# Public interface

192.168.0.101 mrn1
192.168.0.102 mrn2

192.168.0.103 mrn3

# Private interface

10.0.0.1 mrn1-priv

10.0.0.2 mrn2-priv

10.0.0.3 mrn3-priv

# Oracle Vips

192.168.0.111 mrn1-vip

192.168.0.112 mrn2-vip

192.168.0.113 mrn3-vip
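
Before moving on, it is worth checking that every entry resolves on all three nodes; a quick sketch:

for h in mrn1 mrn2 mrn3 mrn1-priv mrn2-priv mrn3-priv mrn1-vip mrn2-vip mrn3-vip
do
    getent hosts $h
done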

Now edit the /etc/hosts.equiv file and add these lines on all nodes:

+mrn1 oracle

+mrn2 oracle

+mrn3 oracle

+mrn1-priv oracle

+mrn2-priv oracle

+mrn3-priv oracle

SSH Configuration

On node mrn3, log in as the oracle user and execute the following commands:
ssh-keygen -t rsa

ssh-keygen -t rsa1

ssh-keygen -t dsa

cd /home/oracle/.ssh

cat *.pub >> authorized_keys

Export the authorized_keys to all nodes:

scp authorized_keys oracle@mrn1:/tmp

scp authorized_keys oracle@mrn2:/tmp

Import the authorized_keys from all nodes:

scp oracle@mrn1:/home/oracle/.ssh/authorized_keys /tmp/aut_mrn1

scp oracle@mrn2:/home/oracle/.ssh/authorized_keys /tmp/aut_mrn2

Append them to mrn3's authorized_keys file:


cd /home/oracle/.ssh

cat /tmp/aut_mrn1 >> authorized_keys

cat /tmp/aut_mrn2 >> authorized_keys

On node mrn1, log in as oracle and execute these commands:

cd /home/oracle/.ssh

cat /tmp/authorized_keys >> authorized_keys

Repeat the same procedure on mrn2.

Test all possible connections using ssh. No password should be required.
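
A small loop like this one, run from each of the three nodes, covers every combination:

for h in mrn1 mrn2 mrn3 mrn1-priv mrn2-priv mrn3-priv
do
    ssh $h date
done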

Directory Creation

In node mrn3, create the following directories:

mkdir -p /oracle

mkdir -p /oracrs

mkdir -p /oracrs/ocr

mkdir -p /oracrs/votedisk
mkdir -p /oracle/inventory

mkdir -p /oracle/crshome

mkdir -p /oracle/dbhome

chown oracle:oinstall /oracle -R

chown oracle:oinstall /oracrs -R

chmod 775 /oracrs/ocr

chmod 775 /oracrs/votedisk

OCFS2 Configuration

Execute this command on mrn1 and mrn2:

o2cb_ctl -C -i -n mrn3 -t node -a number=3 -a ip_address=10.0.0.3 -a ip_port=7777 -a cluster=mrrac

This command adds the new node's information to the OCFS2 configuration. Now copy
/etc/ocfs2/cluster.conf from mrn1 to mrn3, then configure the ocfs2 and o2cb services
on mrn3.
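
For example, from mrn1 as root (assuming the /etc/ocfs2 directory already exists on mrn3):

scp /etc/ocfs2/cluster.conf root@mrn3:/etc/ocfs2/cluster.conf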

On all three nodes, the /etc/ocfs2/cluster.conf file should contain the following
configuration:

cluster:
        node_count = 3
        name = mrrac

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 1
        name = mrn1
        cluster = mrrac

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 2
        name = mrn2
        cluster = mrrac

node:
        ip_port = 7777
        ip_address = 10.0.0.3
        number = 3
        name = mrn3
        cluster = mrrac

Restart all three servers.


Note: if you get an error message saying “no free slots”, check the “Max Node Slots”
specified for each partition using these commands:

debugfs.ocfs2 -R "stats" /dev/sda1 |grep Max

debugfs.ocfs2 -R "stats" /dev/sdb1 |grep Max

If the number is lower than 3, increase it by using:

tunefs.ocfs2 -N 3 /dev/sda1

tunefs.ocfs2 -N 3 /dev/sdb1
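
After the reboot, you can confirm that mrn3 joined the OCFS2 cluster and mounts the shared volumes, for example:

/etc/init.d/o2cb status

mounted.ocfs2 -f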

Chapter III – Configure Clusterware

Before we start the clusterware installation, let's use the Cluster Verification Utility (cluvfy) to
check that all prerequisites are met:

./runcluvfy.sh stage -pre crsinst -n mrn1,mrn2,mrn3

Now, using node mrn1, go to the crshome directory and execute this command to add the
new node:

cd /oracle/crshome/oui/bin

./addNode.sh -silent CLUSTER_NEW_NODES={mrn3} \

CLUSTER_NEW_PRIVATE_NODE_NAMES={mrn3-priv} \

CLUSTER_NEW_VIRTUAL_HOSTNAMES={mrn3-vip}

After the installation, you'll be prompted to execute the following commands as the root user:

In mrn1

/oracle/crshome/install/rootaddnode.sh

In mrn3

/oracle/crshome/root.sh
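
Once root.sh completes, you can confirm that the new node joined the cluster, for example (using the crshome path from this article):

/oracle/crshome/bin/olsnodes -n

/oracle/crshome/bin/crs_stat -t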

Chapter IV – Configure ONS

We need to configure the Oracle Notification Service for the new node.

Using mrn3, log in as oracle and execute the following commands:


export ORACLE_HOME=/oracle/crshome

cd $ORACLE_HOME/bin

./racgons add_config mrn3:6150
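
As a minimal sanity check, you can ping the local ONS daemon from the same directory (the exact output varies by version):

./onsctl ping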

Chapter V – Configure Database Software

Using the mrn1 node, connect as oracle and execute these commands:

export ORACLE_HOME=/oracle/dbhome/db1

cd $ORACLE_HOME/oui/bin

./addNode.sh -silent CLUSTER_NEW_NODES={mrn3}
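
A quick way to confirm the home was copied is to check for the oracle binary on the new node (using the dbhome path from this article):

ssh mrn3 ls -l /oracle/dbhome/db1/bin/oracle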

Chapter VI – Configure a listener for the new node

Using the mrn3 node, connect as oracle and create these symbolic links:

ln -s /oracrs/ocr/listener.ora $ORACLE_HOME/network/admin/listener.ora

ln -s /oracrs/ocr/sqlnet.ora $ORACLE_HOME/network/admin/sqlnet.ora

ln -s /oracrs/ocr/tnsnames.ora $ORACLE_HOME/network/admin/tnsnames.ora

Edit the listener.ora file and add this entry:

LSNR_MRN3 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = mrn3-vip)(PORT = 1521))
    )
  )

Register and start the listener in the cluster:

srvctl add listener -n mrn3 -o $ORACLE_HOME -l LSNR_MRN3

srvctl start listener -n mrn3 -l LSNR_MRN3
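
To confirm the listener is up, you can query it directly on mrn3:

lsnrctl status LSNR_MRN3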

Chapter VII – Configure ASM for the new node

First, let's create symbolic links for the raw devices used by ASM:

mkdir -p /oracrs/asm

chown oracle:oinstall /dev/raw/raw*

chmod 640 /dev/raw/*

ln -s /dev/raw/raw3 /oracrs/asm/asm1

ln -s /dev/raw/raw4 /oracrs/asm/asm2

ln -s /dev/raw/raw5 /oracrs/asm/asm3

Include these lines in the /etc/rc.local file:


chown oracle:oinstall /dev/raw/raw*

chmod 640 /dev/raw/raw*
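
To double-check the bindings and links, something like this works:

raw -qa

ls -l /oracrs/asm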

In nodes mrn1 and mrn2, create this symbolic link

ln -s /oracrs/ocr/spfile+ASM.ora $ORACLE_HOME/dbs/spfile+ASM3.ora

In node mrn3, create these symbolic links

ln -s /oracrs/ocr/spfile+ASM.ora $ORACLE_HOME/dbs/spfile+ASM1.ora

ln -s /oracrs/ocr/spfile+ASM.ora $ORACLE_HOME/dbs/spfile+ASM2.ora

ln -s /oracrs/ocr/spfile+ASM.ora $ORACLE_HOME/dbs/spfile+ASM3.ora

Using mrn1, we need to change a parameter shared by all ASM instances:

export ORACLE_SID=+ASM1

sqlplus / as sysdba

alter system set cluster_database_instances=3 scope=spfile;

Restart ASM on nodes mrn1 and mrn2.

Using mrn1, we need to change some parameters for the new ASM instance:

export ORACLE_SID=+ASM1

sqlplus / as sysdba

alter system set instance_name='+ASM3' scope=spfile sid='+ASM3';

alter system set instance_number=3 scope=spfile sid='+ASM3';

In node mrn3, start the new ASM instance:

export ORACLE_SID=+ASM3

sqlplus / as sysdba

startup

Now, let's register ASM in the cluster:

srvctl add asm -n mrn3 -i +ASM3 -o $ORACLE_HOME

srvctl start asm -n mrn3
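
On mrn3, a quick query confirms that the new instance sees the diskgroups (the diskgroup names depend on your setup):

export ORACLE_SID=+ASM3

sqlplus / as sysdba

select name, state from v$asm_diskgroup;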

Chapter VIII – Configure the database for the new node

First, let's create the symbolic links for the database spfile.

In mrn1 and mrn2, create this symbolic link


ln -s /oracrs/ocr/spfileMR11G.ora $ORACLE_HOME/dbs/spfileMR11G3.ora

In mrn3, create these symbolic links

ln -s /oracrs/ocr/spfileMR11G.ora $ORACLE_HOME/dbs/spfileMR11G1.ora

ln -s /oracrs/ocr/spfileMR11G.ora $ORACLE_HOME/dbs/spfileMR11G2.ora

ln -s /oracrs/ocr/spfileMR11G.ora $ORACLE_HOME/dbs/spfileMR11G3.ora

Using mrn1, we need to change a global parameter for all database instances:

export ORACLE_SID=MR11G1

sqlplus / as sysdba

alter system set cluster_database_instances=3 scope=spfile;

Restart the database instances on nodes mrn1 and mrn2.

Create the undo tablespace for node mrn3:

create undo tablespace undo3 datafile '+DATA' size 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

Create the temporary tablespace for node mrn3:

create temporary tablespace TEMP3 tempfile '+DATA' size 200M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;

Add three more redo log groups:

alter database add logfile thread 3 group 7;

alter database add logfile thread 3 group 8;

alter database add logfile thread 3 group 9;

alter database enable thread 3;
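
You can verify the new thread and its groups from any instance:

select thread#, group#, status from v$log order by thread#, group#;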

Using mrn1, we need to change some parameters for the new database instance:

alter system set sga_target=200M scope=spfile sid='MR11G3';

alter system set instance_name='MR11G3' scope=spfile sid='MR11G3';

alter system set instance_number=3 scope=spfile sid='MR11G3';

alter system set thread=3 scope=spfile sid='MR11G3';

alter system set undo_tablespace='UNDO3' scope=spfile sid='MR11G3';

Register and start the new instance


srvctl add instance -d MR11G -i MR11G3 -n mrn3

srvctl start instance -d MR11G -i MR11G3

Ok, our 3rd node is up and running:

SQL> select instance_number, instance_name, host_name, status from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME   HOST_NAME       STATUS
--------------- --------------- --------------- ------------
              1 MR11G1          mrn1            OPEN
              2 MR11G2          mrn2            OPEN
              3 MR11G3          mrn3            OPEN
