
Installing Oracle Database 10g Release 2 (10.2.0) RAC on Red Hat Enterprise
Linux AS Version 4 Update 5 using VMware Server

One of the biggest obstacles preventing people from setting up test RAC environments is the
requirement for shared storage. In a production environment, shared storage is often provided by
a SAN or high-end NAS device, but both of these options are very expensive when all you want
to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire
disk enclosure to allow two machines to access the same disk(s), but that still costs money and
requires two servers. A third option is to use VMware Server to fake the shared storage.

Using VMware Server you can run multiple Virtual Machines on a single server, allowing you to
run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks,
overcoming the obstacle of expensive shared storage.

1. VMware Server Installation

For this article, I will use Windows XP Professional with Service Pack 2 as the host OS and Red
Hat Enterprise Linux AS Version 4 Update 5 as the guest OS. I have
demonstrated the installation process with screen shots. Detailed explanation will be added
where necessary.
Click the OK button and continue.
Enter the serial number.
Double-click the VMware Server Console icon on your desktop.
Click the OK button.
2. Virtual Machine Setup
Click File > New > Virtual Machine.
Uncheck Make this virtual machine private.
Uncheck Allocate all disk space now and check Split disk into 2 GB files.
Click Edit virtual machine settings.
Click the Add… button.
Select Ethernet Adapter and click the Next button.
Again click Edit virtual machine settings, select the CD-ROM drive, browse to the ISO image and click the OK button.
3. Guest Operating System Installation
Click the Start this virtual machine button.
Click the Yes button.
Click the Proceed button.
Hint: The guest's date and time should be set slightly earlier than the host machine's. This will help to
synchronize the clocks later on.
Click the Continue button.
4. Oracle Installation Prerequisites
Perform the following steps as the root user.
The /etc/hosts file must contain the loopback entry plus the public, private, and virtual addresses of both nodes. The concrete IP addresses depend on your network setup; substitute your own for the placeholders below.

127.0.0.1 localhost.localdomain localhost

# Public
<public-ip> rac1
<public-ip> rac2
# Private
<private-ip> rac1-priv
<private-ip> rac2-priv
# Virtual
<virtual-ip> rac1-vip
<virtual-ip> rac2-vip

Run these commands.

# service sendmail stop

# chkconfig --level 345 sendmail off

Add the following lines to the /etc/sysctl.conf file.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Run the following command to change the current kernel parameters.

/sbin/sysctl -p
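As an illustrative sanity check on these values (not part of the install itself): kernel.shmall is measured in pages while kernel.shmmax is in bytes, so, assuming the default 4 KB page size, the settings above allow 8 GB of shared memory in total and a 2 GB single segment.

```shell
# Illustrative check of the shared-memory arithmetic (4 KB pages assumed).
page_size=4096
shmall=2097152        # pages, from /etc/sysctl.conf above
shmmax=2147483648     # bytes, from /etc/sysctl.conf above
echo "total shared memory: $((shmall * page_size)) bytes"
echo "max single segment:  $shmmax bytes"
```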

Add the following lines to the /etc/security/limits.conf file.

* soft nproc 2047

* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

Add the following line to the /etc/pam.d/login file.

session required /lib/security/

Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as
follows.

SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (Applications > System Settings >
Security Level). Click on the SELinux tab and disable the feature.

Set the hangcheck kernel module parameters by adding the following line to the
/etc/modprobe.conf file.
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

To load the module immediately, execute: modprobe -v hangcheck-timer
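The two parameters work together: the module checks the kernel every hangcheck_tick seconds and resets the node if it has been unresponsive for longer than hangcheck_tick + hangcheck_margin. A small sketch of the arithmetic (illustrative only, not an install step):

```shell
# hangcheck-timer arithmetic: with the values configured above, the
# node self-resets after roughly tick + margin seconds of hang.
hangcheck_tick=30
hangcheck_margin=180
echo "reset after $((hangcheck_tick + hangcheck_margin)) seconds"
```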

Create the new groups and users.

groupadd oinstall
groupadd dba
groupadd oper

useradd -g oinstall -G dba oracle

passwd oracle

Create the directories in which the Oracle software will be installed.

mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mkdir /u02
chown -R oracle:oinstall /u01 /u02
chmod -R 775 /u01 /u02

During the OS installation, both the rsh and rsh-server packages were installed. Enable remote shell and rlogin
by doing the following.

chkconfig rsh on
chkconfig rlogin on
service xinetd reload

Create the /etc/hosts.equiv file as the root user.

touch /etc/hosts.equiv
chmod 600 /etc/hosts.equiv
chown root:root /etc/hosts.equiv

Edit the /etc/hosts.equiv file to include all the RAC nodes:

+rac1 oracle
+rac2 oracle
+rac1-priv oracle
+rac2-priv oracle
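Each line must be exactly "+hostname username"; a malformed line silently breaks the rsh user equivalence that the clusterware installer depends on. A small illustrative check is sketched below; it validates a temporary sample copy so it is safe to run anywhere, but on a real node you would point it at /etc/hosts.equiv instead.

```shell
# Validate hosts.equiv-style lines: one "+host user" pair per line.
# Written against a temporary sample file purely for illustration.
cat > /tmp/hosts.equiv.sample <<'EOF'
+rac1 oracle
+rac2 oracle
+rac1-priv oracle
+rac2-priv oracle
EOF
bad=$(grep -cvE '^\+[A-Za-z0-9.-]+ [A-Za-z0-9]+$' /tmp/hosts.equiv.sample)
echo "malformed lines: $bad"
```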

Login as the oracle user and add the following lines at the end of the .bash_profile file.

# Oracle Settings
TMP=/tmp; export TMP

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
PATH=/usr/sbin:$PATH; export PATH



if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

5. Install VMware Client Tools
Login as the root user on the rac1 virtual machine, then select the "VM --> Install VMware
Tools..." option from the main VMware Server Console menu.

This should mount a virtual CD containing the VMware Tools software. Double-click on the CD
icon labeled "VMware Tools" to open the CD. Right-click on the ".rpm" package and select the
"Open with 'Install Packages'" menu option.
Click the "Continue" button on the "Completed System Preparation" screen and wait for the
installation to complete.
Once the package is loaded, the CD should unmount automatically. You must then run the
VMware Tools configuration script as the root user. The following listing is an example of the output
you should expect.


Stopping VMware Tools services in the virtual machine:

Guest operating system daemon: [ OK ]
Trying to find a suitable vmhgfs module for your running kernel.

The module bld-2.6.9-11.EL-i686up-RHEL4 loads perfectly in the running kernel.

pcnet32 30409 0
Unloading pcnet32 module

Trying to find a suitable vmxnet module for your running kernel.

The module bld-2.6.9-11.EL-i686up-RHEL4 loads perfectly in the running kernel.

Detected version 6.8.

Please choose one of the following display sizes (1 - 13):

[1] "640x480"
[2] "800x600"
[3] "1024x768"
[4] "1152x864"
[5] "1280x800"
[6] "1152x900"
[7] "1280x1024"
[8] "1376x1032"
[9] "1400x1050"
[10] "1680x1050"
[11] "1600x1200"
[12] "1920x1200"
[13] "2364x1773"
Please enter a number between 1 and 13:

[12] 3

X Window System Version 6.8.2

Release Date: 9 February 2005
X Protocol Version 11, Revision 0, Release 6.8.2
Build Operating System: Linux 2.6.9-11.EL i686 [ELF]
Current Operating System: Linux rac1.localdomain 2.6.9-22.EL #1 Sat Oct 8 17:48:27 CDT 2005
Build Date: 07 October 2005
Build Host: x8664-build.home.local

Before reporting problems, check http://wiki.X.Org

to make sure that you have the latest version.
Module Loader present
OS Kernel: Linux version 2.6.9-22.EL (buildcentos@louisa.home.local)
(gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 Sat Oct 8 17:48:27 CDT 2005 P
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(++) Log file: "/tmp/vmware-config0/XF86ConfigLog.3674", Time: Thu Apr 13 21:17:37 2006
(++) Using config file: "/tmp/vmware-config0/XF86Config.3674"

X is running fine with the new config file.

(WW) VMWARE(0): Failed to set up write-combining range (0xf0000000,0x1000000)

Starting VMware Tools services in the virtual machine:
Switching to guest configuration: [ OK ]
Guest filesystem driver: [ OK ]
Guest vmxnet fast network device: [ OK ]
DMA setup: [ OK ]
Guest operating system daemon: [ OK ]

The configuration of VMware Tools e.x.p build-22874 for Linux for this running
kernel completed successfully.

You must restart your X session before any mouse or graphics changes take effect.
You can now run VMware Tools by invoking the following command:
"/usr/bin/vmware-toolbox" during an XFree86 session.

To use the vmxnet driver, restart networking using the following commands:
/etc/init.d/network stop
rmmod pcnet32
rmmod vmxnet
depmod -a
modprobe vmxnet
/etc/init.d/network start


--the VMware team

The VMware client tools are now installed.

6. Time Synchronization

a) As root on rac1, run vmware-toolbox and select the "Time synchronization between the
virtual machine and the host operating system" option. (The sample screen shot shown is from the rac2
machine, just for demonstration.)
b) Edit the /boot/grub/grub.conf file and append "clock=pit nosmp noapic nolapic" to the kernel line.
c) Reboot the machine.

Note: Time Zone of the host and guest operating systems should match.
7. Create Shared Disks

Shut down the rac1 virtual machine using the following command.

# shutdown -h now
Create a directory E:\rac\shared on the host system to hold the shared virtual disks.

On the VMware Server Console, click the "Edit virtual machine settings" button. On the "Virtual
Machine Settings" screen, click the "Add..." button.
Click the “Next” button.

Select the hardware type of "Hard Disk" and click the "Next" button.
Accept the "Create a new virtual disk" option by clicking the "Next" button.
Accept the "SCSI" option by clicking the "Next" button.
Set the disk size to "2.0" GB and uncheck the "Allocate all disk space now" option, then click the
"Next" button.
Set the disk name to "E:\rac\shared\ocr.vmdk" and click the "Advanced" button.
Set the virtual device node to "SCSI 1:0" and the mode to "Independent" and "Persistent", then
click the "Finish" button.
Repeat the previous hard disk creation steps 2 more times, using the following values:

Disk Size: 2.0 GB
File Name: E:\rac\shared\votingdisk.vmdk
Virtual Device Node: SCSI 1:1
Mode: Independent and Persistent

Disk Size: 30.0 GB
File Name: E:\rac\shared\shareddisk.vmdk
Virtual Device Node: SCSI 1:2
Mode: Independent and Persistent

At the end of this process, the virtual machine should look something like the picture below.
Edit the contents of the "E:\rac\rac1\Red Hat Enterprise Linux 4.vmx" file using a text editor,
making sure the following entries are present. Some of the entries will already be present; others will need to be added.
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"

scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "E:\rac\shared\ocr.vmdk"
scsi1:0.deviceType = "plainDisk"
scsi1:0.redo = ""

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "E:\rac\shared\votingdisk.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:1.redo = ""

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "E:\rac\shared\shareddisk.vmdk"
scsi1:2.deviceType = "plainDisk"
scsi1:2.redo = ""

Start the rac1 virtual machine by clicking the "Start this virtual machine" button on the VMware
Server Console. When the server has started, log in as the root user so you can partition the
disks. The current disks can be seen by issuing the following commands.

# cd /dev
# ls sd*
sda sda1 sda2 sdb sdc sdd

Use the "fdisk" command to partition the disks sdb to sdd. The following output shows the
expected fdisk output for the sdb disk.

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.

There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action
e extended
p primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 1 1305 10482381 83 Linux

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.
In each case, the sequence of answers is "n", "p", "1", "Return", "Return", "p" and "w".
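Those interactive answers can also be captured once and piped into fdisk to script the partitioning of all three disks. The sketch below keeps the fdisk loop commented out, since running it rewrites partition tables; the device names are the ones used in this article.

```shell
# Scripted equivalent of the interactive sequence "n, p, 1, Return,
# Return, p, w". The destructive loop is commented out; uncomment it
# only on the rac1 virtual machine as root.
# for disk in /dev/sdb /dev/sdc /dev/sdd; do
#     printf 'n\np\n1\n\n\np\nw\n' | fdisk "$disk"
# done
printf 'n\np\n1\n\n\np\nw\n' | wc -l
```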

Once all the disks are partitioned, the results can be seen by repeating the previous "ls" command.

# cd /dev
# ls sd*
sda sda1 sda2 sdb sdb1 sdc sdc1 sdd sdd1

Edit the /etc/sysconfig/rawdevices file, adding the following lines.

/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1

Restart the rawdevices service using the following command.

service rawdevices restart

Run the following commands:

chown root:oinstall /dev/raw/raw1

chmod 640 /dev/raw/raw1
chown oracle:oinstall /dev/raw/raw2
chmod 640 /dev/raw/raw2

Create the file /etc/udev/permissions.d/49-oracle.permissions and add the following lines to it, so that the ownership and permissions set above persist across reboots (the entries mirror the chown/chmod commands above):

# OCR
raw/raw1:root:oinstall:0640
# Voting Disks
raw/raw2:oracle:oinstall:0640
8. Clone the Virtual Machine

Shut down the rac1 virtual machine using the following command.

# shutdown -h now

Copy the contents of the rac1 virtual machine into "E:\rac\rac2".

Edit the contents of the "E:\rac\rac2\Red Hat Enterprise Linux 4.vmx" file, making the following change.
displayName = "rac2"

In the VMware Server Console, select the File > Open menu options and browse for the
"E:\rac\rac2\Red Hat Enterprise Linux 4.vmx" file. Once opened, the rac2 virtual machine is
visible on the console. Start the rac2 virtual machine by clicking the "Start this virtual machine"
button and click the "Always Create" button on the subsequent "Question" screen.

Ignore any errors during the server startup. We are expecting the networking components to fail
at this point.

Log in to the rac2 virtual machine as the root user and start the "Network Configuration" tool
(Applications > System Settings > Network).

Highlight the "eth0" interface, click the "Edit" button on the toolbar, and alter the IP address to
the rac2 public IP address in the resulting screen.

Click on the "Hardware Device" tab and click the "Probe" button. Then accept the changes by
clicking the "OK" button.

Repeat the process for the "eth1" interface, this time setting the IP address to the rac2 private IP address.

Click on the "DNS" tab and change the host name to "rac2", then click on the "Devices" tab.

Once you have finished, save the changes (File > Save) and activate the network interfaces by
highlighting them and clicking the "Activate" button.

9. Install and Configure Oracle Cluster File System (OCFS2)

1) Install OCFS2 --> both nodes

I will install the OCFS2 RPMs onto both RAC nodes. The installation process is simply a matter of
running the following command
on both Oracle RAC nodes in the cluster as the root user:

# rpm -ivh ocfs2-2.6.9-55.ELsmp-1.2.7-1.el4.i686.rpm \
       ocfs2console-1.2.7-1.el4.i386.rpm \
       ocfs2-tools-1.2.7-1.el4.i386.rpm

Preparing... ########################################### [100%]

1:ocfs2-tools ########################################### [ 33%]
2:ocfs2-2.6.9-55.ELsmp ########################################### [ 67%]
3:ocfs2console ########################################### [100%]

2)Disable SELinux (RHEL4 U2 and higher) --> both nodes

a) /usr/bin/system-config-securitylevel

b) Now, click the SELinux tab and uncheck the "Enabled" checkbox.
After clicking [OK], you will be presented with a warning dialog.
Simply acknowledge this warning by clicking "Yes".

c) After making this change on both nodes in the cluster, each node will need
to be rebooted to implement the change

3) Configure OCFS2 --> both nodes

This will need to be done on both Oracle RAC nodes in the cluster as the root user:

# ocfs2console

Using the ocfs2console GUI tool, perform the following steps:

1) Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack.
Acknowledge this Information dialog box by clicking [Close].
You will then be presented with the "Node Configuration" dialog.

2) On the "Node Configuration" dialog, click the [Add] button.

This will bring up the "Add Node" dialog.

3) In the "Add Node" dialog, enter the host name and IP address for the first node in the cluster.
Leave the IP Port set to its default value of 7777. In my example, I added both nodes, using
rac1 with its private IP address for the first node and rac2 with its private IP address for the second node.
Note: The node name you enter must match the hostname of the machine, and the IP addresses
must be those of the private interconnect.
Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active" .
Click [Close] on the "Node Configuration" dialog.
After verifying all values are correct, exit the application using [File] -> [Quit].
This needs to be performed on both Oracle RAC nodes in the cluster

4) After exiting the ocfs2console, you will have an /etc/ocfs2/cluster.conf similar to the following.
This process needs to be completed on both Oracle RAC nodes in the cluster, and the file
should be exactly the same on both nodes. (Each node: stanza carries that node's private IP address, which depends on your network setup.)

node:
        ip_port = 7777
        ip_address = <rac1 private IP>
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = <rac2 private IP>
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
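As an illustrative consistency check, node_count must agree with the number of node stanzas in the file. The sketch below runs against a hypothetical sample copy so it can be tried anywhere; on a real node you would point the commands at /etc/ocfs2/cluster.conf instead.

```shell
# Sketch: verify that cluster.conf's node_count matches the number of
# node stanzas. Uses a sample file purely for illustration.
cat > /tmp/cluster.conf.sample <<'EOF'
node:
        ip_port = 7777
        number = 0
        name = rac1
        cluster = ocfs2
node:
        ip_port = 7777
        number = 1
        name = rac2
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2
EOF
stanzas=$(grep -c '^node:' /tmp/cluster.conf.sample)
declared=$(awk '/node_count/ {print $3}' /tmp/cluster.conf.sample)
echo "node stanzas: $stanzas, node_count: $declared"
```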

4) O2CB Cluster Service --> both nodes

Before we can do anything with OCFS2 like formatting or mounting the file system,
we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result
of the configuration process performed above). The stack includes the following services:

NM: Node manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners and their status
CONFIGFS: User space driven configuration file system mounted at /config
DLMFS: User space interface to the kernel space DLM

You can check the status of the stack as follows:

# /etc/init.d/o2cb status

Module "configfs": Loaded

Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

5) Configure O2CB to Start on Boot and Adjust O2CB Heartbeat Threshold --> both nodes

All of the tasks within this section will need to be performed on both nodes in the cluster.

Set the on-boot properties as follows:

# /etc/init.d/o2cb offline ocfs2

# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.

The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y

Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 30000
Specify network keepalive delay in ms (>=1000) [2000]: 2000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Now activate it

# /etc/init.d/o2cb load
# /etc/init.d/o2cb online ocfs2
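The heartbeat dead threshold entered above is measured in 2-second iterations; per the OCFS2 1.2 documentation, a node is declared dead after (threshold - 1) * 2 seconds without a disk heartbeat. The arithmetic behind choosing 61 rather than the default 31, as a worked example rather than an install step:

```shell
# O2CB heartbeat math: timeout = (threshold - 1) * 2 seconds.
# 61 gives virtualized disk I/O twice as long as the default 31.
for threshold in 31 61; do
    echo "threshold $threshold -> $(( (threshold - 1) * 2 )) seconds"
done
```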

6) Format the OCFS2 File System --> 1-Node (rac1)

a) Unlike the other tasks in this section, creating the OCFS2 file system should only be executed
on one of the nodes in the RAC cluster. I will be executing all commands in this section from rac1.

b) If the O2CB cluster is offline, start it. The format operation needs the cluster to be online,
as it needs to ensure that the volume is not mounted on some node in the cluster.

# /etc/init.d/o2cb load
# /etc/init.d/o2cb online ocfs2

# mkfs.ocfs2 -b 4K -C 256K -N 4 -L dbfiles /dev/sdd1
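For reference, the flags mean: -b 4K block size, -C 256K cluster size (the allocation unit), -N 4 node slots (the maximum number of nodes that can mount the volume simultaneously), and -L the volume label used later for mounting. A quick illustrative calculation of how many allocation clusters the roughly 30 GB shared disk yields at that cluster size:

```shell
# Cluster count for a 30 GB volume with 256 KB clusters (illustrative).
disk_bytes=$((30 * 1024 * 1024 * 1024))
cluster_bytes=$((256 * 1024))
echo "clusters: $((disk_bytes / cluster_bytes))"
```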

7)Mount the OCFS2 File System --> both nodes

Mounting the file system will need to be performed on both nodes in the Oracle RAC cluster
as the root user account using the OCFS2 label dbfiles!

First, here is how to manually mount the OCFS2 file system from the command-line.
Remember that this needs to be performed as the root user account:

# mount -t ocfs2 -o datavolume,nointr -L "dbfiles" /u02

If the mount was successful, you will simply get your prompt back. We should, however,
run the following checks to ensure the file system is mounted correctly.
Let's use the mount command to ensure that the new file system is really mounted.
This should be performed on both nodes in the RAC cluster:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdd1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

8)Configure OCFS2 to Mount Automatically at Startup --> both nodes

We start by adding the following line to the /etc/fstab file
on both nodes in the RAC cluster:

LABEL=dbfiles /u02 ocfs2 _netdev,datavolume,nointr 0 0

9)Check Permissions on New OCFS2 File System --> both nodes

Use the ls command to check ownership. The permissions should be set to 0775 with
owner "oracle" and group "oinstall".

Let's first check the permissions:

# ls -ld /u02
drwxr-xr-x 3 root root 4096 Sep 3 00:42 /u02

As we can see from the listing above, the oracle user account (and the oinstall group) will
not be able to write to this directory. Let's fix that:

# chown oracle:oinstall /u02

# chmod 775 /u02

Let's now go back and re-check that the permissions are correct for both Oracle RAC nodes in
the cluster:

# ls -ld /u02
drwxrwxr-x 3 oracle oinstall 4096 Sep 3 00:42 /u02

10)Create Directory for Oracle Clusterware Files --> 1 node (rac1)

The following tasks only need to be executed on one of the nodes in the RAC cluster.

I will be executing all commands in this section from rac1 only.

# mkdir -p /u02/oradata
# chown -R oracle:oinstall /u02/oradata
# chmod -R 775 /u02/oradata
# ls -l /u02/oradata
total 4

drwxrwxr-x 2 oracle oinstall 4096 Sep 3 00:45 orcl

11)Reboot Both Nodes --> both nodes

Before starting the next section, this would be a good place to reboot both of the nodes in the
RAC cluster.
When the machines come up, ensure that the cluster stack services are being loaded and the
new OCFS2
file system is being mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdd1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

If you modified the O2CB heartbeat threshold, you should verify that it is set correctly:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
61

12)How to Determine OCFS2 Version --> both nodes

To determine which version of OCFS2 is running, use:

# cat /proc/fs/ocfs2/version
OCFS2 1.2.7 Tue Oct 9 16:15:42 PDT 2007 (build d443ce77532cea8d1e167ab2de51b8c8)

The shared disks are now configured.

Edit the /home/oracle/.bash_profile file on the rac2 node to correct the ORACLE_SID value.


Start the rac1 virtual machine and restart the rac2 virtual machine. While starting up, the "Kudzu"
detection screen may be displayed. Press a key and accept the configuration change on the
following screen.

When both nodes have started, check they can both ping all the public and private IP addresses
using the following commands.

ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv

At this point the virtual IP addresses defined in the /etc/hosts file will not work, so don't bother
testing them. It is a good idea to make a consistent backup of this virtual environment: shut down
both RAC nodes and compress the main rac folder on the E: drive. The virtual machine setup is
now complete.

Note: You can also configure ocfs2 on one node before cloning the virtual machine.
10. Oracle Clusterware and DB Installation
OCR home: /u01/crs/oracle/product/10.2.0/crs
OCR Location: /dev/raw/raw1
Voting Disk Location: /dev/raw/raw2
Oracle Software Home: /u01/app/oracle/product/10.2.0/db_1
Database Files location: /u02/oradata