Rac10gR2OnAIX

1. Introduction
   1.1. What you need to know
        1.1.1. Software required for install
        1.1.2. Processor Model
   1.2. Installation steps
   1.3. Schematic
        1.3.1. Hardware/software configuration BEFORE Oracle software install
        1.3.2. Hardware/software configuration AFTER Oracle software install
   1.4. Installation Method
2. Prepare the cluster nodes for Oracle RAC
   2.1. User Accounts
   2.2. Check AIX Filesets
   2.3. Check For Required APARS
   2.4. Check Shell Limits
   2.5. Check System and Network Parameters
   2.6. Check Network Interfaces
   2.7. Check RSH or SSH Configuration
   2.8. Create Symbolic Link for VIP Configuration Assistant
   2.9. Set Environment Variables
   2.10. Run rootpre.sh
4. Oracle Clusterware Installation and Configuration
   4.1. Oracle Clusterware Install
   4.2. Verifying Oracle Clusterware Installation
5. Oracle Clusterware patching
   5.1. Patch Oracle Clusterware to 10.2.0.3
10. Oracle RAC Database Home Software Install
   10.1. Install RAC 10.2.0.1
11. Oracle RAC Software Home Patching
   11.1. Patch Oracle RAC to 10.2.0.3
12. Oracle RAC Database Creation
   12.1. Oracle RAC Listeners Configuration
   12.2. Oracle RAC Database Configuration
        12.2.1. Oracle RAC Database Instance has been created
Rac10gR2OnAIX
1. Introduction
GPFS is IBM's shared (cluster) filesystem and stands for General Parallel Filesystem. It will be used to store the Oracle Clusterware files (OCR and Vote) and the Oracle database files. At the time of this deployment the certified version of GPFS was 2.3; however, GPFS 3.1 has since been certified for 10.2.0.3 and above. Prior to commencing your install, check which technology components are certified at
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html
This document covers the install on a 64-bit kernel for a P-Series server.
Note that all 10gR2 installations require a 64-bit kernel.
• Preparation
♦ Install the Oracle Clusterware (using the push mechanism to install on the other nodes in the
cluster)
♦ Patch the Clusterware to the latest patchset
• Establish RAC Database
♦ Install an Oracle Software Home for RAC Database
♦ Patch the RAC Database Home to the latest patchset
♦ Create the RAC Database Instances
1.3. Schematic
• dbpremia1
• dbpremia2
The table below provides more information about the end result of the installation:
• Voting Disks
• Oracle Cluster Registry (OCR)
• Oracle RAC Database Files
# smit user
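As an alternative to the smit menus, the oracle user and its groups can be created directly from the command line. The group names, numeric IDs and home directory shown below are illustrative assumptions; substitute the values used at your site:
# mkgroup id=501 oinstall                # inventory group (name and id are assumptions)
# mkgroup id=502 dba                     # OSDBA group (name and id are assumptions)
# mkuser id=501 pgrp=oinstall groups=dba home=/home/oracle oracle
# passwd oracle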
Since HACMP will not be used for this installation, use the following commands to ensure that it is not installed. If it is, it must be removed before proceeding.
#lslpp -l cluster.es.*
#lslpp -l rsct.hacmp.rte
#lslpp -l rsct.compat.basic.hacmp.rte
#lslpp -l rsct.compat.clients.hacmp.rte
The maximum number of processes for the oracle user should be 2048 or greater.
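The limit can be checked and, if necessary, raised as the root user on each node; a short sketch (2048 matches the requirement above):
# lsattr -E -l sys0 -a maxuproc          # display the current per-user process limit
# chdev -l sys0 -a maxuproc=2048         # raise it to 2048 if it is currently lower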
Use the following command to verify the network parameters, match these against those documented in the
10g RAC installation guide for AIX.
#/usr/sbin/no -a | more
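Individual parameters can also be set with 'no -o'. The sketch below uses the values commonly recommended in the 10gR2 install guide for AIX; treat them as assumptions and confirm them against the guide for your release, and use the persistence options documented for your AIX level so the settings survive a reboot:
# /usr/sbin/no -o udp_sendspace=65536
# /usr/sbin/no -o udp_recvspace=655360
# /usr/sbin/no -o tcp_sendspace=65536
# /usr/sbin/no -o tcp_recvspace=65536
# /usr/sbin/no -o rfc1323=1
# /usr/sbin/no -o sb_max=1310720
# /usr/sbin/no -o ipqmaxlen=512          # load-time parameter; typically requires -r and a reboot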
The environment that this document is based on had the following configuration:
Below is a sample entry of each file for a particular node. These entries should be replicated to all nodes, in both /etc/hosts.equiv and the .rhosts files of the root and oracle operating system accounts.
Sample hosts.equiv and .rhosts entries:
dbpremia1 root
dbpremia2 root
dbpremia1 oracle
dbpremia2 oracle
The value for hostname was each node's public and private alias in turn. The command should succeed with each alias without prompting for a password. You should also be able to rsh to the node itself using both the public and private aliases.
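A simple way to test the user equivalence is to run a remote command against every alias from each node, as both the root and oracle users. The private alias names used here are assumptions; use whatever aliases are defined in your /etc/hosts:
$ rsh dbpremia1 date                     # public alias
$ rsh dbpremia1-priv date                # private alias (assumed name)
$ rsh dbpremia2 date
$ rsh dbpremia2-priv date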
In addition, enter the following lines into the oracle user's .profile file:
if [ -t 0 ]; then
stty intr ^C
fi
# ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_07-10-08.15:28:19
Saving the original files in /etc/ora_save_07-10-08.15:28:19....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc
Invoke the Oracle Universal Installer from the second DVD of the 10g media pack, as shown below (a command sketch follows the notes).
♦ Notice that you will be asked whether you have run the rootpre.sh script
• Actions
♦ This was already run as a prerequisite step in the previous chapter
♦ Enter Y to proceed
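As a sketch, the installer is typically started as the oracle user from the clusterware directory of the DVD; the DISPLAY value and mount point shown here are assumptions:
$ export DISPLAY=<workstation>:0.0       # X display used to show the OUI screens
$ cd /cdrom/clusterware                  # DVD mount point and directory are assumptions
$ ./runInstaller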
You will then be presented with a screen for specifying the details of the oracle inventory. Specify the path
and the operating system group as seen in figure 2.
♦ Specify the location where the oracle inventory will be created
♦ Specify the operating system group name that will have permissions to modify entries in the inventory
• Actions
Next specify the ORACLE_HOME details for the Oracle Clusterware installation, as seen below
• Notes
♦ Specify a location for where the clusterware will be installed. This will be the
ORACLE_HOME for CRS.
♦ Specify a name which will act as a reference for this installation
• Actions
Once this is done, the Product-specific checks screen appears. It may fail for a few filesets in the O/S check; this can safely be ignored, as the OUI checks for all component dependencies, such as HACMP and the filesets required for ASM, which are not needed for this particular installation.
Next you will be presented with the cluster configuration screen, where details about the nodes participating in the cluster have to be specified. If any nodes do not appear here, add them as necessary.
♦ Information for only one node (dbpremia1) is displayed (this is the node from which runInstaller was invoked)
♦ We need to add information relevant to all nodes in our cluster
• Actions
♦ Click the 'add' button to enter details for the node dbpremia2
• Notes
♦ Clicking the add button results in the add node dialog box appearing
• Actions
♦ Enter details for the public, private and virtual node names and IP addresses of the additional node, dbpremia2
Next is the 'specify network interface usage' screen, as shown below. Here we will be using en0 as the public interface and en1 as the private interface.
The 'specify network interface usage' screen also shows an additional network interface, en2 (the GPFS private interface). Since we do not require this, we click edit and select not to use this interface. The figure below shows the result: notice how the 'interface type' column now shows 'Do Not Use'.
♦ The next step is to specify the location for the oracle clusterware component (OCR)
• Actions
Similar to the previous screen, the screen below depicts the location and file name specified for the
clusterware component – voting disk.
Finally, a screen displaying the summary of the tasks the OUI will perform is displayed; after this the installation commences.
Once the installation reaches the end, a pop-up dialog box appears, which requires the following scripts to be run on each node in the RAC configuration:
• orainstRoot.sh
• root.sh
From dbpremia1:
# ./root.sh
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
From dbpremia2:
# ./root.sh
WARNING: directory '/oracrs/product/oracle' is not owned by root
WARNING: directory '/oracrs/product' is not owned by root
WARNING: directory '/oracrs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
"The given interface(s), "en0" is not public. Public interfaces should be used to configure virtual IPs"
Is expected due to the network chosen for the public interface – see metalink note id: The workaround for this
is to run 'vipca' from the last node as the root user. The screens below, show vipca being executed as the root
user out of the crs ORACLE_HOME.
• Notes
♦ The error returned during the run of root.sh on our second node requires that vipca be rerun as the root user. Failure to do this will result in the VIPs not being assigned to the public interfaces of each node.
• Actions
♦ Invoke vipca from the command line as the root user. The first screen displayed is the
welcome screen as seen above.
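A sketch of the vipca invocation; the DISPLAY value is a placeholder and the CRS home path is the clusterware ORACLE_HOME chosen earlier:
# export DISPLAY=<workstation>:0.0
# cd <CRS_ORACLE_HOME>/bin               # the clusterware home chosen earlier
# ./vipca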
Next, specify the network interface that the VIPs will use; this will be the same as the public interface, en0.
Then details about the VIP are specified on the next screen, as shown above: the IP Alias Name and the corresponding IP address for each node.
Once the assistants complete successfully, another summary screen is displayed detailing what has been done; see below.
To verify that the VIPs have been configured correctly, run the 'ifconfig -a' command and check that the public interface has the VIP address assigned to it. The output below shows that the IP 192.168.38.183 has been assigned to the public interface en0 in addition to the public IP address of 192.168.38.83 for node dbpremia1. A further check would be to ping the VIP of the second node, dbpremia2, to see if it is up.
$ ifconfig -a
en0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD
(ACTIVE),PSEG,LARGESEND,CHAIN>
inet 192.168.38.83 netmask 0xffffff00 broadcast 192.168.38.255
inet 192.168.38.183 netmask 0xffffff00 broadcast 192.168.38.255
tcp_sendspace 131072 tcp_recvspace 65536
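To check the second node's VIP, it can simply be pinged from the first node. The VIP alias name used below is an assumption; use the IP alias name registered for dbpremia2's VIP (or its IP address):
$ ping -c 3 dbpremia2-vip                # VIP alias name is an assumption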
$ $CRS_ORACLE_HOME/bin/olsnodes
dbpremia1
dbpremia2
This confirms that the nodes dbpremia1 and dbpremia2 are part of the cluster configuration.
Next we can check whether the clusterware daemons (crsd, evmd and cssd) are running by issuing the following ps command on each node.
From dbpremia1:
[ps output for node dbpremia1]
From dbpremia2:
[ps output for node dbpremia2]
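A sketch of such a check (the exact process names can vary slightly by release, but crsd.bin, evmd.bin and ocssd.bin should all appear):
$ ps -ef | egrep 'crsd|evmd|cssd' | grep -v grep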
Additionally, we can use the clusterware control utility 'crsctl' on each node to verify that the Oracle Clusterware is up and healthy.
From dbpremia1:
[crsctl output for node dbpremia1]
From dbpremia2:
[crsctl output for node dbpremia2]
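A sketch of the crsctl check; on a healthy 10.2 node the output should report each daemon as healthy (the exact wording may vary slightly by patch level):
$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy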
We can use the ‘crs_stat’ command to show the entire status of the cluster – this command needs to be
executed from one node only, as it shows the entire cluster status:
$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ia1.gsd application ONLINE ONLINE dbpremia1
ora....ia1.ons application ONLINE ONLINE dbpremia1
ora....ia1.vip application ONLINE ONLINE dbpremia1
ora....ia2.gsd application ONLINE ONLINE dbpremia2
ora....ia2.ons application ONLINE ONLINE dbpremia2
ora....ia2.vip application ONLINE ONLINE dbpremia2
At this point we should also verify that the correct network interfaces have been chosen for the RAC
environment via the ‘oifcfg’ utility as follows (run from only one node):
$ oifcfg getif
en0 192.168.38.0 global public
en1 10.0.0.0 global cluster_interconnect
$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 2004
Available space (kbytes) : 260116
ID : 1923955616
Device/File Name : /premia/crsdata/ocrfile/ocr_file.dbf
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
There is a requirement to run the 'slibclean' utility on each AIX node where the patch will be applied; this utility should be run as the root user. Next, the 10.2.0.3 patchset is extracted to a staging area and the Oracle Universal Installer (OUI) is invoked from that stage, as sketched below.
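A sketch of the sequence; the staging path below is an assumption, and slibclean is run as root while the installer is started as the oracle user:
# slibclean                              # run as root on each node
$ cd /stage/10203/Disk1                  # staging path is an assumption
$ ./runInstaller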
After the patchset welcome screen, we are presented with the 'Specify Home Details' screen, seen below. Here
we must select the ORACLE_HOME that corresponds to our existing oracle clusterware software installation.
This home should be available from the drop down list.
Finally we will be presented with the summary screen with the list of actions to be performed by this patchset:
Once the patchset application is complete, we will be presented with a few manual tasks that need to be
performed as the root user.
During the running of the root102.sh script, various 'TOC'-related warnings will be generated; these can safely be ignored. The screen below shows the same for dbpremia2.
Once the root102.sh scripts complete on both nodes, return to the patchset 'End of Installation' screen and
click exit to complete the 10.2.0.3 patchset application to oracle clusterware.
To verify that the clusterware has been upgraded to version 10.2.0.3 the following checks can be performed
From dbpremia1:
From dbpremia2:
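One way to check the version from each node is with crsctl queries along the following lines (a sketch; the output shown is indicative):
$ crsctl query crs softwareversion
CRS software version on node [dbpremia1] is [10.2.0.3.0]
$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]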
If the clusterware needs to be started execute the following command as the root user from both nodes
# crsctl start crs
As with the patchset installation, also be sure to run the 'slibclean' utility on both nodes as the root user. Next, run the Oracle Universal Installer from the database directory of the first DVD. As before, you will be presented with the 'Specify Home Details' screen; be sure to enter a new home name and path, since the Oracle RAC binaries must be located in a separate ORACLE_HOME from the Oracle Clusterware home, see below.
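Before that screen appears, the installer is started from the database directory of the DVD, for example (mount point assumed):
$ cd /cdrom/database                     # DVD mount point is an assumption
$ ./runInstaller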
If the oracle clusterware is up, you should then be presented with a screen listing both nodes as part of the
installation – ensure both are selected, as shown below.
As with the clusterware installation, the OUI will perform its own checks and fail for certain filesets; as before, this can be safely ignored. A list of the OUI checks is presented below, showing which checks were successful and which failed checks were ignored.
After this, standard screens appear which have not been included, as they are self-explanatory. We skip ahead to the end of the installation, where a dialog box pops up and asks for the root.sh script to be executed on each node in the cluster. This is shown below.
From dbpremia1:
# ./root.sh
Running Oracle10 root.sh script...
From dbpremia2:
# ./root.sh
Running Oracle10 root.sh script...
Once the scripts have been run on both nodes, the 10g RAC software installation is complete and you should be presented with the 'End of Installation' screen, as shown below.
We will now proceed by applying the 10.2.0.3 patch set to this ORACLE_HOME.
• Notes
Next proceed through the standard screens to commence the patchset application. Once this completes, the
configuration assistants screen appears.
Run root.sh on both nodes before proceeding with the next step, which is to create the network listeners and the RAC database.
The screens that now follow are the standard screens seen when creating a listener.
Since there are no listeners present, the only option we have is the default which is to add a listener to our
configuration.
Once the creation process is complete, you should see messages similar to those shown in the screen above on
the xconsole/xterm from where netca was invoked.
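To confirm that the new listeners have been registered with the clusterware, the nodeapps status can be queried from either node; a sketch with indicative output:
$ srvctl status nodeapps -n dbpremia1
VIP is running on node: dbpremia1
GSD is running on node: dbpremia1
Listener is running on node: dbpremia1
ONS daemon is running on node: dbpremia1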
Below is the welcome screen displayed once dbca is invoked; be sure to select the Real Application Clusters database option and click Next to proceed.
As with previous screens that show node selection, ensure that both nodes are selected before clicking next.
Next we specify the database global name and a SID prefix for our database.
After this, standard screens appear as they do for a single-instance database; those screens have been omitted.
Finally once the database has been created, a dialog box will pop up indicating the cluster database is now
being brought up.
From the output above, one can see that all services on each node are up and running. A similar result can be seen by issuing 'crs_stat -t' from any node. Also note how each listener has the node name of its host appended to its name. Completing this step finishes the process of installing 10g RAC on AIX using GPFS as the storage option.
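As a final check, the instance status can also be queried with srvctl. The database name below is a placeholder for the global name chosen during database creation:
$ srvctl status database -d racdb        # database name is a placeholder
Instance racdb1 is running on node dbpremia1
Instance racdb2 is running on node dbpremia2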