
Oracle E-Business Suite Release 11i with 9i RAC

Installation & Configuration Using AutoConfig

This document describes the steps required to install and set up an Oracle Applications Release 11i (11.5.10) environment with an Oracle Database 9i Release
2 (9.2.0.8) Real Application Clusters (RAC) database.
This document is divided into the following sections:

Section 1: Overview
Section 2: Environment
Section 3: Pre-requisites for RAC Conversion
Section 4: Installation/Configuration
Section 5: References

Section 1: Overview
Oracle E-Business Suite 11i (Release 11.5.10) can be configured in a number of ways depending on business requirements such as uptime, hardware sizing, and availability. This document outlines instructions for installing and setting up Oracle E-Business Suite 11i (Release 11.5.10.2) with Oracle Database 9i (9.2.0.8) Real Application Clusters on the Red Hat Linux platform.
The instructions in this document are specific to Oracle E-Business Suite 11i and generic across UNIX platforms. For the Windows platform, substitute the appropriate syntax where necessary.
It is assumed that the reader has knowledge of Oracle Database 9i, Real Application Clusters (RAC), and Oracle E-Business Suite Release 11i.
Concurrent Processing (CP) requires additional configuration in an 11i RAC environment. See Section 4.6 in this document for details on configuring CP.

System administrators are strongly advised to take a complete environment backup before executing these procedures, and to take frequent backups at
multiple stages of the migration. Test these procedures in a test environment before executing them in production. Ask users to log off the system before
applying these changes.

Section 2: Environment
The logical configuration used for creating this document is illustrated in the figure below. Oracle E-Business Suite 11i (11.5.10.2) with a
9.2.0.6 database was deployed using Rapid Install.

2.1 Software/Hardware Configuration


The versions of software and hardware used for this installation are listed below. The architecture described in this document is one possible sample
configuration. For more details on reference architectures, refer to MetaLink Note 285267.1.

Software Component                    Version
Oracle E-Business Suite Release 11i   Release 11.5.10.2 (Production release) with Consolidated Update 2 (CU2)
Oracle9i                              Release 9.2.0.8 (Production release)
Oracle Cluster Manager                Release 9.2.0.8 (Production release)
Oracle9i Real Application Clusters    Release 9.2.0.8 (Production release)
Linux                                 RHEL AS 3.0 (Kernel version 2.4.21-15.ELsmp)

2.2 ORACLE_HOME Nomenclature


The following ORACLE_HOMEs are referred to in this document:

ORACLE_HOME        Purpose
OLD_ORACLE_HOME    Database ORACLE_HOME installed by Rapid Install
NEW_ORACLE_HOME    Database ORACLE_HOME installed for 9i RAC
806 ORACLE_HOME    ORACLE_HOME installed by Rapid Install on the Application Tier

Section 3: Pre-requisites for Conversion


You must complete the following steps in your environment prior to conversion. For more details refer to Oracle9i Real Application Clusters Setup and
Configuration Guide.
Set up Cluster
Connect the required number of nodes to the cluster interconnect and the shared storage subsystem.
Install the cluster software and any required Oracle operating system-dependent (OSD) patches, such as the Oracle UDLM patch for Sun
Clusters. For UNIX platforms, refer to your vendor's operating system-dependent documentation for instructions on installing the
cluster software. For Sun clusters, also install the Oracle UDLM patch from the first CD of the Oracle9i Enterprise Edition CD set.
Configure your cluster by adding the desired number of nodes.
Start up the clusterware on all nodes of your cluster.
Set up Shared Storage
If your platform supports a cluster file system, set up the cluster file system on shared storage. For instructions on setting up the cluster file
system on Windows, refer to Appendix A: Setup Cluster File System (CFS) on Windows.
If your platform does not support a cluster file system or you want to use raw devices for database files for performance reasons, then
install the vendor specific logical volume manager (for example, Veritas Cluster Volume Manager) and set up raw devices on shared disks.
Start up the shared storage management components such as Logical Volume Manager, Veritas Volume Cluster Manager, and so on.
See Also: Storage vendor-specific documentation for setting up the shared disk subsystem and for information about how to mirror and
stripe these disks.
Complete Rapid Install of Oracle Applications
Note: If you are not using raw devices as shared storage, you can specify the cluster file system location for your datafiles during Rapid Install.

Complete a Rapid Install of Oracle E-Business Suite Release 11i (Release 11.5.10.2) if you do not have an existing single-instance
environment.
Migrate all the data files to the shared storage configured in the previous step.
Apply the following patches to your environment before starting the conversion.
Oracle Applications patches:

Patch Number   Description
3453499        11i.ADX.F
4712852        Minipack 11i.AD.I.4
4676589        11i.ATG_PF.H RUP4
4022732        11.5.10: SFM UNABLE TO PROCESS ORDERS IN RAC CONFIG
5225940        POST ADX-F FIXES

Note: Download the above patches for your operating system. Read the README associated with each of these patches for any prerequisite
patches and special instructions. Execute AutoConfig on all tiers in your environment after applying these patches.

Section 4: Installation/Configuration
The following steps are required to convert Oracle E-Business Suite 11i to 9i RAC.

4.1 Install Oracle Cluster Manager
4.2 Install Oracle 9i (9.2.0.4) and upgrade the database to 9.2.0.8
4.3 Enable AutoConfig on the Database Tier for Oracle E-Business Suite 11i
4.4 Convert the Oracle E-Business Suite 11i single instance to Oracle 9i RAC
4.5 Establish the Oracle E-Business Suite 11i Applications environment with RAC
4.6 Configure Parallel Concurrent Processing

4.1 Install Cluster Manager


Note: This section is for UNIX only. For instructions on installing Cluster Manager on Windows, refer to Appendix B: Install Cluster Manager on
Windows. Cluster Manager must be installed on all database nodes that are part of the cluster. In this configuration, Cluster Manager
has been installed on host4 and host5 as per Figure 1-1 above.
Pre-installation tasks for installing Cluster Manager
Check the version of the binutils package on your Linux system using the following command:
rpm -qa | grep -i binutils
The version must be 2.11.90.0.8-12 or higher; otherwise apply patch 2414946.
If you are on Linux kernel 2.4.9-e.12enterprise or higher, the hangcheck-timer module is already included; otherwise install this module
by applying patch 2594820.
Create a UNIX account for Oracle in the dba group.
Add cluster node entries to the host files.
Edit /etc/hosts and /etc/hosts.equiv on each node with the cluster public and private interconnect addresses, for example:

10.21.121.143 host4 #Oracle 9i RAC node 1 - public network
10.21.121.144 host5 #Oracle 9i RAC node 2 - public network
1.1.1.1 int-host4 #Oracle 9i RAC node 1 interconnect
1.1.1.2 int-host5 #Oracle 9i RAC node 2 interconnect

Verify that the "rsh" package is installed on your hosts by using rpm -qa | grep -i rsh.
Verify the kernel parameter settings required for the database installation as per the Oracle9i Installation Guide Release 2 (Part No:
A96167-01) and the Oracle9i Release Notes Release 2 (9.2.0.4.0) for Linux (Part No: B13670-01).
Verify the settings of environment variables as per the Oracle9i Installation Guide Release 2 (Part No: A96167-01).
Verify the setup above by executing the verification script InstallPrep.sh. Refer to MetaLink Note 189256.1 for this script.
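The manual checks above can be scripted. A minimal sketch, assuming a GNU userland where `sort -V` is available; the `version_ge` helper name is our own, and the rpm queries from the text would be wrapped around it in practice:

```shell
#!/bin/sh
# Hedged sketch of the binutils version check described above.
# The required minimum (2.11.90.0.8-12) comes from the text.

# Compare two dotted/dashed version strings; prints "ok" if $1 >= $2.
version_ge() {
  # sort -V orders version strings; if the required version sorts
  # first (or equal), the installed version is new enough.
  first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
  [ "$first" = "$2" ] && echo ok || echo too_old
}

# In practice the first argument would come from:
#   rpm -qa | grep -i binutils
version_ge "2.14.90.0.4-26" "2.11.90.0.8-12"   # new enough
version_ge "2.10.0.18-1"    "2.11.90.0.8-12"   # would need patch 2414946
```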
Install the 9.2.0.4 ORACM (Oracle Cluster Manager)
Note: You can download the Oracle Database 9i (9.2.0.4) software from the Oracle website at
http://www.oracle.com/technology/software/products/oracle9i/index.html. You can get the Oracle Database 9.2.0.8 patch set on
OracleMetaLink: after logging in, click "Patches" in the menu on the left of the screen, and use quick links or advanced search to
find the 9.2.0.8 patch set.
cd to the 9.2.0.4 Disk1 stage area and start runInstaller.
On the "File Locations" screen, verify the destination listed for your NEW_ORACLE_HOME (9.2.0.4) directory, and enter a name to identify
this ORACLE_HOME. You can choose any appropriate name.
Choose Oracle Cluster Manager from the available products.
For the public node, enter the public alias specified in /etc/hosts, e.g. host4.
For the private node, enter the private alias specified in /etc/hosts, e.g. int-host4.
Press "Install" at the "Summary" screen and complete the installation.
Note: Check that oracm/admin/cmcfg.ora exists under your NEW_ORACLE_HOME. Copy cmcfg.ora to all the other nodes in the cluster
using the rcp command, and ensure that the file contains your public/private aliases.
Upgrade the Oracle Cluster Manager (ORACM) to 9.2.0.8
Download the Oracle Database 9.2.0.8 patch from Oracle Metalink.
Unzip and untar the patch.
Set ORACLE_HOME to NEW_ORACLE_HOME and LD_LIBRARY_PATH=$NEW_ORACLE_HOME/lib
Run Oracle Universal Installer from Disk1/oracm.
On the "File Locations" screen, make sure that the source location points to the products.xml file in the 9.2.0.8 patch set location under
Disk1/stage. Also verify that the "Destination" listed on screen is the NEW_ORACLE_HOME (9.2.0.8) directory.
On the "Available Products" screen, select "Oracle9iR2 Cluster Manager 9.2.0.8.0".
On the "Public Node Information Screen", enter the public node names.
On the "Private Node Information Screen", enter the interconnect node names.
Click Install at the summary screen and complete the installation.
Note: For more details refer to Oracle Database 9.2.0.8 patch set release notes.
Verify the Oracle Cluster Manager configuration files for the hangcheck-timer
Verify the NEW_ORACLE_HOME/oracm/admin/cmcfg.ora file against the sample file below.
Sample cmcfg.ora file
HeartBeat=15000
KernelModuleName=hangcheck-timer
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PublicNodeNames=host4 host5
PrivateNodeNames=int-host4 int-host5
ServicePort=9998
CmDiskFile=<path to shared drive>cmDiskFile
HostName=<private hostname>
Note: If the cmcfg.ora file in your environment does not match the sample file above, add the missing parameters as per the sample.
For more information on these parameters refer to RAC on Linux Best Practices.
Start the ORACM (Oracle Cluster Manager) on all nodes in the cluster
Change to the NEW_ORACLE_HOME/oracm/bin directory, switch to the root user, and start the ORACM using the following
commands:

$ cd $ORACLE_HOME/oracm/bin
$ su root
$ ./ocmstart.sh
Verify that ORACM is running using the following command:

$ ps -ef | grep oracm

4.2 Install Oracle 9i (9.2.0.4) and upgrade database to 9.2.0.8


This section describes installing the 9.2.0.4 database software, upgrading the software to 9.2.0.8, and upgrading the Oracle E-Business Suite 11i
database to 9.2.0.8. On Windows, customers need to install the 9.2.0.1 database software instead of 9.2.0.4.

Note: The Oracle9i (9.2.0.4) installation needs to be done on the database nodes. In our example we have installed Oracle9i (9.2.0.4) on host4 and host5 as per
Figure 1-1 above.
Install 9.2.0.4 Database (Software only) -- For Unix Platforms only
Set ORACLE_HOME to NEW_ORACLE_HOME (9.2.0.4) used in cluster manager install, otherwise Oracle Universal Installer will not
detect that the cluster manager is running
Set ORACLE_BASE to a valid directory with privileges matching the user and group of the user that is installing the software.
Start runInstaller from ORACLE_HOME/bin, i.e. use Oracle Universal Installer 2.2.0.18.
After the Welcome screen, press the "Next" button. This should take you to the "Cluster Node Selection" screen.
Note: If you do not see the "Cluster Node Selection" screen, either ORACLE_HOME is not set or the cluster manager is not running. Unless
you see the "Cluster Node Selection" screen, do not continue, as Oracle Universal Installer will not install the RAC option.
The "Cluster Node Selection" screen should show all your public aliases. Make sure to select all nodes; by default only the local node is
selected.
Select products.jar from the 9204 Disk1/stage directory.
Choose "Oracle Database 9.2.0.4 Enterprise Edition".
On "Database Configuration Screen", check "Software Only".
Summary should include Real Applications Clusters.
Install the software.
Run root.sh when prompted.
Complete the installation.
Install 9.2.0.1 Database (Software only) -- For Windows Platforms only
Set ORACLE_HOME to NEW_ORACLE_HOME (9.2.0.1) used in cluster manager install, otherwise Oracle Universal Installer will not
detect that the cluster manager is running
Set ORACLE_BASE to a valid directory with privileges matching the user and group of the user that is installing the software.

Start runInstaller from ORACLE_HOME/bin - i.e. use Oracle Universal Installer 2.2.0.19
After the Welcome screen, press the "Next" button. This should take you to the "Cluster Node Selection" screen.
Note: If you do not see the "Cluster Node Selection" screen, either ORACLE_HOME is not set or the cluster manager is not running. Unless
you see the "Cluster Node Selection" screen, do not continue, as Oracle Universal Installer will not install the RAC option.
The "Cluster Node Selection" screen should show all your public aliases. Make sure to select all nodes; by default only the local node is
selected.
Select products.jar from the 9201 Disk1/stage directory.
Choose "Oracle Database 9.2.0.1 Enterprise Edition".
On "Database Configuration Screen", check "Custom Installation"
Select the "Oracle Real Application Cluster Component" from the custom list.
Complete the installation.
Upgrade the 9.2.0.4 software installation to Oracle9iR2 Patch Set 9.2.0.8 -- For Unix Platforms only
Download the Oracle Database 9.2.0.8 patchset 4547809 from Oracle Metalink.
Set ORACLE_HOME to NEW_ORACLE_HOME and LD_LIBRARY_PATH=$NEW_ORACLE_HOME/lib:$NEW_ORACLE_HOME/lib32
Start runInstaller from NEW_ORACLE_HOME/oui/bin.
On "Cluster Node Selection" screen, make sure that all RAC nodes are selected.
On "File Locations Screen", make sure that the source location is pointing to the products.xml file in the 9.2.0.8 patch set location under
Disk1/stage. Also verify that the "Destination" listed on screen is the NEW_ORACLE_HOME directory.
On "Available Products Screen", select "Oracle9iR2 Patch Set 9.2.0.8". Click "Next".
Click "Install" at the summary screen.
Run root.sh when prompted.
Complete the installation.
Upgrade the 9.2.0.1 software installation to Oracle9iR2 Patch Set 9.2.0.7 -- For Windows Platforms only

Note: Windows platform customers need to upgrade the 9.2.0.1 database software installed in the previous step.
Download the Oracle Database 9.2.0.7 patchset 4163445 from Oracle Metalink.
Set ORACLE_HOME to NEW_ORACLE_HOME and LD_LIBRARY_PATH=$NEW_ORACLE_HOME/lib:$NEW_ORACLE_HOME/lib32
Start runInstaller from NEW_ORACLE_HOME/oui/bin.
On "Cluster Node Selection" screen, make sure that all RAC nodes are selected.
On "File Locations Screen", make sure that the source location is pointing to the products.xml file in the 9.2.0.7 patch set location under
Disk1/stage. Also verify that the "Destination" listed on screen is the NEW_ORACLE_HOME directory.
On "Available Products Screen", select "Oracle9iR2 Patch Set 9.2.0.7". Click "Next".
Click "Install" at the summary screen.
Run root.sh when prompted.
Complete the installation.

Upgrade Database Instance to 9.2.0.8


Note: Windows customers should follow the same steps to upgrade the database instance to 9.2.0.7.
Log in as sysdba using SQL*Plus.
Start the database in migrate mode using the "startup migrate" option. Use the pfile option to start the database with the init<SID>.ora
file from the OLD_ORACLE_HOME.
Note: If the database is already running, shut it down and start it in migrate mode as above.
Run spool patch.log
Run @NEW_ORACLE_HOME/rdbms/admin/catpatch.sql
Run spool off
Review the patch.log file for errors, and rerun the catpatch.sql script after correcting any problems.
Shutdown the database.

Startup the database
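Taken together, the steps above amount to a SQL*Plus session along these lines. This is a sketch only; the pfile path and <SID> are environment-specific placeholders:

```sql
-- Run as sysdba against the instance being upgraded.
CONNECT / AS SYSDBA
STARTUP MIGRATE pfile=<OLD_ORACLE_HOME>/dbs/init<SID>.ora
SPOOL patch.log
@<NEW_ORACLE_HOME>/rdbms/admin/catpatch.sql
SPOOL OFF
-- Review patch.log for errors before continuing;
-- rerun catpatch.sql after correcting any problems.
SHUTDOWN IMMEDIATE
STARTUP
```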


Note: For other product-specific instructions related to the Oracle9iR2 Patch Set 9.2.0.8 upgrade, refer to the readme of patch 4547809. Apply
the required additional database patches as mentioned in Interoperability Notes - Oracle Applications Release 11i with Oracle Database 9i
Release 2.
Install the 9.2.0.6 Clusterware Patch into the new ORACLE_HOME (Windows customers only)
This patch needs to be installed manually into the new ORACLE_HOME on all nodes in the cluster. Follow the instructions of the readme until
you reach the OCFS section. The remainder of the patch after the Generic section does not have to be installed, as it was completed when
the cluster services were installed earlier.
Note: Previously, we installed the cluster services portion of patch 3973928 onto our RAC nodes. Now follow the instructions in the
ReadMe.html that comes with the 9206 Clusterware patch (3973928).

4.3 Enable AutoConfig on Database Tier for Oracle E-Business suite 11i
Copy the appsutil, appsoui and oui22 directories from the OLD_ORACLE_HOME to the NEW_ORACLE_HOME.
Set the environment variables ORACLE_HOME, LD_LIBRARY_PATH and TNS_ADMIN to point to the NEW_ORACLE_HOME. Set the ORACLE_SID
variable to the instance name running on this database node.
Shutdown the instance and the database listener.
Start the instance using the parameter file init<SID>.ora. Start the database listener.
Generate the instance-specific XML file using NEW_ORACLE_HOME/appsutil/bin:

adbldxml.sh tier=db appsuser=<APPSuser> appspasswd=<APPSpwd>

Execute the AutoConfig utility (adconfig.sh) on the database tier from NEW_ORACLE_HOME/appsutil/bin. Verify the log file located at:

NEW_ORACLE_HOME/appsutil/log/<context_name>/<MMDDhhmm>
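After any AutoConfig run it is worth scanning the log for failures before moving on. A minimal sketch; the error/fail markers it greps for are our assumption, not an official list, and the function name is our own:

```shell
#!/bin/sh
# Print lines that look like failures in an AutoConfig log.
# Usage: scan_adconfig_log <logfile>; returns 0 when none are found.
scan_adconfig_log() {
  grep -Ei 'error|fail' "$1" && return 1 || return 0
}

# Demo against a throwaway log file standing in for
# <ORACLE_HOME>/appsutil/log/<context_name>/<MMDDhhmm>
cat > /tmp/adconfig_demo.log <<'EOF'
Updating s_dbhost ... done
AutoConfig completed successfully.
EOF
scan_adconfig_log /tmp/adconfig_demo.log && echo clean
```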

4.4 Convert Database to RAC.


This conversion procedure uses AutoConfig and the ADX utilities extensively. Ensure that you have applied the Oracle Applications
patches mentioned in the pre-requisites section above.
Execute AutoConfig utility on the application tier. Verify the AutoConfig log file located at

$APPL_TOP/admin/<context_name>/log/<MMDDhhmm>.
Note: For more information on AutoConfig, see Using AutoConfig to Manage System Configurations with Oracle E-Business Suite
11i.
Execute $AD_TOP/bin/admkappsutil.pl to generate appsutil.zip for the database tier.
Transfer this appsutil.zip to the NEW_ORACLE_HOME on the database tier.
Unzip this file to create the appsutil directory in the NEW_ORACLE_HOME.
Execute AutoConfig on the database tier from NEW_ORACLE_HOME/appsutil/<context_name>/scripts by using adautocfg.sh.
Verify the AutoConfig log file located at:

NEW_ORACLE_HOME/appsutil/log/<context_name>/<MMDDhhmm>
Execute the following commands to gather all the information about the instance:

cd NEW_ORACLE_HOME/appsutil/scripts/<context_name>
perl adpreclone.pl database


Shutdown the instance.
Ensure that the listener process on the database tier is also stopped.
For Windows customers, also shut down the Cluster Manager service. The GSD service will be shut down along with the Cluster
Manager service. You will be prompted to start both services while running adcfgclone.pl in the next step.
Execute the following from the NEW_ORACLE_HOME/appsutil/clone/bin.

perl adcfgclone.pl database


This will prompt for the following:

Do you want to use a virtual hostname for the target node (y/n) [n]?: (for example n)
Target instance is a Real Application Cluster (RAC) instance (y/n) [n]: (for example y)
Current node is the first node in an N Node RAC Cluster (y/n) [n]: (for example y)
Number of instances in the RAC Cluster [1]: (for example 2)
Target System database name: (provide the service name here)
Enter the port pool number [0-99]: (for example 17)

NOTE: If you want to keep the same port numbers, use the same port pool used during Rapid Install. Refer to the port
numbers created during install.

NOTE: The next two parameters are prompted once for each instance in the cluster.

Host name: (for example host4)
Instance number [1]: (for example 1)

Target system RDBMS ORACLE_HOME directory: (for example /d1/apps/product/10.1.0/Db)
Target system utl_file accessible directories list: (for example /usr/tmp)
Number of DATA_TOP's on the target system [2]: (for example 1)
Target system DATA_TOP 1: (for example /d5/racdemodata/10.1.0)
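The port pool prompt follows the Rapid Install convention of adding the pool number to each default port, which is why a pool of 17 corresponds to database listener port 1538 in the example values used in this document. A quick sketch of that arithmetic, with the usual 11i default base ports as assumptions (confirm against your own configuration):

```shell
#!/bin/sh
# Sketch of the Rapid Install port-pool convention: each service port
# is its default base port plus the pool number (0-99). The base ports
# below are the common 11i defaults, assumed here for illustration.
pool=17
db_port=$((1521 + pool))     # Net8 database listener
forms_port=$((9000 + pool))  # Forms server
web_port=$((8000 + pool))    # Web listener
echo "db=$db_port forms=$forms_port web=$web_port"
```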
The above process will:
Create an instance-specific context file
Create an instance-specific environment file
Create a RAC-specific init.ora file
Recreate the control files
Create redo log threads for the other instances in the cluster
Create undo tablespaces for the other instances in the cluster
Execute AutoConfig on the database tier
Start the instance and database listener on the local host
Perform the following steps on all the other database nodes in the cluster.
Zip the appsutil directory from the NEW_ORACLE_HOME and create appsutil.zip.
Transfer appsutil.zip to NEW_ORACLE_HOME of the remaining Database nodes in the cluster.

Unzip appsutil.zip in NEW_ORACLE_HOME to create the appsutil directory.


Execute the following from the NEW_ORACLE_HOME/appsutil/clone/bin

perl adcfgclone.pl database


In addition to the questions mentioned above, the following will also be prompted on the subsequent nodes. Provide appropriate values.

Host name of the live RAC node []: (for example host4)
Domain name of the live RAC node []: (for example oracle.com)
Database SID of the live RAC node []: (for example instance1)
Listener port number of the live RAC node []: (for example 1538)
The above process will:
Create an instance-specific context file
Create an instance-specific environment file
Create a RAC-specific init.ora file for this instance
Execute AutoConfig on the database tier
Start the instance and database listener on the specified host
Verify the tnsnames.ora and listener.ora files located at $TNS_ADMIN. Ensure that the TNS aliases for load balancing, failover, and the
local and remote listeners have been created.
Add your environment-specific initialization parameters to the <context_name>_ifile.ora file under the $ORACLE_HOME/dbs directory
on all the database nodes.
Source the environment from the newly generated environment files and restart the instances.
Execute AutoConfig on all database nodes from $ORACLE_HOME/appsutil/<context_name>/scripts by using adautocfg.sh.
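For reference, a load-balancing alias in the generated tnsnames.ora typically has this shape. The hostnames, port, and service name below are placeholders standing in for the values AutoConfig generates for your system:

```
<database_name>_balance =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = TCP)(HOST = host4)(PORT = 1538))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host5)(PORT = 1538))
    )
    (CONNECT_DATA = (SERVICE_NAME = <database_name>))
  )
```

LOAD_BALANCE spreads new connections across the listed addresses, while FAILOVER lets a connection attempt fall through to the next address if one listener is down.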

4.5 Configure Applications Environment for RAC


Repeat the following steps on all the application nodes:
Source the applications environment.
Execute AutoConfig by using:

$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>

Note: For more information on AutoConfig execution, see Using AutoConfig to Manage System Configurations with Oracle
E-Business Suite 11i.
Verify the AutoConfig log located at $APPL_TOP/admin/<context_name>/log/<MMDDhhmm> for errors.
Source the environment by using the latest generated environment file.
Verify the tnsnames.ora and listener.ora files located in the 8.0.6 ORACLE_HOME at $ORACLE_HOME/network/admin and in
$IAS_ORACLE_HOME/network/admin. Ensure that the correct TNS aliases are generated for load balancing and failover.
Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the
environment and that load_balance is set to ON.

Load balancing the Applications database connections

Run the Context Editor through the Oracle Applications Manager interface to set the values of "Tools OH
TWO_TASK" (s_tools_two_task), "iAS OH TWO_TASK" (s_weboh_twotask) and "Apps JDBC Connect Alias"
(s_apps_jdbc_connect_alias):
To load balance the Forms-based applications database connections, set the value of "Tools OH TWO_TASK" to point to the
<database_name>_806_balance alias generated in the tnsnames.ora file.
To load balance the self-service applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC
Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file.
Execute AutoConfig by using:

$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>

Restart the applications processes by using the latest scripts generated after the AutoConfig execution.
Ensure that the value of the profile option "Application Database Id" is set to the name of the dbc file generated at
$FND_TOP/secure/<context_name>.

4.6 Configure Parallel Concurrent Processing


Setup PCP
Execute AutoConfig by using $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh on all
concurrent nodes.
Source the application environment by using $APPL_TOP/APPSORA.env.
Check the configuration files tnsnames.ora and listener.ora located under the 8.0.6 ORACLE_HOME at
$ORACLE_HOME/network/admin/<context>. Ensure that the FNDSM and FNDFS entries include information for all the other concurrent nodes.
Restart the application listener processes on each application node.
Log on to Oracle E-Business Suite 11i as SYSADMIN and choose the System Administrator responsibility. Navigate to
Install > Nodes and ensure that each node in the cluster is registered.
Verify that the Internal Monitor for each node is defined properly, with the correct primary and secondary node specification and
work shift details (e.g. Internal Monitor: Host2 must have primary node host2 and secondary node host3). Also make sure each
Internal Monitor manager is activated: navigate to Concurrent > Manager > Administer and activate the manager.
Set the $APPLCSF environment variable on all the CP nodes to point to a log directory on a shared file system.
Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database
nodes. This value should point to a directory on a shared file system.
Set the profile option 'Concurrent: PCP Instance Check' to OFF if database-instance-sensitive failover is not required. When it is set
to 'ON', a Concurrent Manager fails over to its secondary middle-tier node when the database instance it is connected to goes down.
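The shared-filesystem requirements above can be sanity-checked on each CP node. A minimal sketch; the check itself is our own addition, not an Oracle-supplied utility:

```shell
#!/bin/sh
# Verify that a PCP environment variable points at a writable
# directory, as both $APPLCSF and $APPLPTMP must on every
# concurrent node.
check_shared_dir() {
  # $1 = variable name (for the message), $2 = its value
  if [ -d "$2" ] && [ -w "$2" ]; then
    echo "$1 ok: $2"
  else
    echo "$1 BAD: $2 is not a writable directory"
  fi
}

check_shared_dir APPLCSF /tmp   # stand-in path; use your shared mount
```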
Setup Transaction Managers
Shutdown the application tier on all the nodes.
Shutdown all the database instances in the RAC environment cleanly, using:

SQL> shutdown immediate;

Edit $ORACLE_HOME/dbs/<context_name>_ifile.ora and add the following
parameters:
_lm_global_posts=TRUE
max_commit_propagation_delay=0

NOTE: For the Tru64 UNIX platform, set max_commit_propagation_delay=1.

Start the instances on all database nodes one by one.
Startup the application tier on all the nodes.
Log on to Oracle E-Business Suite 11i as SYSADMIN and choose the System Administrator responsibility.
Navigate to Profile > System and change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction
managers work across the RAC instances.
Restart the concurrent managers.
Load balancing of CP tiers

Carry out the following steps if you want to load balance the database connections from the concurrent processing tier:

Create the configuration file <context_name>_ifile.ora manually under the 8.0.6 ORACLE_HOME at
$ORACLE_HOME/network/admin/<context> on all concurrent nodes.
Create a load balancing alias similar to <service_name>_806_balance, as shown in the sample in
Appendix C.
Edit the applications context file through the Oracle Applications Manager interface and set the value of "Concurrent Manager
TWO_TASK" to the load balancing alias created in the previous step.
Execute AutoConfig by using $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh on all concurrent nodes.

Section 5: References
Installing Oracle Applications Release 11i documentation (Part No: B13583-01)
Interoperability Notes - Oracle Applications Release 11i with Oracle Database 9i Release 2
Oracle9i Installation Guide Release 2 (Part No: A96167-01)
Oracle9i Release Notes Release 2 (9.2.0.4.0) for Linux (Part No: B13670-01)
Using AutoConfig to Manage System Configurations with Oracle E-Business Suite 11i
Cloning Oracle Applications Release 11i with Rapid Clone
Oracle9i Real Application Clusters Concepts, Release 2 (9.2) (Part No: A96597-01)
Oracle Applications System Administrator's Guide, Release 11i (Part No: B13925-01)
RAC on Linux Best Practices
Concurrent Processing: Transaction Manager Setup and Configuration Requirement in an 11i RAC Environment

Appendix A: Setup Cluster File System (CFS) on WINDOWS


Cluster File System Pre-installation Steps
Note: Perform the pre-installation steps described in this section before installing CFS. Windows refers to raw partitions as logical drives. If you
need more information about creating partitions, refer to the Windows online help from within the disk administration tools.
Run Windows NT Disk Administrator or Windows 2000 Disk Management from one node to create an extended partition. Currently, CFS is
not supported on primary partitions. For Windows 2000, use a basic disk; dynamic disks are not supported.
Create at least two partitions: one for the Oracle home and one for the Oracle database files. Create the Oracle home on a local disk, as
placing it on a CFS disk is not supported at this time.
Note: You do not need to create a partition for the voting disk if you plan to use CFS; CFS stores the voting device for OSD clusterware as
a file on a CFS partition. The number of partitions used for CFS affects performance, so create the minimum number of
partitions needed for the CFS option you choose.
Before you begin, remove (disconnect) any Windows mapped drives that have been created and are not being used. Try to ensure that
there are no drive letter holes, i.e. if c:\, d:\ and f:\ exist, change f:\ to e:\ if possible.

Create partitions
From one of the nodes of the cluster, run the Windows disk Administration tool as follows: On Windows NT start Disk Administrator using
the path:Start>Programs>Administrative Tools>Disk Administrator. On Windows 2000 start Disk Management using the
path:Start>Programs>Administrative Tools>Computer Management.Expand the Storage folder to Disk Management. For Windows 2000
only, use a basic disk as an extended partition for creating partitions.
Click inside an unallocated part of an extended partition. For Windows NT choose Create Partition. For Windows 2000 choose Create
Logical Drive. A wizard presents pages for configuring the logical drive.
Note: Do not use Windows disk administration tools to assign drive letters to partitions in this procedure. ClusterSetup Wizard does this
when you create the cluster. For more details check Chapter 2 in Oracle9i Real Application ClustersSetup and Configuration, Release 2
(9.2), Part NumberA96600-02 and Appendix B in Oracle9i Database Installation Guide, Release 2 (9.2.0.1.0) for Windows, Part
NumberA95493-01
Enter the size that you want for the partition. In general, this should be 100 MB or more .Ensure that a drive letter is not assigned. Cluster
Setup Wizard will do this later.
Note: Windows NT automatically assigns a drive letter. Remove this drive letter by right-clicking on the new drive and selecting Do not
assign a drive letter for the Assign Drive Letter option. Do this for any Oracle partitions. For Windows 2000 choose the option 'Do not
assign a drive letter' and then choose the option 'Do not format this partition'. Click Finish on the last page of the wizard.
Choose Commit Changes Now from the Partition menu to save the new partition information. Alternatively, close the Disk Administrator
and reboot the machine.
Repeat the above steps for the second and any additional partitions. An optimal configuration is one partition for the Oracle home on a local
drive and one CFS partition for the Oracle database files.
Note: For an entire Oracle Applications Vision database, create a partition of at least 65 GB. The easiest method is usually to install
Oracle Applications onto one CFS partition and then, after the entire Oracle Applications setup is complete, move the datafiles to other
CFS partitions to take advantage of fast disks, RAID, and so on.

Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and that none of the Oracle partitions have
drive letters assigned. If any partitions have drive letters assigned, remove them as described in the earlier step.
Install Cluster File System
To prepare for this procedure, perform the tasks described in "Cluster File System Preinstallation Steps" in this document if you have not already
done so.
Download the 9.2.0.6 cluster patch 3973928.

1. Run clustersetup.exe from the preinstall_rac\clustersetup\ directory of the downloaded cluster patch.


Note: Do not run clustersetup.exe from the Oracle9i Database product CD.

2. The Welcome page for the Oracle Cluster Setup Wizard appears. Click Next.
Note: Installing remotely via Terminal Server to Windows NT or 2000 is not supported. However, you can perform a remote install via Terminal
Server to Windows 2003 by connecting to the console of the remote server from the client, starting the Terminal Server Client as
MSTSC /V:RemoteServer /console
Note: If you need further assistance in using the Terminal Server Client, please contact Microsoft Product Support.

3. Choose Create a cluster and click Next. The Network Selection page appears.
4. Choose Use private network for interconnect and click Next. The Private Network Configuration page appears.
Note: If the nodes have a high-speed private network connecting them, it should be used as the cluster interconnect; otherwise, the public
network can be selected. If you choose Use public network for interconnect, the Public Network Configuration page appears instead.

5. Enter the name for the cluster you are creating, and enter the names of the nodes. If a private network interconnect was selected in
the previous step, enter the public and private names for the nodes; otherwise, enter the public names. Click Next. The Cluster File System
Options page appears.

6. Choose the option CFS for Datafiles. Click Next. The CFS for Data files page appears.

7. Choose a partition of the required size from the list of available partitions, and then choose a drive letter from the Drive Letter drop-down list.
For the CFS option that you chose in the previous step, the partition and drive letter combination will be assigned as the CFS drive letter for
all of the volumes in the cluster.
Note: Use the longest common prefix of the node names for the cluster name. For example, if the nodes are deptclust1, deptclust2, and deptclust3,
then the cluster name will be deptclust. The cluster name and each node name must be globally unique on your network. Do not change
node names once they have been assigned and used in a cluster database.

8. Repeat the previous step for each CFS volume and click Next.
9. Click Next. The wizard checks your cluster interconnect to see if Virtual Interface Architecture (VIA) hardware is detected. If VIA is not
detected, the VIA Detection page appears telling you that VIA was not detected and TCP will be used for the clusterware interconnect.
Click Next and skip to step 12. If VIA is detected, the VIA Selection page appears. Continue with step 10.

10. Choose Yes to use VIA for the interconnect and click Next. The VIA Configuration page appears. If you choose No, then TCP will be used.
11. Enter the name of the VIA connection and click Next.
12. The Install Location page is the last page that appears. The default location is %windir%\system32\osd9i. Click Browse to navigate to a
different location if needed.

13. Click Finish. A progress page displays the actions being performed.
14. When complete, reboot both nodes. Log on and make sure the new CFS partition can be seen from both nodes and has the same drive
letter assigned to it on each node.

Appendix B: Install Cluster Manager on Windows


Pre-installation tasks for installing Cluster Manager on Windows platform
Ensure that the external/public hostnames are defined in your Domain Name Service (DNS) and that the correct IP addresses
resolve for all nodes in the cluster.

Ensure that all external/public and internal/private hostnames are defined in the HOSTS file on all nodes of the cluster. This file is located
in the WINDOWS_HOME\System32\drivers\etc directory.
Ensure that the TEMP and TMP folders are the same across all nodes in the cluster. By default, these settings are defined as
%USERPROFILE%\Local Settings\Temp and %USERPROFILE%\Local Settings\Tmp in the Environment Settings of My Computer. It is
recommended to explicitly redefine these as WIN_DRIVE:\temp and WIN_DRIVE:\tmp; for example, C:\temp and C:\tmp on all nodes.
Ensure that each node has administrative access to all these directories within the Windows environment by running the following at the
command prompt:
NET USE \\host_name\C$
where host_name is the public network name of the other nodes. If you plan to install the ORACLE_HOME onto a drive other than
C, check access to that drive as well. For example, from the command prompt on node 1 of a four-node cluster:
NET USE \\node2\C$
NET USE \\node3\C$
Repeat these commands on all nodes within the cluster.
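Running these checks by hand from every node gets repetitive. As a minimal sketch, the following prints the NET USE command to issue for each remote node; the node names are placeholders for your own public names, and you would drop the local node from the list before using the output:

```shell
#!/bin/sh
# Print the NET USE access checks to run from one node of the cluster.
# The node names below are placeholders; substitute your cluster's
# public node names and omit the node you are running from.
for node in node2 node3 node4; do
    echo "NET USE \\\\${node}\\C\$"
done
```

Paste the printed commands into a command prompt on the node being checked; each should complete without an access error.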
Run the clustercheck.exe program located in the staged directory of the unzipped patch 3973928. This tool prompts for the public and
private hostnames and has you verify the IP address resolution. If that passes, it performs a check of the health of the shared disk
array and of the other environment variables and permissions necessary for proper cluster installation and operation. It creates a subdirectory
called opsm in the temporary directory specified by your environment settings (WIN_DRIVE:\Temp if you have changed it as
recommended) and a log file called OraInfoCoord.log. This log will contain any errors encountered during the check. You should see the following
at the bottom of the log file, and within the command prompt window, when you run the clustercheck.exe program:
ORACLE CLUSTER CHECK WAS SUCCESSFUL

Note: You must correct any errors that occur before proceeding. Please contact your cluster hardware vendor if you need assistance. If
you have any issues with clustercheck, please see Note 186130.1, Clustercheck.exe Fails with Windows Error 183.
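Rather than opening the log by hand after each run, you can search it for the success marker. A minimal sketch, assuming the log path is whatever your TEMP setting produced (for example C:\Temp\opsm\OraInfoCoord.log):

```shell
#!/bin/sh
# check_clustercheck: search an OraInfoCoord.log for the success marker.
# Prints a confirmation and returns 0 if clustercheck reported success,
# otherwise returns 1 so the failure can be caught in a script.
check_clustercheck() {
    log="$1"
    if grep -q "ORACLE CLUSTER CHECK WAS SUCCESSFUL" "$log"; then
        echo "clustercheck passed"
    else
        echo "clustercheck FAILED: inspect $log" >&2
        return 1
    fi
}
```

For example: check_clustercheck /c/Temp/opsm/OraInfoCoord.log (the path shown is illustrative; use the opsm directory your environment settings actually produced).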
Note: If at any time during the installation of the software you do not see all nodes in the cluster within the Cluster Node Selection screen,
something is wrong with your cluster configuration, and you will have to go back and troubleshoot your cluster install. You can perform
clusterware diagnostics by executing the ORACLE_HOME\bin\lsnodes -v command and analyzing its output. Use Metalink to search for
any errors. Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not properly installed. Resolve
the problem, and then rerun the checks.
Run Oracle Cluster Setup Wizard
Note: For three or more nodes: since the OUI is not used, you can run this on node 1 only, and the software will be correctly transferred to the other
nodes in the cluster.

1. Download patch 3973928, the Windows CFS and Clusterware Patch for 9.2.0.6.
2. Expand the patch into a staged directory, such as E:\installs\osd9206. This creates another subdirectory, such as
E:\installs\osd9206\3973928. This clusterware patch contains a full clustersetup release.

3. Within a command prompt window, navigate to the E:\installs\osd9206\3973928\preinstall_rac\clustersetup directory in the OCFS staged
directory.

4. Launch the Oracle Cluster Setup Wizard by typing clustersetup at the command line.
5. The Cluster Wizard program should launch with a Welcome page. Click Next.
6. The first time the wizard is run, the only option will be to Create a cluster. Click Next.
7. Choose "Use private network for interconnect" and click Next.
8. The Network Configuration page appears. Enter the cluster name, then enter the public hostnames for all nodes. The private hostnames
will be filled in automatically from the public names. Accept the defaults or change them as appropriate for your cluster configuration. Click Next.

9. The Cluster File System Options page appears. Choose CFS for Datafiles only. Click Next.
10. The CFS for Datafiles page appears. Choose a drive letter, and then choose one of the partitions you prepared earlier with a minimum
size of 4 GB. Click Next.

11. The VIA Detection screen appears, stating whether Virtual Interface Architecture (VIA) hardware was detected. Choose Yes or No
depending on your configuration. Please contact your cluster hardware vendor if you are unsure. Click Next.

12. The Install Location screen appears. It defaults to the WIN_HOME\system32\osd9i directory. Accept the default and click Finish.
13. The Cluster Setup window appears, showing the progress of installing the cluster files, creating the cluster services on all nodes,
and formatting the OCFS drives. If no errors occur, the Oracle Cluster Setup Wizard completes and closes automatically.

14. Check the clusterware setup. You should have an OCFS drive visible from both nodes. Also, the following three services should be running on
each of the nodes in the cluster:
OracleClusterVolumeService
Oracle Object Service
OracleCMService9i
Note: If clustersetup does not run properly, check for errors in the log files under WIN_HOME\system32\osd9i.

Appendix C
Sample <context_name>_ifile.ora for CP Tiers

CP_BALANCE=
  (DESCRIPTION_LIST=
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=tcp)(HOST=<host2>)(PORT=<db_port>))
      (CONNECT_DATA=
        (SERVICE_NAME=<Database name>)
        (INSTANCE_NAME=<SID>)
      )
    )
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=tcp)(HOST=<host3>)(PORT=<db_port>))
      (CONNECT_DATA=
        (SERVICE_NAME=<Database name>)
        (INSTANCE_NAME=<SID>)
      )
    )
  )
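As a concrete illustration only (the hostnames, port, service name, and instance names below are hypothetical, not values from this note), a two-instance CP_BALANCE entry for a database named PROD might look like:

CP_BALANCE=
  (DESCRIPTION_LIST=
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=tcp)(HOST=racnode2.example.com)(PORT=1521))
      (CONNECT_DATA=
        (SERVICE_NAME=PROD)
        (INSTANCE_NAME=PROD2)
      )
    )
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=tcp)(HOST=racnode3.example.com)(PORT=1521))
      (CONNECT_DATA=
        (SERVICE_NAME=PROD)
        (INSTANCE_NAME=PROD3)
      )
    )
  )

Because the ifile is included by the AutoConfig-generated tnsnames.ora on the CP tier, the concurrent managers can then connect using the CP_BALANCE alias.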

Change Log
Date          Description
13 Oct 2004   First Posted.
22 Dec 2004   Added PCP Configuration Section.
23 Mar 2005   Added PCP Related Patch Section. Changes done for 11.5.10 Release. Changed the pre-requisite patches. Changed the name of web_oh_two_task to IAS_OH_TWO_TASK.
28 Apr 2005   APPLFSTT values should be semi-colon separated instead of comma separated. Added statement for Windows in the overview section.
06 Jul 2005   Added PCP as mandatory requirement in the overview section. Changed for 11.5.10 plus CU1 with 9.2.0.6 RAC.
15 Jul 2005   Changed format. Added patch 4462244.
23 Aug 2005   Corrected links in reference section. Added patch 4502904.
13 Sep 2005   Changed section 4.5 for load_balancing options. Changed the 9.2.0.6 cluster manager installation section; removed the manual copy steps from this section.
11 Nov 2005   Moved OUI 10.1.0.3 installation section ahead of 9.2.0.6 cluster manager install section. Added one step for adding any environment-specific initialization parameters into the ifile.
19 Dec 2005   Added Windows-specific sections and Appendix A and Appendix B.
23 Mar 2006   Changed for 11i.ATG_PF.H RUP3 4334965 and 11i ADX F 3453499. Section 4.6 changed for PCP & Transaction Manager setup. Added Oracle Database patch 4059639. Added Appendix C for PCP & Transaction Manager setup on Windows.
31 Aug 2006   Changed for Database Patchset 9.2.0.7.
20 Nov 2006   Changed for Database Patchset 9.2.0.8 (for Unix customers only).
06 Feb 2007   Removed Windows-specific PCP section.
Note <279956.1> by Oracle Applications Development


Copyright 2007 Oracle Corporation
last updated: Tuesday 06 Feb 2007