Using Oracle 11g Release 1 (11.1.0.7) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12 [ID 466649.1] Modified 31-MAY-2010 Type WHITE PAPER Status PUBLISHED
Using Oracle 11g Release 1 (11.1.0.7) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
Last Updated: May 31, 2010 Oracle Applications Release 12 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle Applications Release 12 running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 1 (11.1.0.7) with Automatic Storage Management (ASM).
Note: This document applies to UNIX, Linux, and Windows platforms. The example commands typically use UNIX/Linux syntax; Windows users should use the equivalent syntax for their platform.
The most current version of this document can be obtained in My Oracle Support (formerly OracleMetaLink) Knowledge Document 466649.1. There is a change log at the end of this document.

A number of conventions are used in describing the Oracle Applications architecture. These include the following:

Convention          Meaning
Application tier    Machines (nodes) running Forms, Web, and other services (servers). Sometimes called the middle tier.
Database tier       Machines (nodes) running the Oracle Applications database.
oracle              User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME        The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE        Full path to the Applications context file on the application tier or database tier. The default locations are: application tier context file, <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml; database tier context file, <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
APPSpwd             Oracle Applications database user password.
Monospace text      Represents command line text. Type such a command exactly as shown.
< >                 Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=WHITE...
11/10/2010
On UNIX or Linux, the backslash character can be entered at the end of a screen command line to indicate continuation of the command on the next screen line.
This document is divided into the following major sections:

Section 1: Overview
Section 2: Environment
Section 3: Configuration Steps
Section 4: References
Appendix A: Oracle Net Files
Appendix B: Example rconfig file
Appendix C: Windows-Specific Tasks for Installing Cluster Manager
Section 1: Overview
You should be familiar with Oracle Database 11g, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC). When planning to set up Oracle Real Application Clusters and shared devices, refer to the Oracle Real Application Clusters Setup and Configuration Guide 11g Release 1 (11.1) as required.
2. Set up the required cluster hardware and interconnect medium.

Before proceeding further, check that you meet the following prerequisites, and apply the relevant patches if not:

For Oracle E-Business Suite 12.0.x, you must be on Oracle E-Business Suite 12.0.2 Release Update Pack (RUP2, patch 5484000), or a higher RUP such as Oracle E-Business Suite Release 12.0.4 Release Update Pack (RUP4, patch 6435000). You must also ensure you have applied the latest AutoConfig patches, following the relevant instructions in My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12. In particular, refer to Section 6 of that document.
Note: Patch 6636108 will need to be applied on the application tier. This patch delivers the adbldxml utility that is used to generate the context file on the database tier.
For Oracle E-Business Suite Release 12.1, you should apply the Oracle E-Business Suite Release 12.1.1 Maintenance Pack (patch 7303030, also delivered by Release 12.1.1 Rapid Install).
Section 2: Environment
2.1 Software and Hardware Configuration
The following hardware and software components were used for this example installation.

Component                        Version
Oracle Database                  11.1.0.7
Oracle Cluster Ready Services    11.1.0.7
Operating System                 OEL 4.0
Storage Device                   NetApp 880 filer with Data ONTAP 6.1.2R3
You can obtain the latest 11.1.0.7 database software from: http://www.oracle.com/technology/software/products/database/index.html
Database 11g ASM ORACLE_HOME    ORACLE_HOME used for creation of ASM instances
OracleAS 10.1.2 ORACLE_HOME     ORACLE_HOME installed on the application tier for Forms and Reports
OracleAS 10.1.3 ORACLE_HOME     ORACLE_HOME installed on the application tier for the HTTP server
The configuration steps you must perform are divided into a number of categories:

3.1 Install Oracle Clusterware 11g
3.2 Install Oracle Database Software 11.1.0.7 and Upgrade Oracle Applications Database to 11.1.0.7
3.3 Configure TNS Listener
3.4 Create ASM Instances/Diskgroups (Optional)
3.5 Convert 11g Database to Oracle RAC using rconfig
3.6 Perform Post-Conversion Steps
3.7 Enable AutoConfig on Database Tier
3.8 Establish Applications Environment for Oracle RAC
3.9 Configure Parallel Concurrent Processing
Note: Windows users should also refer to Appendix C for Windows-specific Oracle Clusterware preinstallation tasks.
address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or virtual IP address. If a node fails, its virtual IP address fails over to another node. A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.
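To make the address requirements above concrete, a hypothetical /etc/hosts layout for a two-node cluster might look like the following. All names and addresses are illustrative, not values from this document; only the int-host2/int-host3 private names are reused from the examples later in this paper.

```
# Public interfaces (one per node)
192.0.2.2    host2
192.0.2.3    host3

# Virtual IPs - must be in the same subnet as the public interfaces
192.0.2.12   host2-vip
192.0.2.13   host3-vip

# Private interconnect - non-routable private addresses
10.0.0.2     int-host2
10.0.0.3     int-host3
```
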
Note: Sections 3.1.2 to 3.1.6 are for UNIX customers only. Windows customers should skip these and go to Section 3.1.7, also referring to Appendix C as noted previously.
3.1.2 Verify Kernel Parameters

Verify the operating system kernel parameter settings required for Oracle Clusterware installation. Refer to the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1).

3.1.3 Check rsh and Host Equivalence

Verify that you have the rsh (remote shell) package installed on all your hosts (on Linux, for example, by querying the package manager with rpm -q rsh).
Test host equivalence by using the rcp command to copy some dummy files between host2 and host3, as follows: On host2:
# touch /u01/test
# rcp /u01/test host3:/u01/test1
# rcp /u01/test int-host3:/u01/test2
On host3:
# touch /u01/test
# rcp /u01/test host2:/u01/test1
# rcp /u01/test int-host2:/u01/test2
# ls /u01/test*
-- Returns /u01/test /u01/test1 /u01/test2

3.1.4 Set up Shared Storage

If your platform supports a cluster file system, set up the cluster file system on shared storage. If your platform does not support a cluster file system, or you want to use raw devices for database files for performance reasons, you will need to install the vendor-specific logical volume manager. Also see storage vendor-specific documentation for details of setting up the shared disk subsystem, and how to mirror and stripe these disks.
Refer to the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) for more information about database storage.

3.1.5 Check Account Setup

Configure the oracle account's environment for Oracle Clusterware and Oracle Database 11g, as per the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1).

3.1.6 Configure Secure Shell

Configure Secure Shell (SSH) on all cluster nodes as follows:

1. Log in as the oracle user.

2. Run the following commands to create the .ssh directory in the oracle user's home directory with suitable permissions:
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
3. Generate an RSA key for version 2 of the SSH protocol using the following command:
$ /usr/bin/ssh-keygen -t rsa
Accept the default location for the key file. Enter and confirm a passphrase that is different from the oracle user's password. This command writes the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.

4. Enter the following command to generate a DSA key for version 2 of the SSH protocol:
$ /usr/bin/ssh-keygen -t dsa
Accept the default location for the key file at the prompt.

5. Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the ~/.ssh/authorized_keys file on this node, and to the same file on all other cluster nodes.
Note: The ~/.ssh/authorized_keys file on every node must contain the contents from all the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
6. Run the following command to change the permissions on the ~/.ssh/authorized_keys file on all cluster nodes:
$ chmod 644 ~/.ssh/authorized_keys
7. To enable the Installer to use the ssh and scp commands without being prompted for a passphrase, follow these steps:

1. On the system where you want to install the software, log in as the oracle user and run the commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
2. At the prompts, enter the passphrase for each key that you generated.

3.1.7 Using Cluster Verification Utility (CVU)

1. Install the cvuqdisk package for Linux:

a. Locate the cvuqdisk RPM package, which is in the clusterware/rpm directory on the installation medium.

b. Copy the cvuqdisk package to each node in the cluster (each node must be running the same version of Linux).

c. Log in as root.

d. Use the following command to see if you have an existing version of the cvuqdisk package:
# rpm -qi cvuqdisk
e. If you have an existing version, enter the following command to de-install the existing version:
# rpm -e cvuqdisk
2. Log in as the oracle user, and run the following command to determine which pre-installation steps have been completed, and which steps remain to be performed:
$ <11g Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>
Note: Windows users should use runcluvfy.bat (equivalent to runcluvfy.sh) for CVU verification.
Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

3. Use the following command to check the networking setup with CVU:
$ <11g Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]
Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

4. Use the following command to check the operating system requirements with CVU:
$ <11g Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
-osdba <osdba_group> -orainv <orainv_group> -verbose
Substitute <node_list> with the names of the nodes in your cluster, separated by commas.
3.1.8 Install Oracle Clusterware 11g

1. Use the same oraInventory location that was created during the installation of Oracle Applications Release 12.
Note: You should take a backup of the oraInventory directory before starting this step.
3. In the File Locations window, enter the name and path of the CRS ORACLE_HOME, and click Next.

4. Select the language option for your installation from the available list and click Next.

5. In the Cluster Configuration window, enter the name of the cluster configuration. For the public nodes, enter the public aliases specified in /etc/hosts, for example host2 and host3. Enter the corresponding private node names for the public nodes, for example host2-vlan2 and host3-vlan2. Enter the corresponding virtual host names for the public host names, and click Next.

6. Assign each network interface its interface type: for example, assign network interface eth1 the interface type "private", with its corresponding subnet mask. Refer to the Cluster Verification Utility output generated earlier for more details on network interface usage. Click Next.

7. Enter the location for the Oracle Cluster Registry (OCR) and click Next.
Note: The OCR must be located on a shared file system that is accessible by all nodes. If you want to use OCR mirroring, select "Normal Redundancy". In such a case, you will have to specify two locations for the OCR.
Note: The voting disk must be on a shared file system that is accessible by all nodes. If you want to use mirroring of the voting disk, select "Normal Redundancy". In such a case, you will have to specify three locations for the voting disks.
9. Verify the installation Summary window and click Install.

10. At the end of the installation, the installer will prompt you to execute root.sh on both nodes. Log in as root and execute root.sh from the CRS ORACLE_HOME specified. This will also start the CRS services on both cluster nodes.

11. Execute <CRS ORACLE_HOME>/bin/olsnodes. If this returns all the cluster node names, your CRS installation was successful.
3.1.9 Upgrade Oracle Clusterware to 11.1.0.7

1. Download the Oracle Database 11.1.0.7 patch set (patch 6890831).

2. Create a staging area for this patch and unzip the patch there.

3. At the command prompt, set the ORACLE_HOME variable to the 11.1.0.6 CRS ORACLE_HOME installed earlier in Step 1 (CRS 11.1.0.6 install).

4. Shut down all the CRS services on all the nodes in the cluster.

5. Execute runInstaller from the 11.1.0.7 staging area and upgrade the CRS software to 11.1.0.7.

6. At the end of the installation, the installer will prompt you to execute <CRS ORACLE_HOME>/install/root111.sh on both nodes. Log in as root and execute root111.sh from the CRS ORACLE_HOME specified. This will also start the CRS services on both cluster nodes.

7. After installation, check that your Oracle Clusterware installation is installed and running correctly by logging in as root and using one of the following commands:

crs_stat command
# <CRS_HOME>/bin/crs_stat -t -v
Name            Type         Target   State    Host
------------------------------------------------------
ora....dbs.gsd  application  ONLINE   ONLINE   ap614dbs
ora....dbs.ons  application  ONLINE   ONLINE   ap614dbs
ora....dbs.vip  application  ONLINE   ONLINE   ap614dbs
ora....dbs.gsd  application  ONLINE   ONLINE   ap615dbs
ora....dbs.ons  application  ONLINE   ONLINE   ap615dbs
ora....dbs.vip  application  ONLINE   ONLINE   ap615dbs
crsctl command
# <CRS_HOME>/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
Ensure that the output of the command used is similar to that shown in the corresponding example above.
3.2 Install Oracle Database Software 11.1.0.7 and Upgrade Oracle Applications Database to 11.1.0.7
Note: You should make a full backup of the oraInventory directory before starting this stage.

To install the Oracle Database 11g (11.1.0.7) software and upgrade the Oracle Applications database to 11.1.0.7, refer to the My Oracle Support knowledge document that applies to your release of Oracle Applications:

For Oracle Applications Release 12.0.x
Knowledge Document 735276.1, Interoperability Notes E-Business Suite Release 12 with Oracle Database 11g R1 (11.1.0).

For Oracle Applications Release 12.1.1

Knowledge Document 802875.1, Interoperability Notes Oracle E-Business Suite Release 12.1 with Oracle Database 11gR1 (11.1.0).

Follow all the steps listed in the relevant document, except the following:

Start the new database listener (conditional)
Implement and run AutoConfig
Restart Applications server processes

Note: When performing the database installation using the Oracle Universal Installer (runInstaller), select all the nodes shown in the Cluster Nodes window to be included in your Oracle RAC cluster.
3.4.2 Create ASM Instances/Diskgroups

Note: While you can create the ASM instances using the same Oracle home location used for the database, Oracle recommends creating a separate Oracle home for ASM instances.

There are two alternative methods you can use for creating ASM instances and diskgroups:

Option 1: Create ASM instances and diskgroups using dbca
Option 2: Create ASM instances and diskgroups manually (without using dbca)

Note: Oracle recommends using dbca to create ASM instances (option 1).
Note: On Windows, the disks must be created with extended partitions and logical drives. Do not give drive letters to any logical drives that are to be used with CRS or ASM, or format them with NTFS. After creating the drives with Disk Manager on one node, immediately go to the other node and use Disk Manager to create the same partitions and drives. After rebooting, check that neither node has automatically associated drive letters with these drives; if so, use Disk Manager to go to the drive properties and remove the drive letters.

9. The disk group has now been created and mounted on all the instances. Click Finish.

10. As the owner of the CRS_ORACLE_HOME, issue the crs_stat command to verify that the ASM instances are registered with CRS. The instances will be named using the format specified with dbca during creation (for example, ora.myhost.+ASM1.asm).
11. Using NetCA, create remote listener TNS aliases with the name LISTENERS_<ASM Servicename> on both nodes.

12. Update the ASM init parameter file with the remote listener value on both nodes.
$ sqlplus / as sysdba
SQL> alter system set remote_listener='LISTENERS_<SERVICENAME>' sid='*' scope=both;
13. Restart the ASM instances on all nodes.

Note: Verify that all the ASM instances are registered with all the nodes. Refer to the Oracle Database Administrator's Guide 11g Release 1 (11.1) for further details.
3.4.2.2 Creating ASM instances and diskgroups manually (without using dbca)
1. Using NetCA, create local and remote listener TNS aliases for the ASM instances. Use listener_<ASM_SID> as the alias name for the local listener, and listeners_<ASM servicename> for the remote listener alias. Ensure that these aliases are created on all nodes in the cluster.

2. Create ASM instances on all nodes in the cluster. Refer to the Oracle Database Administrator's Guide 11g Release 1 (11.1) for information on creating the ASM instances. For ASM best practices, refer to Automatic Storage Management Technical Best Practices.

3. Start up all the ASM instances in the cluster.

4. Create the disk groups and mount them on all the ASM instances. Refer to the Oracle Database Administrator's Guide 11g Release 1 (11.1).

5. Add the disk group entry to the "asm_diskgroups" parameter in the init*.ora files of all the ASM instances.

6. Add the ASM instances to CRS using the command:
$ srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
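As a sketch of the listener aliases described in step 1 above, the tnsnames.ora entries might look like the following. The +ASM1 instance name, host names, and port are hypothetical examples, not values prescribed by this document:

```
LISTENER_+ASM1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = host2-vip)(PORT = 1521))

LISTENERS_ASM =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host2-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = host3-vip)(PORT = 1521))
  )
```

The local alias points at the node's own listener; the remote alias lists the listeners on all nodes, so each ASM instance can cross-register with every listener in the cluster.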
Note: Specify the 'SourceDBHome' variable in ConvertToRAC.xml as the non-RAC Oracle home ($OLD_ORACLE_HOME path). If you wish to specify a NEW_ORACLE_HOME, start the database from the new Oracle home using the command:
SQL> startup pfile=<OLD_ORACLE_HOME>/dbs/init<ORACLE_SID>.ora
4. Shut down the database.

5. Create an spfile from the pfile using the command:
SQL>create spfile from pfile;
6. Move the $ORACLE_HOME/dbs/spfile<ORACLE_SID>.ora for this instance to the shared location.

7. Take a backup of the existing $ORACLE_HOME/dbs/init<ORACLE_SID>.ora, and create a new $ORACLE_HOME/dbs/init<ORACLE_SID>.ora with the following parameter:
spfile=<Path of spfile on shared disk>/spfile<ORACLE_SID>.ora
8. Start up the instance.

9. Using NetCA, create local and remote listener TNS aliases for the database instances. Use listener_<instance_name> as the alias name for the local listener, and listeners_<servicename> for the remote listener alias.

   1. Execute netca from $ORACLE_HOME/bin.
   2. Choose the "Cluster Configuration" option in the NetCA wizard.
   3. Choose the current node name from the nodes list.
   4. Choose the "Local Net Service Name Configuration" option and click Next.
   5. Select "Add", enter the service name in the next screen, and click Next.
   6. Enter the current node as the server name, and the port defined in Step 3.3.
   7. Select "Do not perform Test" and click Next.
   8. Enter the listener TNS alias name, such as LISTENER_<ins1> for the local listener.
   9. Repeat the above steps for the remote listener, with the server name in step 6 as the secondary node and the listener name LISTENERS_<service_name>.

   Note: Ensure that local and remote aliases are created on all nodes in the cluster.

10. Navigate to $ORACLE_HOME/bin, and use the following syntax to run the rconfig command:
$ ./rconfig
11. This rconfig run will:

   1. Migrate the database to ASM storage (only if ASM is specified as the storage option in the configuration XML file).
   2. Create database instances on all nodes in the cluster.
   3. Configure listener and Net Service entries.
   4. Configure and register CRS resources.
   5. Start the instances on all nodes in the cluster.
1. Shut down the instances on all database nodes.

2. On any one node, set cluster_database=false in the $ORACLE_HOME/dbs/init<SID>.ora file.

3. Set the following environment variables:
ORACLE_HOME=<11g NEW_ORACLE_HOME>
ORACLE_SID=<instance name for current database node>
PATH=$PATH:$ORACLE_HOME/bin
4. Start up the instance using the 'mount' option. 5. Disable archive logging using the following SQL command:
$ sqlplus / as sysdba
SQL> alter database noarchivelog;
6. Shut down the database normally.

7. Set cluster_database=true in the $ORACLE_HOME/dbs/init<SID>.ora file.

8. Start up all the instances.

9. Check the archive log setting using the following SQL command:
$ sqlplus / as sysdba
SQL> archive log list
This should show the value of 'Database log mode' as 'No Archive Mode'.

3.6.2 Shut Down the Listeners

Use the following command to shut down the listeners with the name LISTENER_<nodename>, which were created in Step 3.3:
$ srvctl stop listener -n <nodename>
3. Copy the appsutil.zip file to the 11g NEW_ORACLE_HOME on the database tier, for example using ftp.

4. Unzip the appsutil.zip file to create the appsutil directory in the 11g NEW_ORACLE_HOME.

5. Copy the jre directory from <OLD_ORACLE_HOME>/appsutil to <11g NEW_ORACLE_HOME>/appsutil.
6. Create a <CONTEXT_NAME> directory under $ORACLE_HOME/network/admin. Use the new instance name when creating the context directory, appending the instance number to the instance prefix you are going to put in the rconfig XML file. For example, if your database name is VISRAC and you want to use "vis" as the instance prefix, create the context directory as vis1_<hostname>.

7. Set the following environment variables:
ORACLE_HOME=<11g ORACLE_HOME>
LD_LIBRARY_PATH=<11g ORACLE_HOME>/lib:<11g ORACLE_HOME>/ctx/lib
ORACLE_SID=<instance name for current database node>
PATH=$PATH:$ORACLE_HOME/bin
TNS_ADMIN=$ORACLE_HOME/network/admin/<context directory>
8. De-register the current configuration using the Apps schema package FND_CONC_CLONE.SETUP_CLEAN.
SQL>exec fnd_conc_clone.setup_clean;
9. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to $TNS_ADMIN/tnsnames.ora, and edit it, changing the aliases for SID=<new Oracle RAC instance name>.

10. To preserve the ASM TNS aliases (LISTENERS_<service> and LISTENER_<asminstance>), create a file <context_name>_ifile.ora under $TNS_ADMIN, and copy those entries to that file.

Note: Windows users must add a TNS alias for the LISTENER_<instance_name> entry in the %TNS_ADMIN%\<context_name>_ifile.ora file.

11. Create the listener.ora as per the sample file in Appendix A. Change the instance name and Oracle home to match this environment.

12. Start the listener.

Note: Windows users must first unset the TNS_ADMIN environment variable. Also on Windows, the listener must be started from the default location (%ORACLE_HOME%\network\admin) for the first AutoConfig run only.

13. From the 11g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
$ adbldxml.pl appsuser= <APPSuser > appspasswd= <APPSpwd >
14. Set the value of s_virtual_host_name to point to the virtual hostname (VIP alias) for the database host, by editing the database context file $ORACLE_HOME/appsutil/<sid>_hostname.xml.

15. Rename $ORACLE_HOME/dbs/init<Oracle RAC instance>.ora to a new name (for example, init<racinstance>.ora.old), to allow AutoConfig to regenerate the file using the Oracle RAC-specific parameters.

16. Ensure that the following context variable parameters are correctly specified:
s_jdktop=<11g ORACLE_HOME_PATH>/appsutil/jre
s_jretop=<11g ORACLE_HOME_PATH>/appsutil/jre
s_adjvaprg=<11g ORACLE_HOME_PATH>/appsutil/jre/bin/java
17. From the 11g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.

18. Check the AutoConfig log file, located in 11g ORACLE_HOME/appsutil/log/<CONTEXT_NAME>/<MMDDhhmm>.

19. If ASM/OCFS is being used, note down the new location of the control file:
$ sqlplus / as sysdba
SQL> show parameters control_files
20. Perform the above steps [1-19] on all other database nodes in the cluster.

21. Execute AutoConfig on all database nodes in the cluster by running the command:
$ $ORACLE_HOME/appsutil/scripts/adautocfg.sh
22. Shut down the instances and listeners.

23. Edit the $ORACLE_HOME/dbs/<SID>_APPS_BASE.ora file on all nodes. If ASM is being used, change the following parameter:
control_files = <new location noted in step 19 above>
24. Create an spfile from the pfile on all nodes as follows:

1. Create an spfile from the pfile, and then create a pfile in a temporary location from the new spfile, with commands as shown in the following example:
SQL> create spfile='<temp location>' from pfile;
SQL> create pfile='/tmp/init<ins1>.ora' from spfile='<temp location>';
Repeat this step on all nodes.

2. Combine the initialization parameter files for all instances into one init<dbname>.ora file by copying all existing shared contents. All shared parameters defined in your init<dbname>.ora file must be global, with the format *.parameter=value.

3. Modify all instance-specific parameter definitions in the init<SID>.ora files using the following syntax, where <SID> is the system identifier of the instance:

<SID>.parameter=value
Note: Ensure that the parameters local_listener, diagnostic_dest, undo_tablespace, thread, instance_number, and instance_name are in <SID>.parameter format; for example, <SID>.local_listener=<local_listener_name>. These parameters must have one entry per instance.

4. Create the spfile in the shared location where rconfig created the spfile from the pfile in step 3 above:
SQL> create spfile='<shared location>' from pfile;
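As an illustration of the parameter-file format described above, a combined file for a two-instance VISRAC database using the "vis" instance prefix (the document's earlier example) might contain entries such as the following. All values shown are hypothetical:

```
# Global (shared) parameters
*.db_name=VISRAC
*.compatible=11.1.0.7
*.cluster_database=true

# Instance-specific parameters - one entry per instance
vis1.instance_name=vis1
vis1.instance_number=1
vis1.thread=1
vis1.undo_tablespace=APPS_UNDOTS1
vis1.local_listener=LISTENER_vis1
vis2.instance_name=vis2
vis2.instance_number=2
vis2.thread=2
vis2.undo_tablespace=APPS_UNDOTS2
vis2.local_listener=LISTENER_vis2
```

Shared parameters carry the `*.` prefix; per-instance parameters carry the instance SID, so each instance picks up only its own thread, undo tablespace, and local listener.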
25. Ensure that listener.ora and tnsnames.ora are generated as per the format shown in Appendix A.
26. As AutoConfig creates the listener.ora and tnsnames.ora files in a context directory, and not in the $ORACLE_HOME/network/admin directory, the TNS_ADMIN path must be updated in CRS. Run the following command as the root user:
# srvctl setenv nodeapps -n <node> \
-t TNS_ADMIN=<Full Path of ORACLE_HOME>/network/admin/<context_directory>
27. Start up the database instances and listeners on all nodes.

28. Run AutoConfig on all nodes, to ensure that each instance registers with all remote listeners.

29. Shut down and restart the database instances and listeners on all nodes.

30. Restart the database instances and listeners on all nodes.

31. De-register any old listeners and register the new listeners with CRS using the commands:
# srvctl remove listener -n <nodename> -l <listener_name>
# srvctl add listener -n <nodename> -o <oracle_home> -l <listener_name>
3.8 Establish Applications Environment for Oracle RAC 3.8.1 Preparatory Steps
Carry out the following steps on all application tier nodes:

1. Source the Oracle Applications environment.

2. Edit SID=<Instance 1> and PORT=<New listener port> in the $TNS_ADMIN/tnsnames.ora file, to set up a connection to one of the instances in the Oracle RAC environment.

3. Confirm that you are able to connect to one of the instances in the Oracle RAC environment.

Note: Windows users should delete the existing CM service on the node containing the concurrent processing tier and database tier, using the command %COMMON_TOP%\admin\install\adsvcm.cmd -deinstall
4. Edit the context variable jdbc_url, adding the instance name to the connect_data parameter.

5. Run AutoConfig using the command:
$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>
For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.

6. Check the $INST_TOP/admin/log/<MMDDhhmm> AutoConfig log file for errors.

7. Source the environment by using the latest environment file generated.

8. Verify the tnsnames.ora and listener.ora files. Copies of both are located in the $INST_TOP/ora/10.1.2/network/admin and $INST_TOP/ora/10.1.3/network/admin directories. In these files, ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames.
9. Verify the dbc file located in $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.
5. Restart the Applications processes, using the new scripts generated by AutoConfig.
6. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat steps 1-6 above to set up load balancing on the new application tier node.
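The dbc file verification in step 9 above can be sketched as follows. The dbc file name, hostnames, and connect string are illustrative examples; in a real environment you would run the checks against the dbc file generated in $FND_SECURE.

```shell
#!/bin/sh
# Hypothetical check of the EBS dbc file: every RAC instance host should
# appear in APPS_JDBC_URL, and LOAD_BALANCE must be YES.
DBC=${1:-/tmp/VIS.dbc}

# Standalone sample so the checks can be demonstrated without an EBS install.
cat > "$DBC" <<'EOF'
APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=vip-host2)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=vip-host3)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=VIS)))
EOF

# Confirm each expected virtual host is listed in the JDBC URL.
for h in vip-host2 vip-host3; do
  grep -q "$h" "$DBC" && echo "$h: present"
done
grep -q 'LOAD_BALANCE=YES' "$DBC" && echo "load balance: OK"
```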
3.9 Configure Parallel Concurrent Processing

3.9.1 Check prerequisites for setting up Parallel Concurrent Processing
To set up Parallel Concurrent Processing (PCP), you must have more than one Concurrent Processing node in your environment. If you do not, follow the appropriate instructions in My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.

Note: If you are planning to implement a shared Application tier file system, refer to My Oracle Support Knowledge Document 384248.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for configuration steps. If you are adding a new Concurrent Processing node to the application tier, you will need to set up load balancing on the new application tier node by repeating steps 1-6 in Section 3.10.
4. Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
5. Restart the Applications listener processes on each application tier node.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to the Install > Nodes screen, and ensure that each node in the cluster is registered.
7. Verify that the Internal Monitor for each node is defined properly, with the correct primary and secondary node specification, and work shift details. For example, Internal Monitor: Host2 must have primary node host2 and secondary node host3. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator.
8. Set the $APPLCSF environment variable on all the Concurrent Processing nodes to point to a log directory on a shared file system.
9. Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should point to a directory on a shared file system.)
10. Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. Setting it to ON means that a concurrent manager will fail over to a secondary Application tier node if the database instance to which it is connected becomes unavailable.
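The FNDSM/FNDFS check in step 4 can be illustrated with the sketch below. The alias names, node names, and file path are hypothetical; actual aliases follow the naming that AutoConfig generates for your context.

```shell
#!/bin/sh
# Hypothetical check: confirm an FNDSM (Service Manager) and an FNDFS
# (report viewer) alias exists for every concurrent node.
TNS=${1:-/tmp/tnsnames_cp.ora}

# Standalone sample with example aliases for two nodes.
cat > "$TNS" <<'EOF'
FNDSM_host2_VIS=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=1626))(CONNECT_DATA=(SID=FNDSM)))
FNDFS_host2_VIS=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=1626))(CONNECT_DATA=(SID=FNDFS)))
FNDSM_host3_VIS=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host3)(PORT=1626))(CONNECT_DATA=(SID=FNDSM)))
FNDFS_host3_VIS=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host3)(PORT=1626))(CONNECT_DATA=(SID=FNDFS)))
EOF

# Every node needs both entry types for PCP to manage and monitor it.
for node in host2 host3; do
  for svc in FNDSM FNDFS; do
    grep -q "^${svc}_${node}" "$TNS" && echo "${svc}_${node}: OK"
  done
done
```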
3. Edit $ORACLE_HOME/dbs/_ifile.ora, and add the following parameters:

_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE

4. Start the instances on all database nodes, one by one.
5. Start up the application services (servers) on all nodes.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instances.
7. Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers.
8. Restart the concurrent managers.
9. If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administrator.
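A minimal, idempotent sketch of step 3 (adding the two hidden parameters to the instance ifile) is shown below. The file path is an example only; the real ifile resides under $ORACLE_HOME/dbs on each database node.

```shell
#!/bin/sh
# Hypothetical sketch: append the two PCP-related hidden parameters to the
# per-instance ifile, without duplicating lines on repeated runs.
IFILE=${1:-/tmp/VIS1_ifile.ora}   # example path, not the real ifile name
touch "$IFILE"

for p in '_lm_global_posts=TRUE' '_immediate_commit_propagation=TRUE'; do
  # Only add the parameter if it is not already set in the file.
  grep -q "^${p%%=*}=" "$IFILE" || echo "$p" >> "$IFILE"
done

cat "$IFILE"
```

Because the loop checks for an existing setting first, the script can be re-run safely on each database node.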
Note: Windows users must set the value of "Concurrent Manager TWO_TASK" (the s_cp_twotask context variable) to the instance alias.

2. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.
Section 4: References
My Oracle Support Knowledge Document 384248.1: Sharing the Application Tier File System in Oracle E-Business Suite Release 12
My Oracle Support Knowledge Document 387859.1: Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12
My Oracle Support Knowledge Document 406982.1: Cloning Oracle Applications Release 12 with Rapid Clone
My Oracle Support Knowledge Document 240575.1: RAC on Linux Best Practices
My Oracle Support Knowledge Document 265633.1: Automatic Storage Management Technical Best Practices
Oracle Applications System Administrator's Guide, Release 12
Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1)
Oracle Database Administrator's Guide 11g Release 1 (11.1)
Oracle Database Backup and Recovery Advanced User's Guide 11g Release 1 (11.1)
Migration to ASM Technical White Paper
Sample LISTENER.ORA file for database nodes (with virtual host name)
LISTENER_<host_name> =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = <Virtual IP Address>)(PORT = <db_port>)(IP = FIRST)))
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = <host_name>)(PORT = <db_port>)(IP = FIRST)))
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC<SID>)))
    )
  )
SID_LIST_LISTENER_<host_name> =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = <11g ORACLE_HOME>)
      (SID_NAME = <SID>))
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = <11g ORACLE_HOME>)
      (PROGRAM = extproc))
  )

STARTUP_WAIT_TIME_LISTENER_<host_name> = 0
CONNECT_TIMEOUT_LISTENER_<host_name> = 10
TRACE_LEVEL_LISTENER_<host_name> = OFF
LOG_DIRECTORY_LISTENER_<host_name> = <11g ORACLE_HOME>/network/admin
LOG_FILE_LISTENER_<host_name> = <SID>
TRACE_DIRECTORY_LISTENER_<host_name> = <11g ORACLE_HOME>/network/admin
TRACE_FILE_LISTENER_<host_name> = <SID>
ADMIN_RESTRICTIONS_LISTENER_<host_name> = OFF
IFILE = <11g ORACLE_HOME>/network/admin/<CONTEXT_NAME>/listener_ifile.ora
Sample TNSNAMES.ORA file for database nodes (with virtual host name)
<CONNECT_STRING> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = <Virtual IP Address>)(PORT = <db_port>))
    (CONNECT_DATA =
      (SERVICE_NAME = <Service_name>)
      (INSTANCE_NAME = <SID>)
    )
  )
Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values:

Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to Oracle RAC conversion have been met before it starts the conversion.
Convert verify="NO": rconfig does not perform prerequisite checks, and starts the conversion immediately.
Convert verify="ONLY": rconfig only performs prerequisite checks; it does not start the conversion after completing them.

In order to validate and test the settings specified for converting to Oracle RAC with rconfig, it is advisable to execute rconfig with Convert verify="ONLY" before carrying out the actual conversion.
<!-- Specify current OracleHome of non-RAC database for SourceDBHome -->
<n:SourceDBHome>/oracle/product/10.2.0/db_1</n:SourceDBHome>

<!-- Specify OracleHome where the Oracle RAC database should be configured. It can be same as SourceDBHome -->
<n:TargetDBHome>/oracle/product/10.2.0/db_1</n:TargetDBHome>

<!-- Specify SID of non-RAC database and credential. User with sysdba role is required to perform conversion -->
<n:SourceDBInfo SID="sales">
  <n:Credentials>
    <n:User>sys</n:User>
    <n:Password>oracle</n:Password>
    <n:Role>sysdba</n:Role>
  </n:Credentials>
</n:SourceDBInfo>

<!-- ASMInfo element is required only if the current non-RAC database uses ASM Storage -->
<n:ASMInfo SID="+ASM1">
Note: The ASM instance name specified above is that of the local node, the node from which rconfig is executed to perform the Oracle RAC conversion. Before starting the actual conversion, ensure that the ASM instances on all the nodes are running, and that the required diskgroups are mounted on each instance.
  <n:Password>welcome</n:Password>
  <n:Role>sysdba</n:Role>
  </n:Credentials>
</n:ASMInfo>

<!-- Specify the list of nodes that should have Oracle RAC instances running. LocalNode should be the first node in this nodelist. -->
<n:NodeList>
  <n:Node name="node1"/>
  <n:Node name="node2"/>
</n:NodeList>

<!-- Specify prefix for Oracle RAC instances. It can be same as the instance name for non-RAC database or different. The instance number will be attached to this prefix. -->
<n:InstancePrefix>sales</n:InstancePrefix>

<!-- Specify port for the listener to be configured for Oracle RAC database. If port="", the listener existing on localhost will be used for Oracle RAC database. The listener will be extended to all nodes in the nodelist -->
<n:Listener port=""/>
Note: In order to use the existing listener definition and port assignment, you must specify a NULL entry for Listener port.
<!-- Specify the type of storage to be used by Oracle RAC database. Allowable values are CFS|ASM. The non-RAC database should have same storage type. -->
<n:SharedStorage type="ASM">
Note: rconfig can also migrate the single-instance database to ASM storage. If you want to use this option, specify the ASM parameters according to your environment in the above XML file. The ASM instance name specified above is only the current node's ASM instance. Ensure that the ASM instances on all the nodes are running and that the required diskgroups are mounted on each of them. The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:
select name, state, total_mb, free_mb from v$asm_diskgroup;
<!-- Specify Database Area Location to be configured for the Oracle RAC database. If this field is left empty, current storage will be used for the database. For CFS, this field will have a directory path. -->
<n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
Note: rconfig can also migrate the single-instance database to ASM storage. If you want to use this path, specify the ASM parameters according to your environment in the above XML file. If you are using CFS for your current database files, specify "NULL" to use the same location, unless you want to switch to another CFS location. If you specify a path for TargetDatabaseArea, rconfig will convert the files to the Oracle Managed Files nomenclature.
<!-- Specify Flash Recovery Area to be configured for the Oracle RAC database. If this field is left empty, current recovery area of the non-RAC database will be configured for the Oracle RAC database. If current database is not using a recovery area, the resulting Oracle RAC database will not have a recovery area. -->
<n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
</n:SharedStorage>
</n:Convert>
</n:ConvertToRAC>
</n:RConfig>
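Following the Convert verify note earlier, the sketch below switches a working copy of the XML to verify-only mode before the first rconfig run. The file path and the minimal XML stub are illustrative only; a real ConvertToRAC.xml is copied from the rconfig sample directory and fully edited first.

```shell
#!/bin/sh
# Illustrative stub only: in a real migration, copy the sample XML from
# $ORACLE_HOME/assistants/rconfig/sampleXMLs and edit it for your system.
XML=${1:-/tmp/ConvertToRAC.xml}
cat > "$XML" <<'EOF'
<n:Convert verify="YES">
</n:Convert>
EOF

# Switch to prerequisite-checks-only mode for the dry run.
sed -i 's/verify="YES"/verify="ONLY"/' "$XML"
grep 'verify=' "$XML"

# In the real environment, the dry run would then be:
#   $ORACLE_HOME/bin/rconfig /tmp/ConvertToRAC.xml
```

Running the verify-only pass first surfaces prerequisite failures without touching the source database.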
Where host_name is the public network name of the other nodes. If you plan to install the ORACLE_HOME on a drive other than C:, run the equivalent command for each node of the cluster:
C:\>NET USE \\node2\C$ C:\>NET USE \\node3\C$
Note: Refer to Oracle Clusterware Installation Guide 11g Release 1 (11.1) for Microsoft Windows, Part No. B28250-05 and Oracle Real Application Clusters Installation Guide 11g Release 1 (11.1) for Microsoft Windows, Part No. B28251-04 for any additional prerequisites specific to Oracle RAC on Windows.
Change Log
Date          Description
Sep 03, 2009  Rewrote Section 1.2.
              Initial publication.
Note 466649.1 by Oracle E-Business Suite Development
Copyright 2008, 2009 Oracle

Related Products: Oracle E-Business Suite > Applications Technology > Technology Components > Oracle Applications Technology Stack