
Business Continuity for Oracle Applications Release 11i Using Oracle Real Application Clusters and Physical Standby Database [ID 341437.1]

Modified: 06-AUG-2009   Type: WHITE PAPER   Status: PUBLISHED

Business Continuity for Oracle Applications Release 11i Using Oracle Real Application Clusters and Physical Standby Database
Oracle Applications Release 11i (11.5.10) has numerous configuration options that can be chosen to suit particular business scenarios, hardware capability, and availability requirements. This document describes how to configure an Oracle Applications Release 11i (11.5.10) environment running on an Oracle9i Release 2 (9.2.0.6) Real Application Clusters (RAC) database to use the physical standby database feature of Oracle Data Guard.

A number of conventions are used in this document:

Convention - Meaning
Application Tier - Machines running Forms, Web, Concurrent Processing, and other servers. Also called the middle tier.
Database Tier - Machines running the Oracle Applications database.
Production System - Primary Oracle Applications system, which will be used to create a standby system.
Standby System - Applications system created as a copy of the production system.
oracle - User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME - The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>. For systems installed with Rapid Install 11.5.8 or earlier, the context name will typically be <SID>.
CONTEXT_FILE - Full path to the Applications context file on the application tier or database tier. The default locations are as follows. Application tier context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml. Database tier context file: <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
APPSpwd - Oracle Applications database user password.
Monospace Text - Represents command line text. Type such a command exactly as shown, excluding prompts such as '%'.
<> - Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\ - The backslash character is entered at the end of a command line to indicate continuation of the command on the next line.

This document is divided into the following sections:


- Section 1: Before You Start
- Section 2: Design Considerations
- Section 3: Example Environment
- Section 4: Prerequisite Tasks
- Section 5: Configuration Steps
- Section 6: References
- Appendix A: Oracle Net Files


Section 1: Before You Start


The reader of this document should be familiar with the Oracle9i database server, and have at least a basic knowledge of Real Application Clusters (RAC) and standby database configurations. Refer to OracleMetaLink Note 279956.1, Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration, when planning to set up Real Application Clusters and shared devices with Oracle E-Business Suite Release 11i.

1.1 High Availability Terminology - Overview

It is important to understand the terminology used in a high availability environment. Key terms include the following.

- Real Application Clusters (RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime. The advantages of using Real Application Clusters include:
  - High availability
  - Rapid and automatic recovery from node failures or an instance crash
  - Increased scalability

- Standby Database assists with disaster recovery by providing a completely automated framework to maintain one or more transactionally consistent copies of the primary database. Changes can be transmitted from the primary database to the standby databases in a synchronous manner, which avoids any data loss, or in an asynchronous manner, which minimizes any performance impact on the production system. The standby database technology includes an automated framework to switch over to the standby system in the event of a physical disaster, data corruption, or planned maintenance at the production (primary) site. A standby database can be either a physical standby database or a logical standby database:

Physical standby database - Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.

Logical standby database - Contains the same logical information as the production database, although the physical organization and structure of the data can be different. It is kept synchronized with the primary database by transforming the data in the redo logs received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Thus, a logical standby database can be used concurrently for data protection and reporting.

NOTE At present, only physical standby databases are supported with Oracle Applications; logical standby databases are not supported.

- Oracle Data Guard is a set of services that create, manage, and monitor one or more standby databases to enable a production database to survive disasters and data corruption. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch a standby database to the production role, minimizing the downtime. The advantages of using Data Guard include:
  - Provides disaster protection and prevents data loss
  - Maintains transactionally consistent copies of the primary database
  - Protects against disasters, data corruption, and user errors
  - Does not require expensive and complex hardware or software mirroring

  Data Guard offers three modes of data protection:

  - Maximum Protection: This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database, and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode guarantees no data loss.

  - Maximum Availability: This mode is similar to the maximum protection mode, including no data loss. However, if a standby database becomes unavailable (for example, due to network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is resynchronized with the primary database. If there is a need to fail over before the standby database is resynchronized, some data may be lost.

  - Maximum Performance: This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation on the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database, and there is little effect on primary database performance.

NOTE Data Guard and Real Application Clusters are complementary, and should be deployed together to provide a comprehensive disaster recovery solution that will meet your site-specific application requirements. This note describes the use of the Data Guard Maximum Performance protection mode with a physical standby database. For more details on High Availability architectures, refer to the Maximum Availability Architecture white paper on the Oracle Technology Network.
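Since this note uses the Maximum Performance mode, you may want to confirm which protection mode and level are actually in effect once your configuration is complete. A minimal check, assuming a SQL*Plus session connected as SYSDBA (these V$DATABASE columns are available in Oracle9i Release 2):

  SQL> select protection_mode, protection_level from v$database;

MAXIMUM PERFORMANCE is the default mode, and is the mode used throughout this note.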

Section 2: Design Considerations


2.1 Requirements for Secondary Site

The site for your standby environment should:

- Be physically separate from the primary site, to protect against local and regional disasters. In terms of distance, it is typical practice for an organization to locate its standby data center in a different place (possibly a different city or state) from the production data center.
- Employ the same type of servers as at the production site, in the required numbers for adequate performance if put into use because a disaster has actually occurred, or because disaster recovery procedures are being tested (as they periodically should be).
- Have reliable and efficient network services to the primary data center and the location of the users (which may not be the same place).
- Have sufficient network bandwidth to the primary data center to cater for peak redo data generation (a sample query for estimating the redo rate appears at the end of this section).
- Have the required additional network bandwidth to support synchronization of the report, log, and output files, if your concurrent processing output needs to be mirrored.

For more detailed information on this topic, refer to the Maximum Availability Architecture white paper on the Oracle Technology Network.
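To help size the network bandwidth mentioned above, you can estimate the redo generation rate from the archived log history on the production database. The following is a minimal sketch only, assuming the database is already running in ARCHIVELOG mode so that the BLOCKS and BLOCK_SIZE columns of GV$ARCHIVED_LOG are populated; run it over a representative peak period:

  SQL> select thread#, sequence#, first_time,
              round(blocks * block_size / 1024 / 1024, 1) as size_mb,
              round((blocks * block_size) /
                    greatest((next_time - first_time) * 86400, 1) / 1024, 1) as kb_per_sec
         from gv$archived_log
        order by first_time;

The kb_per_sec column gives an approximate redo rate per archived log; the highest values indicate the peak bandwidth the standby link must sustain.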

Section 3: Example Environment


Oracle E-Business Suite Release 11i (11.5.10) with an Oracle 9.2.0.5 database server was installed using Rapid Install. The database was then upgraded to 9.2.0.6, and Oracle E-Business Suite Release 11i was upgraded to 11.5.10.1 (Consolidated Update 1). The standby system will use the same version of the same products as installed on the primary system.

3.1 Software and Hardware Configuration


The following versions of software and hardware were used in this installation. The architecture described in this note is a sample configuration. For more details regarding supported architectures, refer to OracleMetaLink Note 285267.1.

Software Component - Version
Oracle E-Business Suite Release 11i - Release 11.5.10.1 (11.5.10 production release with Consolidated Update 1 applied)
Oracle9i database server - Release 9.2.0.6 (Production release)
Oracle Cluster Manager - Release 9.2.0.6 (Production release)
Oracle9i Real Application Clusters - Release 9.2.0.6 (Production release)
Linux Operating System - RedHat Enterprise AS 2.4.9-e.25

Section 4: Prerequisite Tasks


- Set Up Cluster Hardware and Software on Production and Standby Sites
  - Connect the required number of nodes to the cluster interconnect and shared storage subsystem.
  - Once the hardware has been set up, install the cluster software (clusterware), including any Oracle-required operating system dependent (OSD) patches. Refer to your vendor's operating system documentation for the cluster software installation procedure. For Sun clusters, you will also need to install the Oracle UDLM patch from the first CD of the Oracle9i Enterprise Edition CD pack.
  - Start up the clusterware on all nodes of your cluster.

- Set up Shared Storage
  - If your platform supports a cluster file system, set up the cluster file system on shared storage.
  - If your platform does not support a cluster file system, or you want to deploy database files on raw devices for performance reasons, install the vendor-specific logical volume manager (for example, Veritas Cluster Volume Manager), and set up the requisite raw devices on shared disk volumes.
  - Start up the shared storage management components (for example, Veritas Volume Cluster Manager).

  In addition to these steps, you may need to consult your storage vendor's documentation for details of the steps required to set up the shared disk subsystem, and how to mirror and stripe these disks.

- Complete Installation of Oracle E-Business Suite Release 11i on Production System

NOTE If you are not using raw devices as shared storage, you may specify a cluster file system location for your datafiles when prompted by Rapid Install.

  - If you do not already have an existing single instance environment, complete the installation of Oracle E-Business Suite Release 11i (Release 11.5.10) by running Rapid Install.
  - Convert this environment to use RAC by following the steps listed in OracleMetaLink Note 279956.1, Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig. Before carrying out this RAC conversion, apply the following patches to your primary (production) environment, if they have not been applied already.

    Patch Number - Description
    3240000 - 11.5.10 Oracle E-Business Suite Consolidated Update (CU1)
    4175764 - 11i.ADX.E.1 Feb 2005 Cumulative Update
    4045163 - Concurrent Processing PCP/RAC fixes for new GSM enabled mode and failover issues
    4404740 - APPSRAP: Usage of CP_TWO_TASK in adcmctl.sh
    4462244 - APPSRAP: ADPATCH errors out in 11.5.10 RAC environment when using load balancing alias as TWO_TASK
    4074603 - FAL does not copy archives from cross-instance archive destination

    Download the above patches for your operating system, and examine the associated READMEs for any prerequisite patches and special instructions. Run AutoConfig on all the tiers in your environment after application of these patches.

- Choose the level of data protection needed by your organization from the Data Guard Protection modes described in the Before You Start section of this document.

NOTE System administrators are strongly advised to take a complete backup of the environment before executing these procedures, and to take additional backups between stages of this migration. The procedures in this document should be tested in non-production environments before using them in production environments. All users must log off your system while the migration procedure is being carried out.

Section 5: Configuration Steps


The configuration steps you must carry out are divided into a number of stages:

Configuration Step - Environment
1. Enable Forced Logging - Production system
2. Configure Oracle Net aliases to point to standby database - Production system
3. Enable Archive Logging on production database - Production system
4. Enable standby system to communicate with production database - Production system
5. Execute pre-clone steps on database and application tier nodes - Production system
6. Copy application tier files from production system to standby system - Production to standby system
7. Generate a standby control file from the production database - Production to standby system
8. Copy the database files, standby control file and ORACLE_HOME to standby system - Production to standby system
9. Configure basic parameters for standby database - Standby system
10. Configure additional parameters for standby database - Standby system
11. Configure Oracle Net aliases to point to production database - Standby system
12. Verify Cluster Manager settings and start Cluster Manager - Standby system
13. Start the physical standby database - Standby system
14. Start log application services on standby database - Standby system
15. Enable log shipping from production database and verify redo data is being shipped - Production and standby systems
16. Add temporary files to standby database - Standby system
17. Configure all application tier nodes in standby environment - Standby system
18. Perform a test failover (Optional but recommended) - Standby and production systems

These steps are all described in more detail below.

5.1 Enable Forced Logging


- Place the primary database in FORCE LOGGING mode by using the following SQL statement:

  SQL> ALTER DATABASE FORCE LOGGING;

NOTE See OracleMetaLink Note 216211.1 for more details of force logging implementation. This statement may take a considerable amount of time to complete, because it has to wait for all unlogged direct write I/O operations to finish.
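You can confirm that force logging is now in effect with a quick check from SQL*Plus (the FORCE_LOGGING column of V$DATABASE is available in Oracle9i Release 2):

  SQL> select force_logging from v$database;

The query should return YES.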

5.2 Configure Oracle Net aliases to point to standby system


- Under the $TNS_ADMIN directory on the production database system, create a <context_name>_ifile.ora file, and add TNS aliases that point to the standby system database instances. Refer as necessary to the Oracle Net sample files in Appendix A of this document; a sketch of typical entries follows.
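The exact entries depend on your standby host names, ports, and SIDs; the following is a minimal sketch for a two-node standby, in which the host names standbyhost1 and standbyhost2 and port 1521 are placeholders:

  <standby_sid1> =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost1)(PORT = 1521))
      (CONNECT_DATA = (SID = <standby_sid1>))
    )

  <standby_sid2> =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standbyhost2)(PORT = 1521))
      (CONNECT_DATA = (SID = <standby_sid2>))
    )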

5.3 Enable Archive Logging on production system


- Add the following lines to the <ORACLE_HOME>/dbs/<CONTEXT_NAME>_ifile.ora of the first instance (instance1) of your production system:

  log_archive_dest_1='LOCATION=<location of the archive log directory>'
  log_archive_dest_2='SERVICE=<standby_sid1> LGWR ASYNC REOPEN=10 ALTERNATE=log_archive_dest_4'
  log_archive_dest_3='SERVICE=<production_sid2> ARCH REOPEN=10'
  log_archive_dest_4='SERVICE=<standby_sid2> LGWR ASYNC REOPEN=10'
  log_archive_dest_state_4=ALTERNATE
  #log_archive_dest_state_2=defer
  log_archive_format=<production_sid1>_%s_%t.log
  log_archive_min_succeed_dest=1
  log_archive_start=TRUE

- Add the following lines to the <ORACLE_HOME>/dbs/<CONTEXT_NAME>_ifile.ora of the second instance (instance2) of your production system:

  log_archive_dest_1='LOCATION=<location of the archive log directory>'
  log_archive_dest_2='SERVICE=<standby_sid2> LGWR ASYNC REOPEN=10 ALTERNATE=log_archive_dest_4'
  log_archive_dest_3='SERVICE=<production_sid1> ARCH REOPEN=10'
  log_archive_dest_4='SERVICE=<standby_sid1> LGWR ASYNC REOPEN=10'
  log_archive_dest_state_4=ALTERNATE
  #log_archive_dest_state_2=enable
  log_archive_format=<production_sid2>_%s_%t.log
  log_archive_min_succeed_dest=1
  log_archive_start=TRUE

NOTE In an RAC environment, the thread parameter (%t or %T) is required to uniquely identify the archived redo logs specified in the LOG_ARCHIVE_FORMAT parameter.

- Shut down all the instances using the command:

  SQL> shutdown immediate;

- On production instance1, set:

  cluster_database=false

  in the $ORACLE_HOME/dbs/init<production_instance1>.ora file, and then issue the following commands:

  SQL> startup mount
  SQL> alter database archivelog;
  SQL> alter database open;
  SQL> shutdown immediate;

  Now change the cluster_database line in the $ORACLE_HOME/dbs/init<production_instance1>.ora file to read:

  cluster_database=true

  and issue the command:

  SQL> startup;

- On production instance 2, issue the commands:

  SQL> shutdown immediate;
  SQL> startup;
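Once the instances are back up, it is worth confirming that the database is now running in ARCHIVELOG mode and that automatic archiving is enabled. A quick check from SQL*Plus connected as SYSDBA:

  SQL> archive log list;

The output should report the database log mode as Archive Mode and automatic archival as Enabled.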

5.4 Enable standby hosts to communicate with production database


- Oracle E-Business Suite Release 11.5.10 introduced support for the Oracle Net security feature tcp.validnode_checking, which is used to prevent unauthorized Oracle Net access (for example, via SQL*Plus) to the Applications database. If a node or PC is not registered, the connection attempt will fail with the error ORA-12537: TNS: connection closed. This feature is enabled by default. If you are sure you do not want to use it, you can set the Profile Option 'SQLNet Access' (FND_SQLNET_ACCESS) to the value 'ALLOW_ALL'. If you require the feature to remain in use, you must add all the standby host systems to a special list of invited nodes in the SQLNET.ORA file.

NOTE It is not recommended to disable tcp.validnode_checking on a production system, because of the security implications. Only disable it if your organization's security policy permits this.

The standby host system can be registered from Oracle Applications Manager (OAM), by navigating as follows:

  Site Map > Administration > System Configuration > Hosts > 'Register' button under 'Other Hosts'

OAM provides a wizard that can be used to specify the list of hosts that need to access the database via Oracle Net:

  Applications Dashboard > Security > Manage Security Options > Enable Restricted Access > Run Wizard

Select the standby host you added in the previous step, and press Continue. If the displayed list is correct, and now includes your new host, press Submit.

- After this operation completes, run AutoConfig on all production database servers to generate new TNS configuration files:

  % cd <RDBMS_ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
  % adautocfg.sh

  You will also need to stop and restart the database listener:

  % cd <RDBMS_ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
  % addlnctl.sh stop <SID>
  % addlnctl.sh start <SID>
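For reference, the invited-node entries that end up in the Oracle Net configuration look similar to the sketch below, in which the host names are placeholders. On an AutoConfig-managed database tier, sqlnet.ora is regenerated by AutoConfig, so any manual additions are normally placed in the sqlnet_ifile.ora include file rather than edited directly (verify the include file name used in your environment):

  tcp.validnode_checking = yes
  tcp.invited_nodes = (prodhost1, prodhost2, standbyhost1, standbyhost2)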

5.5 Execute pre-clone steps on all production database and application tier nodes
NOTE Before carrying out this step, ensure that you have applied all the prerequisite patches mentioned in OracleMetaLink Note 230672.1, Cloning Oracle Applications Release 11i with Rapid Clone.

- Run adpreclone on any one node of the production system database tier, as follows:
  - Log on to the source system as the oracle user, and run the following commands:

    % cd <RDBMS ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>
    % perl adpreclone.pl dbTier

- Run adpreclone on all the application tier nodes of the production system, as follows:
  - Log on to the source system as the applmgr user, and run the following commands on each node that contains an APPL_TOP:

    % cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
    % perl adpreclone.pl appsTier

5.6 Copy application tier files from production system to standby system

- Copy the application tier file system from the production system application tier to the standby system application tier as follows (a sample copy command is sketched after this list):
  1. Log on to the source system application tier nodes as the applmgr user.
  2. Shut down the application tier server processes.
  3. Copy the following application tier directories from the source node to the target application tier node:
     - <APPL_TOP>
     - <OA_HTML>
     - <OA_JAVA>
     - <OA_JRE_TOP>
     - <COMMON_TOP>/util
     - <COMMON_TOP>/clone
     - <806 ORACLE_HOME>
     - <iAS ORACLE_HOME>
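No particular copy mechanism is mandated; any method that preserves ownership, permissions, and symbolic links will do. A minimal sketch using tar over ssh, in which the target host name standbyapps1 is a placeholder and the same directory structure is assumed on the standby node:

  % cd <APPL_TOP>
  % tar cf - . | ssh applmgr@standbyapps1 'cd <APPL_TOP> && tar xf -'
  # Repeat for <OA_HTML>, <OA_JAVA>, <OA_JRE_TOP>, <COMMON_TOP>/util,
  # <COMMON_TOP>/clone, the <806 ORACLE_HOME> and the <iAS ORACLE_HOME>.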

5.7 Generate a standby control file from the production database


- On the primary database, create the control file for the standby database using the command:

  SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '<full path to control file directory>/<control file name>';

NOTE The filename for the newly created standby control file must be different from the filename of the current control file of the production database.

5.8 Copy the database files, standby control file and ORACLE_HOME contents to standby system
- Shut down the production system database.
- Copy the database files and standby control file (generated in Step 5.7) to the standby environment shared file system.
- Copy the ORACLE_HOME (on which adpreclone was executed in Step 5.5) to the standby database server.

NOTE To eliminate the downtime for this step, you can use RMAN to take a hot backup of the database to the standby location (a sketch appears at the end of this section). Refer to the RMAN documentation for more details.

- Start up all the production system instances.
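As mentioned in the note above, an RMAN backup can be used instead of a cold copy to avoid shutting down the production database. The following is a minimal sketch only, assuming no recovery catalog and a placeholder backup location of /backup/stby that is visible to the standby server; refer to the RMAN documentation for the authoritative procedure:

  % rman target /
  RMAN> backup database format '/backup/stby/db_%U';
  RMAN> backup current controlfile for standby format '/backup/stby/ctl_%U';
  RMAN> sql 'alter system archive log current';
  RMAN> backup archivelog all format '/backup/stby/arc_%U';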

5.9 Configure basic parameters for standby database


- Create an instance-specific context file on the first database node in the standby system, using the command:

  % $ORACLE_HOME/appsutil/clone/bin/adclonectx.pl \
    contextfile=$ORACLE_HOME/appsutil/<production instance1_hostname>.xml \
    java=<location of java directory>

  For example, the java directory might be $ORACLE_HOME/appsoui/jre/1.3.1.

  This will prompt the following questions:

  Do you want to use a virtual hostname for the target node (y/n) [n]?
  Target hostname [<no default>]
  Do you want the inputs to be validated (y/n)? [n]
  Target System database name [<no default>]
  Target instance is a Real Application Cluster (RAC) instance (y/n)? [y]
  Current node is the first node in an N Node RAC Cluster (y/n)? [n]
  Number of instances in the RAC Cluster [2]?

  NOTE The next seven parameters will be prompted for as many times as the number of cluster instances you specified in the question above.

  Host name []: (e.g. host4)
  Database SID []: (e.g. instance1)

  NOTE Enter instance names that are different from the production instance names.

  Instance number [1]: (e.g. 1)
  Listener port []: (e.g. 1521)
  Private interconnect name []:
  Target system quorum disk location required for cluster manager and node monitor []:
  Target system cluster manager service port [9998]
  Oracle OS User [oracle]:
  Oracle OS Group [dba]:
  Target system domain name [] (e.g. oracle.com)
  Target system RDBMS ORACLE_HOME directory [] (e.g. /d1/oracle/orarac)
  Target system utl_file accessible directories list [/usr/tmp]
  Number of DATA_TOP's on the target system [1] (e.g. 1)
  Target system DATA_TOP 1 [] (e.g. /d3/oracle/racdata/)

  This will generate a new context file referring to this host and instance.

- Run AutoConfig on this standby database node using the command:

% $ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=<contextfile generated in last step>


NOTE This AutoConfig execution will show errors in the AutoConfig log file. These errors can be ignored at this stage of the configuration.

- Create an instance-specific context file on the second database node in the standby system, using the command:

  % $ORACLE_HOME/appsutil/clone/bin/adclonectx.pl \
    contextfile=$ORACLE_HOME/appsutil/<production instance1_hostname>.xml \
    java=<location of java directory>

  For example, the java directory might be $ORACLE_HOME/appsoui/jre/1.3.1.

  This will prompt the same questions as in the previous steps:

  Do you want to use a virtual hostname for the target node (y/n)? [n]
  Target hostname [<no default>]
  Do you want the inputs to be validated (y/n)? [n]
  Target System database name [<no default>]
  Target instance is a Real Application Cluster (RAC) instance (y/n)? [y]
  Current node is the first node in an N-Node RAC Cluster (y/n)? [n]

  NOTE Enter 'Y', even though this is not the first node in the cluster.

  Number of instances in the RAC Cluster [2]

  NOTE The next seven parameters will be prompted for as many times as the number of cluster instances you specified in the question above.

  Host name []: (e.g. host4)
  Database SID []: (e.g. instance2)
  Instance number [1]: (e.g. 1)
  Listener port []: (e.g. 1521)
  Private interconnect name []:
  Target system quorum disk location required for cluster manager and node monitor []:
  Target system cluster manager service port [9998]
  Oracle OS User [oracle]:
  Oracle OS Group [dba]:
  Target system domain name [] (e.g. oracle.com)
  Target system RDBMS ORACLE_HOME directory [] (e.g. /d1/oracle/orarac)
  Target system utl_file accessible directories list [/usr/tmp]
  Number of DATA_TOP's on the target system [1] (e.g. 1)
  Target system DATA_TOP 1 [] (e.g. /d3/oracle/racdata/)

  The responses given to these questions will be used to generate a new context file for this host and instance. Edit this new context file and change the values of the following parameters:

  - instance_number, specifying the instance number for this instance
  - instance_thread, specifying the redo log thread for this instance
  - undo tablespace, specifying the undo tablespace for this instance

- Run AutoConfig on this standby database node, using the command:

  % $ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=<context file generated in last step>

NOTE This AutoConfig execution will show errors in the AutoConfig log file. These errors can be ignored at this stage of the configuration.

5.10 Configure additional parameters for standby database


- On database node 1 of the standby system, add the following parameters to the <context_name>_ifile.ora in the $ORACLE_HOME/dbs directory:

  control_files='<path to standby control file created in Step 5.7>'

  NOTE Comment out the control_files parameter in the <context_name>_APPS_BASE.ora file, as it is no longer used.

  standby_archive_dest='<path to a suitable shared directory>'
  log_archive_format=<standby_sid1>_%s_%t.log
  log_archive_min_succeed_dest=1
  log_archive_start=TRUE
  standby_file_management = AUTO
  remote_archive_enable = TRUE
  fal_client = <standby_sid1>
  fal_server = <production_sid1>,<production_sid2>,<standby_sid2>
  log_archive_dest_1='LOCATION=<path to archive dest>'
  log_archive_dest_2='SERVICE=<production_sid1> LGWR ASYNC REOPEN=10 ALTERNATE=log_archive_dest_4'
  log_archive_dest_state_2=defer
  log_archive_dest_3='SERVICE=<production_sid2> ARCH REOPEN=10'
  log_archive_dest_4='SERVICE=<standby_sid2> LGWR ASYNC REOPEN=10'
  log_archive_dest_state_4=ALTERNATE

  NOTE In an RAC environment, the thread parameter (%t or %T) is required to uniquely identify the archived redo logs specified in the LOG_ARCHIVE_FORMAT parameter.

- On database node 2 of the standby system, add the following parameters to the <context_name>_ifile.ora in the $ORACLE_HOME/dbs directory:

  control_files='<path to standby control file created in Step 5.7>'

  NOTE Comment out the control_files parameter in the <context_name>_APPS_BASE.ora file, as it is no longer used.

  standby_archive_dest='<path to a suitable shared directory>'
  #log_archive_dest_state_2=defer
  log_archive_format=<standby_sid2>_%s_%t.log
  log_archive_min_succeed_dest=1
  log_archive_start=TRUE
  standby_file_management = AUTO
  remote_archive_enable = TRUE
  fal_client = <standby_sid2>
  fal_server = '<production_sid2>','<production_sid1>','<standby_sid1>'
  log_archive_dest_1='LOCATION=<path to archive dest>'
  log_archive_dest_2='SERVICE=<production_sid2> LGWR ASYNC REOPEN=10 ALTERNATE=log_archive_dest_4'
  log_archive_dest_state_2=defer
  log_archive_dest_3='SERVICE=<production_sid1> ARCH REOPEN=10'
  log_archive_dest_4='SERVICE=<standby_sid1> LGWR ASYNC REOPEN=10'
  log_archive_dest_state_4=ALTERNATE

- If the path for data files and log files is not the same as on the production database, add the following two parameters:

  db_file_name_convert=('<path to data files on production system>', '<path to data files on standby system>')
  log_file_name_convert=('<path to log files on production system>', '<path to log files on standby system>')

- Create the directories for archive log files to match the paths specified in the above init.ora parameters, for example as shown below.
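For instance, if the archive destinations above were set to hypothetical paths such as /d3/oracle/racdata/arch (local archive destination) and /d3/oracle/racdata/stby_arch (standby archive destination), the directories could be created as the oracle user on each standby database node with:

  % mkdir -p /d3/oracle/racdata/arch
  % mkdir -p /d3/oracle/racdata/stby_arch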

5.11 Configure Oracle Net aliases to point to production database


- Under the $TNS_ADMIN directory on the standby database system, create a <context_name>_ifile.ora file, and add TNS aliases that point to the production database instances referenced by the log_archive_dest_n parameters defined in Step 5.10 above. Refer as necessary to the Oracle Net sample files in Appendix A of this document.
- Ensure that each production and standby instance can successfully tnsping every alias specified in its <context_name>_ifile.ora file; in other words, every node must be able to reach every other node (see the checks below).
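A simple way to verify this is to run tnsping against every alias from every database node; the alias names below stand for whatever production and standby aliases you defined in the ifile:

  % tnsping <production_sid1>
  % tnsping <production_sid2>
  % tnsping <standby_sid1>
  % tnsping <standby_sid2>

Each command should end with an OK (nn msec) response; repeat the checks on every production and standby database node.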

5.12 Verify cluster manager settings and start Cluster Manager


- On the standby database server, verify the contents of the $ORACLE_HOME/oracm/admin/cmcfg.ora file, as per the supplied sample cmcfg.ora:

  HeartBeat=15000
  KernelModuleName=hangcheck-timer
  ClusterName=Oracle Cluster Manager, version 9i
  PollInterval=1000
  MissCount=210
  PrivateNodeNames=host2 host3
  PublicNodeNames=int-host2 int-host3
  ServicePort=9998
  CmDiskFile=<path to shared drive>/CmDiskFile
  HostName=<private hostname>

  NOTE If the cmcfg.ora file in your environment does not match the sample file above, add any missing parameters as per the sample file shown above. For more information on these parameters, refer to the Linux Best Practices document.

- Log in as root on the standby database server, set the environment variable ORACLE_HOME to the Cluster Manager install location, and start Cluster Manager with the following commands:

  % cd $ORACLE_HOME/oracm/bin
  % ./ocmstart.sh

- Run the command ps -ef | grep oracm and ensure at least one oracm process is running. If none can be seen, try to identify the problem by checking the Cluster Manager log files under $ORACLE_HOME/oracm/log.

5.13 Start the standby database


- Log in as the oracle user on the standby database server.
- Source the <context_name>.env environment file under $ORACLE_HOME.
- Start up the standby database instance1 using the commands:

  SQL> startup nomount pfile=<RDBMS ORACLE_HOME>/dbs/init<standby SID1>.ora
  SQL> alter database mount standby database;

- Start up the standby database instance2 using the commands:

  SQL> startup nomount pfile=<RDBMS ORACLE_HOME>/dbs/init<standby SID2>.ora
  SQL> alter database mount standby database;

- Start the TNS Listener on each of the standby database servers.

5.14 Start log apply services on standby database


- On any one of the standby database instances, start the log apply services with the command:

  SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

  The DISCONNECT FROM SESSION option is needed so that log apply services can run in a background session.

NOTE In a Real Application Clusters environment, only one instance can be in managed recovery mode. When both the primary and standby databases are in a Real Application Clusters configuration, and the standby database is in managed recovery mode, a single instance of the standby database applies all sets of logs transmitted by the primary instances. In such a case, the standby instances that are not applying redo cannot be in read-only mode while managed recovery is in progress; hence, the other instances should be shut down, although they can remain mounted.
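To confirm that the managed recovery process has started on the applying instance, you can query V$MANAGED_STANDBY (available in Oracle9i Release 2) from that instance:

  SQL> select process, status, thread#, sequence# from v$managed_standby;

An MRP0 row with a status such as WAIT_FOR_LOG or APPLYING_LOG indicates that managed recovery is active.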

5.15 Enable log shipping from production instance


- Log in to the production instance and start the log shipping to the standby database with the commands:

  SQL> alter system set log_archive_dest_state_2=ENABLE;
  SQL> alter system switch logfile;

- For both the production and standby instances, change the $ORACLE_HOME/dbs/<context_name>_ifile.ora parameter log_archive_dest_state_2 to ENABLE.

- Successful log shipping can be verified as follows.
  - Check for errors in the log archive destinations with the command:

    SQL> select * from v$archive_dest_status where status != 'INACTIVE';

  - Check for existing redo logs on the standby instance with the command:

    SQL> select sequence#, first_time, next_time from gv$archived_log order by sequence#;

  - Perform a log switch using the command:

    SQL> alter system switch logfile;

  - Check for new redo logs received on the standby instance with the command:

    SQL> select sequence#, first_time, next_time from gv$archived_log order by sequence#;

  - Check whether these newly-received archive logs have been applied on the standby instances with the command:

    SQL> select sequence#, first_time, next_time, applied, completion_time from gv$archived_log where applied='YES' order by sequence#;

NOTE You should also check the alert log files associated with each instance for any errors in log shipping and log application services.

5.16 Add temporary files to standby database


- To save time on failover, you can add temporary files to the standby database. Use the file details collected in Step 5.7.
- Connect to the standby instance (in managed recovery mode), and run the following commands:

  SQL> alter database recover managed standby database cancel;
  SQL> alter database open read only;
  SQL> alter tablespace temp add tempfile '<file specification>' size <size> reuse;
  SQL> alter database recover managed standby database disconnect from session;

5.17 Configure all Application Tier nodes in standby environment


NOTE You must perform the following steps on all application tier nodes in your standby environment.

- Log on to the application tier of the standby environment, and create an Applications context file by executing the following command:

  % cd $COMMON_TOP/clone/bin
  % perl adclonectx.pl contextfile=<APPL_TOP>/admin/<production context file>.xml

  NOTE Answer the questions from this utility carefully. You must enter the standby database name and any one instance SID when prompted.

- Run AutoConfig using this new context file:

  % perl $AD_TOP/bin/adconfig.pl contextfile=<APPL_TOP>/admin/<standby context file>.xml run=INSTE8

NOTE This AutoConfig run will show errors in the AutoConfig logfile. These errors can be ignored at this stage.

5.18 Perform a test failover


NOTE This test failover step is optional but highly recommended.

- Before starting a failover operation:
  - Identify the parameters that must be changed to complete the role transition. You must define the settings of the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n parameters on all standby sites, so that when a switchover or failover operation occurs, all of the standby sites will continue to receive logs from the new primary database.
  - Verify that all but one standby instance in a Real Application Clusters configuration are shut down. This is essential for failover to take place.

- Perform the following steps to carry out a failover:
  - Identify and resolve any archived redo log gaps.
    - Check for a redo log gap between the production and standby database using the command:

      SQL> select * from gv$archive_gap;

    - Locate any missing logs and copy them to the standby server's standby archive log destination, then register them with the command:

      SQL> alter database register physical logfile '<filename on standby>';

    NOTE Only one gap at a time is reported in v$archive_gap. If you find a gap and resolve it, repeat this process until there are no more gaps reported.

  - Initiate the failover operation on the target physical standby database using the command:

    SQL> alter database recover managed standby database finish skip standby logfile;

  - After completing the previous step, convert the physical standby database to the primary role using the commands:

    SQL> alter database commit to switchover to primary;
    SQL> shutdown immediate;
    SQL> startup;

NOTE After issuing this SQL statement, you can no longer use this database as a standby database and subsequent redo logs from the original primary database cannot be applied. The standby redo logs were archived, and should be copied to, registered, and recovered on all other standby databases derived from the original primary database. (This will happen automatically if the standby destinations are correctly defined on the new primary database.) During a failover, the original primary database is eliminated from participating in the configuration. To reuse the old primary database in the new configuration, you must recreate it as a standby database, using a backup copy of the new primary database.

Run AutoConfig on all nodes of the standby system:

- De-register the current configuration using the command:

  % perl $ORACLE_HOME/appsutil/bin/adgentns.pl appspass=apps contextfile=$CONTEXT_FILE -removeserver

- Run AutoConfig on all database tier nodes, using the command:

  % $ORACLE_HOME/appsutil/scripts/<context_name>/adautocfg.sh

- Run AutoConfig on all the application tier nodes, using the command:

  % $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh

NOTE Ensure that the load-balanced and failover aliases are generated correctly in the tnsnames.ora file, and that it includes all the database nodes and SIDs as part of the TNS aliases.

- Start up the Applications services and verify that you are able to access the new environment.

Section 6: References

Appendix A - Oracle Net Files


Sample LISTENER.ORA file for a Database Node

<SID> =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC<SID>))
    (ADDRESS = (PROTOCOL = TCP)(Host = host2)(Port = db_port))
  )

SID_LIST_<SID> =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = <10g Oracle Home path>)
      (SID_NAME = <SID>)
    )
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = <10g Oracle Home path>)
      (PROGRAM = extproc)
    )
  )

STARTUP_WAIT_TIME_<SID> = 0
CONNECT_TIMEOUT_<SID> = 10
TRACE_LEVEL_<SID> = OFF
LOG_DIRECTORY_<SID> = <10g Oracle Home path>/network/admin
LOG_FI
