Do you have any requests for future EBS - Technology Webcast events? Please email your suggestions to ruediger.ziegler@oracle.com, subject: Topics of Interest.
AGENDA
Presentation and Demo: approximately 60-75 minutes
Q&A Session: maximum 15 minutes
Web attendees can ask questions via the Q&A panel; phone attendees can ask questions via the Q&A panel or by phone.
Copyright 2011, Oracle and/or its affiliates. All rights reserved.
Who to ask? Send your question via the Q&A panel.
Agenda
- RAC
  - Introduction
  - Configuration
- PCP
  - Introduction
  - Configuration
  - Failover/Failback
- PCP/RAC
  - Load Balancing
  - Profiles
  - Instance/Node Affinity
  - Failover/Failback
RAC - Introduction
[Architecture diagram: client connections reach the Applications Tier (R12 Web, CP, and Admin tiers), which communicates over a high-speed interconnect with the Database Tier running a 10g/11g RAC Oracle Database.]
[Architecture diagram: clients connect through a load balancer (Local Director) to multiple Applications Tiers (R12 Web, CP, and Admin tiers); the Database Tier runs a 10g/11g RAC Oracle Database.]
PCP - Introduction
If the ICM's node fails, an Internal Monitor on a surviving node can spawn a new ICM on that node.
Services and managers move back to their primary nodes when those nodes come back up.
If the ICM goes down, the Internal Monitor will attempt to start a new ICM on the local node. If multiple ICMs are started, only the first will stay active. The others will gracefully exit.
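The duplicate-ICM behavior above can be approximated from the shell: the ICM runs as an FNDLIBR process, and a startup attempt can first look for one that is already active. A minimal sketch, with the process listing inlined so the example is self-contained (the exact FNDLIBR command line shown is illustrative; on a live node you would inspect `ps -ef` directly):

```shell
# Sketch: refuse to start a second ICM if one is already active.
# Inlined sample listing; on a live node use: ps -ef | grep '[F]NDLIBR'
ps_output='applmgr  1526565  1  0 12:00 ?  00:00:03 FNDLIBR FND CPMGR'

if echo "$ps_output" | grep -q 'FNDLIBR FND CPMGR'; then
  echo "ICM already active; exiting gracefully"
else
  echo "no ICM found; starting one via adcmctl.sh"
fi
```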
[Diagram: numbered ICM failover sequence on Node 2, involving the APPS TNS Listener, ICM, Service Manager, Internal Monitor, and Standard Managers.]
PCP - Configuration
The value shown here (on the manager definition screen) determines the number of processes that will run when the Standard Manager fails over to the secondary node.
PCP - Failover/Failback
Managers fail back to the primary node: the Inventory Remote Procedure Manager and MRP Manager fail back to APRAC02.
CP TWO_TASK:
- The CP TWO_TASK is set in the context file (CP_TWOTASK).
- CP_TWOTASK is set to a load-balanced alias in a RAC environment.
- If a RAC instance goes down, CP processes connected to the failed instance will terminate, be restarted by the ICM, and connect to a surviving instance using the load-balanced alias.
- If a CP node fails, CP processes running on the failed node are marked terminated and migrated to a surviving node by the ICM.
[Diagram: CP Node 1 and CP Node 2, each with TWO_TASK set to the load-balanced RAC_BALANCE alias, connecting to the RAC database (Node 2 runs instance RAC2).]
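CP_TWOTASK is maintained in the applications context file. A sketch of checking it, assuming the standard R12 context variable name s_cp_twotask (the XML element layout and the RAC_BALANCE value below are inlined illustrative samples, not read from a live system):

```shell
# Illustrative context-file line; on a live system you would run:
#   grep s_cp_twotask "$CONTEXT_FILE"
sample='<CP_TWOTASK oa_var="s_cp_twotask">RAC_BALANCE</CP_TWOTASK>'

# Extract the alias value between the tags.
alias_value=$(echo "$sample" | sed 's/.*">\([^<]*\)<.*/\1/')
echo "CP_TWOTASK is set to: $alias_value"
```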
PCP/RAC - Load Balancing
tnsnames.ora example entry for a 2-instance RAC database:

RAC_BALANCE=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=YES)
      (FAILOVER=YES)
      (ADDRESS=(PROTOCOL=tcp)(HOST=aprac01-vip.us.oracle.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=aprac02-vip.us.oracle.com)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=VIS12)))

DBC file (JDBC connection) example entry for a 2-instance RAC database:

APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=aprac01-vip.us.oracle.com)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=aprac02-vip.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=VIS12)))
Configuration - Profiles
Concurrent: TM Transport Type can be set to Pipe or Queue. Pipes are more efficient but require a Transaction Manager to be running on each database instance.
Configuration - Profiles
Concurrent: PCP Instance Check is used by managers for failover during an instance failure.
Concurrent processing provides database instance-sensitive failover capabilities. When an instance is down, all managers connected to it switch to a secondary concurrent node. When this profile option is set to OFF, Parallel Concurrent Processing does not provide database instance failover support, and managers restart on the same node.
Can be set to OFF or ON:
- OFF: Managers do not fail over to the secondary concurrent node when the DB instance they are connected to fails. In this case the load-balanced alias (CP_TWOTASK) configured by AutoConfig allows the managers to connect to a surviving instance while still running on their current concurrent node.
- ON: Managers fail over to the secondary concurrent node when the DB instance they are connected to fails.
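The OFF/ON behavior can be summarized as a small decision sketch (the profile value and the simulated failure flag below are illustrative, not read from a live system):

```shell
# Sketch of the failover decision driven by Concurrent: PCP Instance Check.
pcp_instance_check=OFF   # profile value: ON or OFF
instance_down=yes        # simulated: the DB instance this manager used failed

if [ "$instance_down" = yes ]; then
  if [ "$pcp_instance_check" = ON ]; then
    action="fail over to secondary concurrent node"
  else
    action="restart on current node; reconnect via CP_TWOTASK load-balanced alias"
  fi
  echo "manager action: $action"
fi
```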
Reviver log:
[ Wed Aug 10 14:52:16 EDT 2011 ] - reviver.sh starting up...
[ Wed Aug 10 14:52:46 EDT 2011 ] - Attempting database connection...
[ Wed Aug 10 14:52:47 EDT 2011 ] - Successful database connection.
[ Wed Aug 10 14:52:48 EDT 2011 ] - Looking for a running ICM process...
[ Wed Aug 10 14:52:49 EDT 2011 ] - ICM now running, reviver.sh complete.
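The log above reflects the reviver's basic loop: retry the database connection, then make sure an ICM is running. A simplified sketch of that control flow (the stub functions are illustrative; the real reviver.sh ships with EBS and does considerably more):

```shell
# Simplified reviver-style control flow.
try_db_connection() {
  # Stub: pretend the database is reachable again on the first try.
  return 0
}

until try_db_connection; do
  echo "database not reachable; retrying in 30s"
  sleep 30
done
echo "Successful database connection."

# Look for a running ICM and start one if needed (stubbed here).
echo "ICM now running, reviver.sh complete."
```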
Instance/Node Affinity
With R12.1, Concurrent requests can be directed to a specific database instance or node on a per-program basis. Concurrent / Program / Define => Session Control => Target Instance
PCP/RAC - Failover/Failback
In a RAC/PCP environment, failover can occur in two cases:
- Node failure: node failure in a RAC environment works the same as discussed earlier.
- Instance failure: the ICM treats instance failures the same as node failures. When the instance the managers are connected to goes down, the managers migrate to the other node.
If you wish to handle instance failover separately, you can set the profile option Concurrent: PCP Instance Check to OFF.
It is recommended to set this profile to OFF in a RAC environment, since managers migrating to the secondary node on an instance failure can overload the secondary node.
ICM log:
01-SEP-2011 12:23:18 The ICM has lost its database connection and is shutting down.
Spawned reviver process 1868.
AFPGMG-- 01-SEP-2011 12:24:19 Manager process: spid=(30878), cpid=(1526565), ORA pid=(97) manager=(0/0)
Received lock, set mgralive=N
Adding Node:(APRAC01), Instance:(V1204R2) to unavailable list
Adding Node:(APRAC01), to unavailable list
Managers fail over to the secondary node: the ICM is started by the reviver on the same node, and the Standard Manager fails over to secondary node APRAC02.
USER_CONCURRENT_QUEUE_NAME   OS_PROCESS_ID   NODE_NAME
Standard Manager             21759           APRAC02
Standard Manager             21760           APRAC02
Internal Manager             2259            APRAC01
Standard Manager             20587           APRAC02
Common Issues
Concurrent managers not coming up on some nodes:
- Confirm that all CP nodes are pingable from the node where adcmctl.sh is executed.
- Confirm that the application listener is running on each CP node.
ICM doesn't fail over after node failure:
- Confirm that the Internal Monitor is running on the surviving node(s).
- See: Concurrent Processing - Failover Of Concurrent Manager Processes Takes More than 30 Minutes (Doc ID 551895.1).
Managers shut down and don't fail back after the failed node is up:
- For managers to fail back, the application listener must be running so the service manager can be spawned. If the node is pingable but the application listener is not running, managers on the secondary node will shut down but not fail back to the primary node.
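The first two checks can be scripted as a pre-flight pass over the CP nodes. A sketch (node names follow this presentation's APRAC01/APRAC02 examples; the live commands are left as comments because the listener name and network setup vary by environment):

```shell
# Pre-flight checks before starting managers with adcmctl.sh.
checked=0
for node in aprac01 aprac02; do
  echo "checking node: $node"
  # On a live system:
  #   ping -c 1 "$node"          # is the node reachable?
  #   lsnrctl status <listener>  # is the application listener up on it?
  checked=$((checked + 1))
done
echo "nodes checked: $checked"
```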
Helpful Notes
Note 388577.1 - Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
Note 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
Note 743716.1 - How to Setup and Test Failover of PCP on Non-RAC Environments
Note 790624.1 - Concurrent managers not running/failing over to correct node in PCP/RAC environment
Note 1129203.1 - How to run a concurrent program against a specific RAC instance with PCP/RAC setup?
Note 279156.1 - RAC Configuration Setup For Running MRP Planning, APS Planning, and Data Collection Processes
Q&A
THANK YOU for attending our Advisor Webcast!