

Copyright 2011, Oracle and/or its affiliates. All rights reserved.

My Oracle Support Advisor Webcast Program

EBS RAC & Parallel Concurrent Processing (PCP)


Wednesday, September 14, 2011, 06:00 PM CET (GMT+2)
Teleconference Access:
  North America: 1866 230 1938
  International (UK): +44 (0) 1452 562 665
  Conference ID: 86579424

Upcoming Webcasts in the EBS Technology area:
  o October 2011: EBS Concurrent Processing Best Practices

Do you have any requests for future EBS Technology webcast events? Please email your suggestions to ruediger.ziegler@oracle.com, subject: Topics of Interest.


AGENDA
Presentation and Demo: approximately 60-75 minutes
Q&A Session: maximum 15 minutes
Web attendees can ask questions via the Q&A panel
Phone attendees can ask questions via the Q&A panel or phone

Who to ask?

Screenshot: the WebEx Q&A panel - type your question in the panel and send it; the answer pops up in the same panel.


ATTENTION AUDIO INFORMATION


Voice streaming / audio broadcast is available. For full audio access, please join the telephone conference.

Teleconference connect details:
  1. Conference ID: 86579424
  2. International dial-in: +44 (0) 1452 562 665
  3. A list of national toll-free numbers is available in Note 1342342.1

Note: You can view this information at any time from the WebEx menu in your WebEx session: select Communicate --> Join Teleconference.



Safe Harbor Statement


The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


Concurrent Processing & RAC


Presenters: Pieter Breugelmans (Belgium) - EMEA session / Maya Atmaram (US) - US session
Experts on the session: Martin Fritz (Germany) / Ramon Urdiales (Spain) / Paul Ferguson (US) / Christina Clark (US)



Agenda

RAC
  Introduction
PCP
  Introduction
  Configuration
  Failover/Failback
PCP with RAC
  Load Balancing
  Profiles
  Instance/Node Affinity
  Failover/Failback
Common Issues and Troubleshooting




Real Application Clusters - definition

o Multiple instances of Oracle running on many nodes
o All instances share a single physical database
o All instances have common data, control, and initialization files
o Each instance has individual log files and individual rollback segments or undo tablespaces
o All instances can simultaneously execute transactions against the single database
o Caches are synchronized using Oracle's Global Cache Management technology (Cache Fusion)
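As a quick sanity check in a RAC environment, the participating instances can be listed from any node. A minimal SQL*Plus sketch using the standard GV$ views (no EBS-specific objects assumed):

  -- List all instances of the clustered database and their status
  select inst_id, instance_name, host_name, status
  from   gv$instance
  order  by inst_id;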

Client Connections

Diagram: clients connect to the Applications tier (R12 Web, CP, and Admin tiers), which connects to the Database tier (10g/11g RAC, one Oracle database); the RAC nodes communicate over a high-speed interconnect.


Diagram: the same topology with a local director (load balancer) between the clients and multiple Applications tiers (R12 Web, CP, and Admin tiers), all connecting to the same 10g/11g RAC Database tier.



Parallel Concurrent Processing


PCP is the method for configuring the Concurrent Manager in a multi-tier environment with two or more concurrent processing nodes.
It allows the concurrent processing load to be distributed across the nodes and provides high availability in case of node failure.
When a concurrent node goes down, its managers migrate to a surviving node (failover) and migrate back when the failed node returns (failback).
In a RAC environment, failover and failback occur on instance and/or node failure.
Each node running concurrent managers may or may not also run an Oracle instance.
The concurrent managers connect via SQL*Net to the database using the TNS alias specified by TWO_TASK in adcmctl.sh and gsmstart.sh on each concurrent node.
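The nodes registered with E-Business Suite, and whether they support concurrent processing, can be checked directly in FND_NODES. A minimal sketch (standard columns; run as the APPS user):

  -- Application nodes registered with EBS and their concurrent processing flag
  select node_name, support_cp, status
  from   fnd_nodes
  order  by node_name;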


Parallel Concurrent Processing


Each service/manager may have a primary and a secondary node. Initially, a concurrent manager is started on its primary node.
In case of node failure, all concurrent managers on that node migrate to their respective secondary nodes: when the primary node fails, the ICM restarts each manager on its secondary node.
Managers with no primary node assignment are assigned a default target node; in general this is the node where the ICM is currently running.
If the ICM's node fails, an Internal Monitor on a surviving node can spawn a new ICM on that node.
Services/managers move back to their primary nodes when those nodes come back up.
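The primary, secondary, and current target node of each manager are held in the manager definitions. A sketch against the standard FND_CONCURRENT_QUEUES columns (the IN-list is only an illustrative filter):

  -- Primary/secondary node assignments and the node each manager currently targets
  select concurrent_queue_name,
         node_name   primary_node,
         node_name2  secondary_node,
         target_node
  from   fnd_concurrent_queues
  where  concurrent_queue_name in ('FNDICM', 'STANDARD')
  order  by concurrent_queue_name;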


ICM and Parallel Concurrent Processing


The Internal Concurrent Manager (ICM) monitors, activates, and deactivates all managers.
The ICM migrates managers during node and/or instance failures and must be active for failover/failback to work.
The ICM uses the Service Manager (FNDSM) to spawn and terminate all concurrent manager processes, and to manage GSM services such as the Workflow Mailer, Output Post Processor, etc.
The ICM contacts the APPS TNS listener on each local and remote concurrent processing node to start the Service Manager on that node.
The ICM will not attempt to start a Service Manager if it is unable to TNS ping the APPS TNS listener.
One Service Manager is defined for each application node registered in FND_NODES.
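One way to confirm that a Service Manager is defined for each registered node is to look at the seeded FNDSM queues. A sketch; the FNDSM% name pattern is an assumption based on the standard seed data:

  -- Service Manager (FNDSM) definitions and the node each one targets
  select concurrent_queue_name, target_node, enabled_flag
  from   fnd_concurrent_queues
  where  concurrent_queue_name like 'FNDSM%'
  order  by concurrent_queue_name;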


Service Manager and PCP


The Service Manager (FNDSM process) is used to manage services/managers on each concurrent node. It is a requirement in all concurrent processing environments and is therefore an integral part of PCP; PCP cannot be implemented without the Service Manager.
The Service Manager is spawned from the APPS TNS listener.
The APPS TNS listener must be started on every application node in the system, by the same user that starts the ICM (e.g. applmgr).
The TNS listener spawns the Service Manager to run as an agent of the ICM for the local node.
The Service Manager is started by the ICM on demand: if no management actions are needed on a node, the Service Manager is not started there until necessary. When the ICM exits, its Service Managers exit as well.
The Service Manager environment is set by gsmstart.sh and APPSORA.env as defined in listener.ora.

Internal Monitors and PCP


The only function of the Internal Monitor (FNDIMON process) is to check whether the ICM is running and to restart a failed ICM on the local node.

Internal Monitors are seeded on every registered node by default by AutoConfig, but they are deactivated by default.

Activate the Internal Monitor on each concurrent node where the ICM should be able to start in case of a failure.

If the ICM goes down, the Internal Monitor attempts to start a new ICM on the local node. If multiple ICMs are started, only the first stays active; the others gracefully exit.

Example - ICM node down, Internal Monitor log on the surviving node:
  Internal Concurrent Manager have been started on Target APRAC02.
  Internal Concurrent Manager have been started by IM.
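Which Internal Monitors are defined, for which nodes, and whether they are enabled can be checked from the manager definitions. A sketch using the translated names (the 'Internal Monitor%' pattern is an assumption about the seeded naming):

  -- Internal Monitor definitions, their target nodes, and whether they are enabled
  select user_concurrent_queue_name, target_node, enabled_flag, max_processes
  from   fnd_concurrent_queues_vl
  where  user_concurrent_queue_name like 'Internal Monitor%'
  order  by user_concurrent_queue_name;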

GSM PCP Overview


Diagram: two concurrent processing nodes (Node 1 and Node 2), each running an APPS TNS listener, a Service Manager, an Internal Monitor, a Standard Manager, and GSM services such as the Workflow Notification Mailer and Output Post Processor; the ICM runs on one of the nodes.

1. The ICM contacts the TNS listener.
2. The TNS listener spawns the Service Manager.
3. The ICM communicates with the Service Manager.
4. The Service Manager spawns the manager and service processes.
5. If the ICM crashes ...
6. ... the Internal Monitor spawns an ICM locally when it detects the ICM is down.



Configuration - Apps TNS Listener


The Apps listener should be running on each concurrent node for the Service Manager to be spawned.

Concurrent node: $TNS_ADMIN/listener.ora - example entries for FNDSM (the envs string sets the environment variables for FNDSM):

  Listener address entry:
    APPS_VIS12 =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = aprac01)(PORT = 1626)))

  FNDSM entry in the listener's SID list:
    (SID_DESC =
      (SID_NAME = FNDSM)
      (ORACLE_HOME = /u01/oracle/VIS12/apps/tech_st/10.1.2)
      (PROGRAM = /u01/oracle/VIS12/apps/apps_st/appl/fnd/12.0.0/bin/FNDSM)
      (envs = 'MYAPPSORA=/u01/oracle/VIS12/apps/apps_st/appl/APPSVIS12_aprac01.env,PATH=/usr/bin:/usr/ccs/bin:/bin,FNDSM_SCRIPT=/u01/oracle/VIS12/inst/apps/VIS12_aprac01/admin/scripts/gsmstart.sh'))



Configuration - Apps TNS Listener


Concurrent node: $TNS_ADMIN/tnsnames.ora

AutoConfig creates two entries for each node:
  FNDSM_[hostname]_[service]
  FNDSM_[hostname.domain]_[service]

Example FNDSM entries for two nodes:

  FNDSM_APRAC01_VIS12 =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = tcp)(HOST = aprac01.us.oracle.com)(PORT = 1626))
      (CONNECT_DATA = (SID = FNDSM)))

  FNDSM_APRAC02_VIS12 =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = tcp)(HOST = aprac02.us.oracle.com)(PORT = 1626))
      (CONNECT_DATA = (SID = FNDSM)))


Configuration - Node Definition


Navigation (screenshots):
  Concurrent / Manager / Define
  Concurrent / Manager / Administer


Configuration - Failover Sensitive Workshifts


During node failure, the managers connected to the failed node migrate to their secondary node, which can overload that node. The Failover Sensitivity feature allows managers to fail over with fewer processes than they ran on the original node.

The value entered on the work shift (shown in the screenshot) determines the number of processes that run when the Standard Manager fails over to its secondary node.



Failover - Node Failure


Node 2 down - ping fails:
  C:\Documents and Settings\matmaram>ping aprac02
  Pinging aprac02.us.oracle.com [xx.xxx.xxx.xx] with 32 bytes of data:
  Request timed out.

ICM log:
  AFPGMG-11-AUG-2011 11:10:29
  Manager process: spid=(7724), cpid=(1464742), ORA pid=(48) manager=(401/10)
  Received lock, set mgralive=N
  Adding Node:(APRAC02), to unavailable list

Managers fail over to the surviving node: Inv Remote Procedure Mgr and MRP Mgr move to APRAC01.


Failback - Node failure


Node 2 up - ping succeeds:
  C:\Documents and Settings\matmaram>ping aprac02
  Pinging aprac02.us.oracle.com [xx.xxx.xxx.xx] with 32 bytes of data:
  Reply from xx.xxx.xxx.xx: bytes=32 time=1ms TTL=61

ICM log - node and Service Manager up:
  Process monitor session started : 11-AUG-2011 11:18:46
  Queue size posting session started : 11-AUG-2011 11:18:46
  Node list before Update / Node list after Update:
    Node=APRAC01, Inst=, SM Name=, Up=1
    Node=APRAC02, Inst=, SM Name=, Up=1

Managers fail back to the primary node: Inv Remote Procedure Mgr and MRP Mgr fail back to APRAC02.


PCP and RAC Setup


Diagram: two RAC database nodes (Node 1 running instance RAC1, Node 2 running instance RAC2) and two CP nodes, each with TWO_TASK set to the load-balanced alias RAC_BALANCE.

The CP TWO_TASK is set in the context file (CP_TWOTASK).
In a RAC environment, CP_TWOTASK is set to a load-balanced alias.
If a RAC instance goes down, the CP processes connected to the failed instance terminate, are restarted by the ICM, and connect to a surviving instance using the load-balanced alias.
If a CP node fails, the CP processes running on the failed node are marked terminated and are migrated to a surviving node by the ICM.



RAC and PCP - Load balancing DB Connections


The CP_TWOTASK and jdbc_url variables set in the context file point to load-balanced entries in a RAC environment.
adcmctl.sh and gsmstart.sh inherit the value of TWO_TASK from CP_TWOTASK; the DBC file inherits the value of APPS_JDBC_URL from jdbc_url.
The load-balanced entries are created in tnsnames.ora (SQL*Net connections) and in the DBC file (JDBC connections) on each node by AutoConfig.
The processes on a concurrent node connect to either RAC instance using the load-balanced entries:
  Concurrent managers (e.g. Standard Manager) use TWO_TASK for their database connection.
  Java-based managers (OPP, Workflow services) use APPS_JDBC_URL for their database connection.
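With the load-balanced entries in place, the spread of APPS sessions across the RAC instances can be observed from the database. A minimal sketch (standard GV$ views; filtering on the APPS username is the usual EBS convention):

  -- Number of APPS sessions currently on each RAC instance
  select inst_id, count(*) as apps_sessions
  from   gv$session
  where  username = 'APPS'
  group  by inst_id
  order  by inst_id;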


Configuration - Load balancing DB Connections


tnsnames.ora (SQL*Net connection): the load-balanced alias is named <database_service_name>_BALANCE.

Example entry for a two-instance RAC database:

  VIS12_BALANCE =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (LOAD_BALANCE = YES)
        (FAILOVER = YES)
        (ADDRESS = (PROTOCOL = tcp)(HOST = aprac01-vip.us.oracle.com)(PORT = 1521))
        (ADDRESS = (PROTOCOL = tcp)(HOST = aprac02-vip.us.oracle.com)(PORT = 1521)))
      (CONNECT_DATA = (SERVICE_NAME = VIS12)))

DBC file (JDBC connection) - example entry for a two-instance RAC database:

  APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=aprac01-vip.us.oracle.com)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=aprac02-vip.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=VIS12)))
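A quick way to verify the load-balanced alias is to connect through it a few times and see which instance answers. A sketch assuming the VIS12_BALANCE alias above, e.g. from sqlplus apps@VIS12_BALANCE:

  -- Shows which RAC instance this particular connection landed on;
  -- reconnect a few times to confirm both instances are being used
  select instance_name, host_name
  from   v$instance;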



Configuration - Profiles
Concurrent: TM Transport Type

Used by Transaction Managers to communicate with the application session.

Can be set to PIPE or QUEUE:
  PIPE: the TM uses dbms_pipe. Pipes cannot communicate across instances, so a 1:1 ratio of Transaction Managers to RAC instances is required.
  QUEUE: the TM uses Advanced Queuing (AQ) and can communicate across instances.

Pipes are more efficient, but require a Transaction Manager to be running against each database instance.


Configuration - Profiles
Concurrent: PCP Instance Check

Used by managers for failover during instance failure. Concurrent processing provides database instance-sensitive failover capabilities: when an instance is down, all managers connected to it switch to a secondary concurrent node.

Can be set to ON or OFF:
  ON: managers fail over to their secondary concurrent node when the database instance they are connected to fails.
  OFF: Parallel Concurrent Processing does not provide database instance failover support; managers do not fail over when the database instance they are connected to fails. Instead they restart on the same concurrent node, and the load-balanced alias (CP_TWOTASK) configured by AutoConfig lets them connect to a surviving instance while still running on their current node.
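The current site-level value of this profile (the same query works for Concurrent: TM Transport Type) can be read from the profile tables. A sketch using the standard FND profile views; level_id 10001 is the Site level:

  -- Site-level value of the 'Concurrent: PCP Instance Check' profile option
  select o.user_profile_option_name, v.profile_option_value
  from   fnd_profile_options_vl    o,
         fnd_profile_option_values v
  where  o.user_profile_option_name = 'Concurrent: PCP Instance Check'
  and    v.profile_option_id        = o.profile_option_id
  and    v.level_id                 = 10001;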

Reviver - Network Failure


Provides automatic recovery from network failure.
The reviver is spawned by the ICM when the ICM loses its database connection, as reported by ORA-03113 and ORA-03114 errors.
The reviver attempts to connect to the database until it succeeds, and then restarts the ICM.

It plays an important role in a RAC environment. Example scenario:
  Two DB nodes, each running a RAC instance; two CP nodes configured for PCP and a load-balanced alias.
  The ICM and Internal Monitor are connected to the same instance, and that instance goes down.
  The reviver is spawned and restarts the ICM by connecting to the surviving instance.
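After such an event it is worth confirming where the ICM came back and which instance it is now connected to. A sketch; FNDICM is the internal manager's queue name, and NODE_NAME/DB_INSTANCE are the columns shown in the failover examples later in this deck:

  -- Node and database instance of the currently active Internal Concurrent Manager
  select q.concurrent_queue_name, p.os_process_id, p.node_name, p.db_instance
  from   fnd_concurrent_queues    q,
         fnd_concurrent_processes p
  where  q.concurrent_queue_name = 'FNDICM'
  and    p.concurrent_queue_id   = q.concurrent_queue_id
  and    p.queue_application_id  = q.application_id
  and    p.process_status_code   = 'A';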


Reviver - Network Failure


DB instance with the ICM connection aborted - DB connectivity error reported in the ICM log:
  10-AUG-2011 14:52:16
  The ICM has lost its database connection and is shutting down.
  Spawning reviver process to restart the ICM when the database becomes available again.
  Spawned reviver process 9209.
  List of errors encountered:
  .............................................................................
  Routine AFPCMT encountered an ORACLE error. ORA-03114: not connected to ORACLE

Reviver log:
  [ Wed Aug 10 14:52:16 EDT 2011 ] - reviver.sh starting up...
  [ Wed Aug 10 14:52:46 EDT 2011 ] - Attempting database connection...
  [ Wed Aug 10 14:52:47 EDT 2011 ] - Successful database connection.
  [ Wed Aug 10 14:52:48 EDT 2011 ] - Looking for a running ICM process...
  [ Wed Aug 10 14:52:49 EDT 2011 ] - ICM now running, reviver.sh complete.



Instance/Node Affinity
With R12.1, concurrent requests can be directed to a specific database instance or node on a per-program basis.
Navigation: Concurrent / Program / Define => Session Control => Target Instance



PCP/RAC - Failover/Failback
In a RAC/PCP environment, failover can occur in two cases:
  Node failure: node failure in a RAC environment works the same as discussed earlier.
  Instance failure: the ICM treats instance failures the same as node failures. When the instance the managers are connected to goes down, the managers migrate to the other node.

If you wish to handle instance failure separately, set the profile option Concurrent: PCP Instance Check to OFF.
It is recommended to set this profile to OFF in a RAC environment, because managers migrating to their secondary node on instance failure can overload the secondary node.


Failover - Instance Failure


USER_CONCURRENT_QUEUE_NAME  OS_PROCESS_ID  NODE_NAME  SQLNET_STRING  DB_INSTANCE
Internal Manager            26725          APRAC01    VIS12_BALANCE  V1204R2
Standard Manager            30875          APRAC01    VIS12_BALANCE  V1204R1
Standard Manager            30878          APRAC01    VIS12_BALANCE  V1204R2
Standard Manager            30882          APRAC01    VIS12_BALANCE  V1204R1

Instance V1204R2 is aborted. ICM log:
  01-SEP-2011 12:23:18
  The ICM has lost its database connection and is shutting down.
  Spawned reviver process 1868.
  AFPGMG-- 01-SEP-2011 12:24:19
  Manager process: spid=(30878), cpid=(1526565), ORA pid=(97) manager=(0/0)
  Received lock, set mgralive=N
  Adding Node:(APRAC01), Instance:(V1204R2) to unavailable list
  Adding Node:(APRAC01), to unavailable list

Managers fail over to the secondary node: the ICM is restarted by the reviver on the same node, and the Standard Manager fails over to the secondary node APRAC02.

USER_CONCURRENT_QUEUE_NAME  OS_PROCESS_ID  NODE_NAME  SQLNET_STRING  DB_INSTANCE
Standard Manager            21759          APRAC02    VIS12_BALANCE  V1204R1
Standard Manager            21760          APRAC02    VIS12_BALANCE  V1204R1
Internal Manager            2259           APRAC01    VIS12_BALANCE  V1204R1
Standard Manager            20587          APRAC02    VIS12_BALANCE  V1204R1
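Listings like the two above are typically produced by joining the active manager processes to their queue definitions. A sketch of such a query; the SQLNET_STRING and DB_INSTANCE column names are inferred from the output shown:

  -- Active manager processes, the node they run on, and the instance they are connected to
  select q.user_concurrent_queue_name,
         p.os_process_id,
         p.node_name,
         p.sqlnet_string,
         p.db_instance
  from   fnd_concurrent_queues_vl q,
         fnd_concurrent_processes p
  where  p.concurrent_queue_id   = q.concurrent_queue_id
  and    p.queue_application_id  = q.application_id
  and    p.process_status_code   = 'A'
  order  by q.user_concurrent_queue_name, p.node_name;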


Common Issues
Concurrent managers not coming up on some nodes:
  Confirm that all CP nodes are pingable from the node where adcmctl.sh is executed.
  Confirm that the application listener is running on each CP node.
ICM doesn't fail over after node failure:
  Confirm that the Internal Monitor is running on the surviving node(s).
  See: Concurrent Processing - Failover Of Concurrent Manager Processes Takes More than 30 Minutes (Doc ID 551895.1).
Managers shut down and don't fail back after the failed node is up:
  For managers to fail back, the application listener must be running so the Service Manager can be spawned.
  If the node is pingable but the application listener is not running, managers on the secondary node will shut down but not fail back to the primary node.
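When managers are not coming up on a node, a first check is where each manager is currently targeted and whether a control action is pending. A rough sketch (CONTROL_CODE holds internal one-letter status codes; interpreting them is release-specific, so treat this only as a pointer):

  -- Current target node, process counts, and pending control action for each enabled manager
  select concurrent_queue_name,
         node_name   primary_node,
         node_name2  secondary_node,
         target_node,
         control_code,
         running_processes,
         max_processes
  from   fnd_concurrent_queues
  where  enabled_flag = 'Y'
  order  by concurrent_queue_name;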

Helpful Notes
Note 388577.1 - Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
Note 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
Note 743716.1 - How to Setup and Test Failover of PCP on Non-RAC Environments
Note 790624.1 - Concurrent managers not running/failing over to correct node in PCP/RAC environment
Note 1129203.1 - How to run a concurrent program against a specific RAC instance with PCP/RAC setup?
Note 279156.1 - RAC Configuration Setup For Running MRP Planning, APS Planning, and Data Collection Processes


Applications Technology Group Webcasts


October 2011: E-Business Suite Concurrent Processing Performance

Additional webcasts are planned; they will be promoted soon.

For complete details on all upcoming Oracle Advisor Webcast events, please see Note 740966.1, Oracle Advisor Webcast Schedule. For EBS Technology specific webcasts, please check Note 1186338.1.
Do you have any requests for future ATG Advisor Webcast events? Please email your suggestions to ruediger.ziegler@oracle.com, subject: Topics of Interest.

Communities in My Oracle Support


ATG Communities in MOS available


The following communities are available in My Oracle Support:
  Oracle E-Business Suite
  ...
  BI Publisher - Business Intelligence products (not only EBS)
  Core Concurrent Processing - anything around Concurrent Processing and the Concurrent Managers
  Core Workflow - any Workflow issue, not only E-Business Suite
  Diagnostic Tools - anything around EBS Diagnostics
  E-Business Customizations - your customizations
  Installation - fresh install of the E-Business Suite
  Patch Review EBS - review of patches around the E-Business Suite
  Performance - EBS performance
  Upgrade - EBS upgrade
  User Productivity Kit - User Productivity Kit (UPK) available for the E-Business Suite
  Utilities - utilities / generic EBS DBA issues
  ...
This is the current list for the E-Business Suite Applications Technology Group.


EBS ATG Product Information Center Note 1160285.1


Q&A


THANK YOU
for attending our Advisor Webcast!


