Troubleshooting Oracle Applications Performance Issues
Doc ID: Note:169935.1
Type: TROUBLESHOOTING
Last Revision Date: 16-APR-2008
Status: PUBLISHED

Carlos Sierra
Apps Engineering CoE
Oracle Support
February 2002 (updated on 30-JUL-2003)

• Purpose
• Scope & Application
• Problem Description
• Categories of Common Tuning Issues
• Certify
• Statistics Gathering
• Transaction Tuning
• Back-End Tuning
• Tech-Stack Tuning
• Reporting to Oracle
• Related Documents

Purpose
This is a practical guide to troubleshooting Oracle Applications performance issues. It
documents a systematic approach to gathering the information and files usually required
to categorize, research, troubleshoot, and eventually report an Apps performance issue to
Oracle Support. It also documents how to use the CoE scripts.

Scope & Application


This document has been registered as Note:169935.1. The expected audience is
Technical Analysts or DBAs involved in the process of reporting and/or solving an Apps
performance issue. This document assumes the reader has an intermediate to advanced
technical background, with a good understanding of RDBMS concepts, proficiency in
SQL*Plus, and comfort with the OS of the database server.
The main focus of this Note is on Oracle ERP Apps 11i, but references are made to
earlier releases of Oracle Apps (11.0 and 10.7). Most of the techniques and tools
explained can equally be used for earlier releases of Oracle Apps, as well as for the CRM
modules.

This document can be used in several ways: as a checklist to gather the information and
files usually requested by Oracle Support when reporting an Apps performance issue; as
a process that walks the reader through the common steps of troubleshooting
performance issues; or as a reference source for the techniques and tools available to
tune applications.

While following this process, you may solve your performance issue. If not, you will
have gathered the standard information and files required to pursue its resolution. This
document classifies Apps performance issues into one of three categories. Once the
category is determined, it branches into the details of that respective category.

Read this document entirely before starting to act on the steps of the troubleshooting
process.

Problem Description
The first step in troubleshooting any performance issue is to describe it clearly.
Create an electronic file using any text editor, and document the symptoms of the
performance issue in detail. Keep in mind that you may want to use this electronic
document if you later need to report the performance issue to Oracle for further
analysis. Document and differentiate factual information from user perceptions.
You may want to use the templates listed in the Related Documents section at the
bottom of this Note to start gathering your information in a well organized manner;
note that you will first need to categorize your issue.

Your detailed description should include, at least, the following information:

1. What is the application group or module affected by your performance issue?


Examples: 'AP', or 'ONT and QP', or 'all manufacturing modules', 'just this Form',
'this list of transactions', etc.
2. Instance or instances in which it can be observed. Does it happen in Production?
Development? Test? All of them? If it does not happen in all your
environments, document what is different among them. Try to determine if the
difference among them can be isolated. This may narrow the problem to a very
specific area.
3. Determine if the poor performance is equally observed when the system load is
either low or high. What if there is only one active user? Is it equally slow?
Document if the poor performance keeps any relation to the system load, or not at
all.
4. Is the performance issue affecting one user only? If not, document how many
users are affected.
5. If your application is used on multiple locations: Does the location seem to affect
the performance? Do all users in all locations experience the same performance?
Can you state that poor performance has nothing to do with the user location?
6. Can you reproduce the issue systematically? Document how the issue can be
systematically reproduced. If you cannot, document why not. In any case, try to
reproduce it systematically and document your findings.
7. Do you have a workaround available? What is it?
8. When was the last time you could complete the process or transaction without
experiencing poor performance? Document what has changed since then.
9. Determine if any software has been installed or upgraded in your server or
instance since the problem started. Include any RDBMS, Applications or 3rd
party software in your analysis.
10. Document any patches applied recently which could have affected your
performance. Include Operating System patches as well as patches from RDBMS
or Apps.

Categories of common Tuning Issues


The detailed description of the performance issue should be sufficient to categorize it into
one of the following three high-level areas. Determine which category best describes your
issue. Review and document the Certify and Statistics Gathering sections below, then
proceed to the tuning section corresponding to the high-level category.

1. Transaction Tuning: One specific transaction denotes poor performance. It
may be a Form, a Concurrent Program, or a specific operation within a Form (e.g.
the click of a button, saving one row, displaying a list of values). If your
issue affects more than one transaction but just a few of them, it can still be
classified as transaction tuning; you may want to diagnose and document each
issue separately. Transaction Tuning usually translates into SQL Tuning.
2. Back-End Tuning: Overall poor performance is observed, or several
transactions perform poorly, usually from more than one application group. This
category typically includes issues where system load has an impact on the overall
performance, or where performance degraded after a major software upgrade.
Back-End Tuning usually translates into DB Server Tuning and/or OS Tuning.
3. Tech-Stack Tuning: Navigation between Forms or within Forms performs
poorly, affecting several Forms. Opening a Form takes an unusually long time.
Different performance is observed from one location to another. Some users are
affected but not others. The Back-End has been reviewed and ruled out. Tech-Stack
Tuning is also known as Middle-Tier Tuning.

Certify
Use Certify within MetaLink to verify that your specific combination of
Platform/Operating System, RDBMS Release and Oracle Applications Release, is fully
certified. On the main MetaLink 'Certify - Platform Selection' page, select your back-end
server platform. On the 'Product Group Selection' page, select 'E-Business Suite'.
Review the General Notes. Select your combination of Apps Release (Oracle E-Business
Suite Version), RDBMS Release (Oracle Server Version), and 'Certified combinations
only'. Locate your OS Release in the 'Database Tiers Certifications page' and review the
hotlinks under 'Additional Info' and 'Issues' columns. If there are 'Interoperability Notes'
for your implementation, please review them. Under the 'Issues' hot link you may
encounter a list of known Bugs and Issues affecting your particular combination of
OS/RDBMS/Apps Releases. Please review the list of known Bugs and Issues carefully
and document in your electronic file:

1. List of known Bugs and Issues according to your OS/RDBMS/Apps Releases that
may relate to your performance issue.
2. Which of the related Bugs have been applied?
3. Which ones have not yet been applied? Explain why.

Note: Be aware that the list of known Bugs and Issues changes periodically. Therefore,
it is a good practice to review this list of known Bugs and Issues affecting your very
specific Product combination every time you need to troubleshoot a new performance
issue.

Statistics Gathering
• Statistics Gathering in 11i
• Statistics Gathering in 11.0 and 10.7

Oracle Applications 11i and later use the Cost Based Optimizer (CBO) to generate
execution plans for SQL statements. In order to generate optimal execution plans, the
CBO requires updated statistics regarding your data volume and distribution. Statistics
used by the CBO are stored in the data dictionary and can be queried through several
available views. Oracle Applications 10.7 and 11.0 mostly use the Rule Based Optimizer
(RBO); still, a handful of SQL statements from these earlier releases of Oracle Apps
invoke the CBO by using embedded CBO Hints. Therefore, all Oracle Applications
Releases must have current CBO statistics available for the pertinent schema objects
belonging to the installed Apps modules.
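
These dictionary views can be queried directly; for example, a quick check of when stats
were last gathered for a given table (the owner and table name shown are illustrative):

SQL> select table_name, num_rows, last_analyzed
  2    from dba_tables
  3   where owner = 'MRP'
  4     and table_name = 'MRP_FORECAST_DATES';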

There are several tools available to gather CBO statistics. All of them allow you to gather
stats either by reading all the data in the table or index (compute method), or just a
sample of the data (estimate method). For Oracle Applications there is no need to gather
exact stats using the 'compute statistics' method. When gathering CBO stats you should
generally use the 'estimate' method with a sample size of 10%.

Regarding the frequency of the CBO statistics gathering, the right schedule really
depends on your data, based on how often it changes. Typically, gathering CBO statistics
at least once per month, with an estimated sample of 10%, on all pertinent schema objects
belonging to Apps installed modules, has proved to be sufficient for most clients. Some
clients prefer to gather CBO statistics weekly, during a quiet system load interval. Other
clients prefer to optimize the gathering of CBO stats, by scheduling the frequency and
estimate sample percentage according to the current size of their tables.

In any case, if you suspect that your CBO stats may be affecting your poorly performing
process, a more aggressive CBO stats gathering is usually worth attempting. Measure the
performance again after you refresh your CBO statistics for one or more schema objects
accessed by the process. Be aware that prior measurements, Traces, Trace Analyzer and
TKPROF files, may become invalid.

Also keep in mind that when an object has its CBO stats refreshed, all references to that
object in the shared pool or library cache become invalid, requiring a new hard parse for
each SQL statement referencing the object. Therefore, in order to avoid temporarily
affecting the performance of all active users in a production environment, you must
refrain from gathering CBO stats during times of intensive system usage. Always
schedule the CBO stats gathering during periods of time with very low system load, and
when the number of active users is at its minimum. In other words: DO NOT GATHER
CBO STATS WHEN NUMBER OF ACTIVE USERS IS MEDIUM TO HIGH.

Note: Try gathering stats at least once per month and with an estimate sample size of
10% for all your schema objects belonging to Apps. If you can afford gathering stats
every week, do so (it is preferred, but not required). Always gather stats with no active
users, or when system load is very low.

Regarding your CBO statistics gathering, document in your electronic file:

1. Frequency of the CBO stats gathering on the pertinent schema objects accessed by
your process performing poorly (monthly, weekly, daily, etc.)
2. Method used (per table, per schema, all schemas, etc.)
3. Tool used (coe_stats.sql, FND_STATS, DBMS_STATS, ANALYZE
command, DBMS_UTILITY)
4. Estimate sample size percentage used (or if compute method, specify so)
5. When was the last time the CBO stats were gathered for the affected schema
objects?

Note: On 8i, never gather CBO statistics for data dictionary objects owned by user
'SYS'. The only exception is the table DUAL owned by SYS, for which FND_STATS
gathers stats when executed for one or all schemas.

Statistics Gathering in 11i

Of all the tools available, Oracle Apps 11i mandates the exclusive use of the
FND_STATS package to gather CBO stats. This package can be invoked either from
a seeded Concurrent Program, or directly from SQL*Plus. When invoking it through the
seeded Concurrent Programs, use only 'Gather Schema Statistics' or 'Gather Table
Statistics'.

When using the 'Gather Schema Statistics' concurrent program, pass only the schema
name parameter and let the other parameters default automatically. The schema name
passed can be a specific schema, or 'ALL' if you prefer to gather CBO stats for the
pertinent objects belonging to all installed Apps modules, using the estimate method with
a sample size of 10% (default). Be aware this process may take several hours to
complete.

When using the 'Gather Table Statistics' concurrent program, pass only the owner of the
table (schema name) and the table name. Let all other parameters default automatically,
except when the table is partitioned. When gathering CBO stats for a partitioned
table, pass 'PARTITION' in the Granularity parameter; otherwise FND_STATS will
calculate global stats (plus partition stats) instead of rolling up the global stats from the
partitions. If this happens, you may have to delete the global stats (with cascade equal to
'false') and gather the stats at partition granularity to once again enable the automatic
rolling up into the global stats, as sketched below.
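
A hedged sketch of that repair: DBMS_STATS drops only the global stats
(cascade_parts => FALSE preserves the partition-level stats), then FND_STATS
re-gathers at partition granularity so the global stats roll up again. The owner and
table are the same ones used in the examples below:

SQL> begin
  2    -- drop global stats only; keep partition-level stats
  3    dbms_stats.delete_table_stats(ownname => 'APPLSYS',
  4      tabname => 'WF_ITEM_ACTIVITY_STATUSES',
  5      cascade_parts => FALSE);
  6    -- re-gather at partition granularity so global stats roll up
  7    fnd_stats.gather_table_stats(ownname => 'APPLSYS',
  8      tabname => 'WF_ITEM_ACTIVITY_STATUSES',
  9      granularity => 'PARTITION');
 10  end;
 11  /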

To execute the corresponding FND_STATS procedures from SQL*Plus to gather CBO
stats for one or all schemas, or for a particular table, use the following examples:

# sqlplus apps/<apps_pwd>
SQL> exec fnd_stats.gather_schema_statistics('MRP');                 <- One schema
SQL> exec fnd_stats.gather_schema_statistics('ALL');                 <- All schemas
SQL> exec fnd_stats.gather_table_stats('MRP','MRP_FORECAST_DATES');  <- One table
SQL> begin                                                           <- Partitioned table
  2    fnd_stats.gather_table_stats(ownname => 'APPLSYS',
  3      tabname => 'WF_ITEM_ACTIVITY_STATUSES',
  4      granularity => 'PARTITION');
  5  end;
  6  /

There is also a public script available that may help you automate the CBO statistics
gathering. This is the coe_stats.sql script, included and documented in
Note:156968.1. The coe_stats.sql script uses the FND_STATS package and
schedules the CBO stats gathering according to table sizes. It uses the estimate method
with a variable sample size based also on table size. It gathers stats more accurately for
small tables (larger sample percent) and uses a smaller sample size percent for larger
tables. The main benefit of this script is that it has the potential to reduce the overall
CBO stats gathering execution time without sacrificing stats accuracy. If your total CBO
stats gathering time for all pertinent schema objects is acceptable using plain
FND_STATS procedures, then there is no need to explore using the coe_stats.sql
script. On the other hand, if yours is a 24x7 operation and you need to minimize the
window to gather CBO stats, consider using the coe_stats.sql script.

If you are not confident of the current status of your overall CBO stats in your Apps 11i
instance, download and execute the bde_last_analyzed.sql script from
Note:163208.1. This script reports the current status of the CBO stats for all schema
objects owned by installed Apps modules. It summarizes, by schema name (application
group) and by date, where at least one schema object had its stats refreshed. It warns you
of possible erroneous stats gathering on schema objects owned by SYS. It also reports
and warns on partitioned tables whose global and partition-level stats are out of sync.
You can execute this bde_last_analyzed.sql script at any time. By reviewing
the summary page at the beginning of the report produced, you can quickly get the status
of the CBO stats on your instance.

Statistics Gathering in 11.0 and 10.7

For earlier Releases of Oracle Applications you can use any tool available to gather CBO
statistics. The recommended tool is the DBMS_STATS package, documented in the
corresponding 'Supplied PL/SQL Packages Reference' manual for your RDBMS
Release. The FND_STATS package did not exist on 11.0 or 10.7, but it is available
through a back-port patch for Bug 1268797. If you do not have FND_STATS
installed on your 11.0 or 10.7 instance, use DBMS_STATS instead.

Besides the DBMS_STATS package, the ANALYZE command documented in the 'SQL
Reference' manual corresponding to your RDBMS Release can also be used on 11.0 and
10.7, as well as the DBMS_UTILITY package.
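
As a minimal sketch of gathering schema stats with DBMS_STATS at the recommended
10% sample (the schema name is illustrative; cascade => TRUE also gathers the index
stats):

SQL> begin
  2    dbms_stats.gather_schema_stats(ownname => 'AP',
  3      estimate_percent => 10,
  4      cascade => TRUE);
  5  end;
  6  /
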
Transaction Tuning
• Data Gathering
• Researching
• Initial Troubleshooting
• Apps 11i init.ora parameters
• Advanced Troubleshooting

Once you have validated Certify and Statistics Gathering, proceed with this next
section if your performance issue qualifies as this high-level tuning category.

Data Gathering (files and information)

Gather the indicated files and document in your electronic file the following
information:

1. How long does it take for the process to complete (specify hours/mins/secs)?
2. How long did it previously take (before having the performance issue)?
3. What is the expected performance for this process (specify hours/mins/secs)?
4. If you have any customization related to this process, document it in your
electronic file, in full detail.
5. Determine module information requested below, for specific Form, Report,
Package, Main Pro*C, etc.
1. Module internal name (short name): For a Form, use the menu to find the
short name and version. For a concurrent program, use the 'Concurrent
Program Define' Form from the SYSADMIN responsibility to find short
and long name.
2. Module title (descriptive or long name): For a Form, use the navigation
path or the title displayed in the Form.
3. Module version: For a concurrent program on UNIX, navigate to the module's
directory (e.g. $AP_TOP/reports or $PO_TOP/bin) and use the strings command:

# strings -a <module> | grep -i '$Header'

4. Latest version available according to MetaLink: Use the 'Patches' option on the
MetaLink main menu.
5. Current patchset level for application group owning the affected module:
Ask your System Administrator or DBA.
6. Latest patchset available according to MetaLink for the application group:
Use 'Patches' option on MetaLink main menu.

Note: If you find that you have an old version of the module, be aware that quite
often a newer version of it may fix a known performance issue. This may be true
even if you don't find a direct hit in the list of issues fixed between your version
and the latest version. If possible, upgrade the affected module to the latest
version available. Also be aware that in many cases, due to intricate file
dependencies, Oracle Development requires you have the latest version of the
module applied, in order to troubleshoot a new issue.

6. When the affected module is a Pro*C program, generate a text file with the list of
modules (components) linked to the Pro*C program. Use this or a similar
command:

# strings -a <Pro*C module> | grep -i '$Header' > MODULES.TXT

7. On a multi-org environment, determine the ORG_ID used while monitoring the
performance of the affected module. This ORG_ID or Operating Unit is usually
required to later execute an isolated SQL statement performing poorly, using
SQL*Plus. If you don't know the ORG_ID, use this command to produce a list of
Operating Units, and determine the ORG_ID from the list:

SQL> select organization_id org_id, name from hr_operating_units;

8. If your transaction performing poorly is a Concurrent Program, determine if your
application group provides a Profile Option to turn some sort of debugging
information ON and OFF. If it does, execute your Concurrent Program with
debugging information turned ON and recover the Log file generated (this is in
addition to the Trace file requested below). Review your Log file.
9. If yours is a Concurrent Program, document in your electronic file all the
parameters requested by the execution of your process, indicating which values
you passed, including which values were left NULL, or automatically defaulted.
Include all parameter names and values. Keep in mind that the number, name, or
order of the parameters may change from one version of the module to the next.
10. For online transactions, document in your electronic file, in full detail, the
complete navigation path, as well as step-by-step instructions to reproduce your
issue. If the transaction performing poorly is a query, document which
parameters are passed to the query, as well as, which parameters are NULL, or
automatically defaulted. Be as specific as possible.
11. If your transaction has been running for several hours or days, and you did not
turn SQL Trace ON at the beginning of it, you can still capture the expensive SQL
statements for further analysis. Download, familiarize yourself with, and execute
the bde_session.sql script (Note:169630.1) and/or the SQLAREAT.SQL
script (Note:238684.1 for 8.1; for 8.0 use coe_sqlarea_80.sql from
Note:163209.1). These scripts can be executed even a few minutes after the
process has been killed. The former requires the session_id; the latter has no
parameters.
12. All Apps Transaction Tuning issues require, at the very least, one raw SQL
Trace. If tracing a Form, turn Trace ON using the menu (Help → Diagnostics →
Trace → Trace with Binds and Waits), and set the trace size to unlimited (Help →
Diagnostics → Trace → Unlimited Trace File Size). If tracing a Concurrent
Program, use whatever method is available for your Apps Release (usually a
check box at the Concurrent Program Define Form). Under some specific
conditions, Oracle Development requires a raw SQL Trace generated with Event
10046 (Trace with Binds and Waits). If you can provide that special Trace up
front, it helps to expedite the whole process. Once you generate the raw SQL
Trace (standard for Concurrent Programs, or with Event 10046 for any Form),
compress it and have it available in case you have to provide it to Oracle Support.
This file is usually large. Traces are found in the 'user_dump_dest' directory.

Note: It is common to iterate in this step because the raw SQL Trace generated does not
have statistics turned ON, or because it was truncated due to its size. To avoid this
unnecessary loop in the process, verify these two init.ora parameter settings:
'timed_statistics' set to TRUE, and 'max_dump_file_size' set to UNLIMITED. On the
other hand, if your process 'hangs' and you killed it, be aware that even an incomplete
raw SQL Trace may be useful to pin-point the SQL statement(s) performing poorly.
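
If you end up isolating the statement in your own SQL*Plus session, a minimal sketch
for producing a binds-and-waits (Event 10046, level 12) trace, with both parameters
above set at the session level:

SQL> alter session set timed_statistics = true;
SQL> alter session set max_dump_file_size = unlimited;
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> -- execute the transaction or isolated statement here
SQL> alter session set events '10046 trace name context off';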

13. If you created a Trace with Binds and Waits, use the Trace Analyzer tool
(Note:224270.1) to analyze your Trace. This tool, when used on the same db
instance where the Trace was generated, produces a comprehensive report that
can be used to identify expensive SQL in terms of service and/or wait times. For
example:

# sqlplus apps/<apps_pwd>
SQL> START TRCANLZR.sql UDUMP prod_ora_9105.trc;

14. For all Apps Transaction Tuning issues, generate and be ready to provide to
Oracle Support one TKPROF file generated with the Explain Plan option.
TKPROF reports must be generated on the same db instance where the raw SQL
Trace was created. If you had to kill the process because it never completed, don't
destroy the raw SQL Trace generated; proceed to create a TKPROF from it as
well. To generate one TKPROF, unsorted, and with Explain Plan, use this
syntax:

# tkprof 12345.trc 12345.prf explain=apps/<apps_pwd>

Note: Read the TKPROF and determine if it corresponds to the transaction
performing poorly. Providing Oracle a TKPROF which does not correspond to
the problematic transaction is very common, and it causes unnecessary delays in
the resolution of performance issues. In the TKPROF, you should recognize table
names and possibly the transaction. The total TKPROF Elapsed Time should be
close to the 'user' Elapsed Time.
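
Optionally, a sorted TKPROF brings the most expensive cursors to the top of the
report; the sort keys shown are standard TKPROF options:

# tkprof 12345.trc 12345_sorted.prf explain=apps/<apps_pwd> sort=prsela,exeela,fchela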

Researching

At this point, you can pursue your performance issue with Oracle Support. Provide as
many of the requested files as possible. Or, you may opt to participate more proactively
in the research phase by performing the following steps before contacting Oracle
Support:

1. Use the Trace Analyzer and/or TKPROF file(s) to find the expensive SQL
statements. In most cases you want to focus your attention on those SQL
statements causing more than 5 to 20% of the overall logical reads or elapsed time
totals (summarized in the first page of the Trace Analyzer report, or the last page
of the TKPROF). If you are using the Trace Analyzer, you may also want to
review SQL statements with large non-idle wait times.
2. Document in your electronic file the most expensive SQL statement(s) found.
Include from the Trace Analyzer or TKPROF the block showing the SQL
statement; the block showing the parse, execute and fetch statistics for the SQL
statement; and the block that shows the Explain Plan for it.
3. For each expensive SQL statement (in most cases it is just one), search on
MetaLink for known issues. Review the SQL statement and grab from it
significant and not-so-common columns to make your search criteria as selective
as possible. Take pieces from the FROM clause and the WHERE clause. Be
aware that you may be experiencing a performance issue already reported by
another customer and for which either a fix or a workaround exists. While doing
your search on MetaLink, you may also want to use the short name of the module
(Form or Concurrent Program).
4. Document in your electronic file all MetaLink hits regarding your poorly
performing SQL statement(s). Even if they seem to be unrelated or never fixed.

Initial Troubleshooting

If you decide to be proactive in the initial troubleshooting phase, perform the following
steps for each SQL statement identified as expensive from the Trace Analyzer or
TKPROF (according to the number of logical reads performed, the elapsed time for its
complete execution, or the non-idle wait time). Start with the single most expensive SQL
statement that is not a BEGIN procedure call or data dictionary recursive SQL. In other
words, focus only on DML commands (SELECT, UPDATE, INSERT or DELETE)
accessing Apps tables and indexes.

Note: If the most expensive statement in your Trace Analyzer or TKPROF is NOT a
DML command (SELECT, UPDATE, INSERT or DELETE), in other words, it is a
PL/SQL library call, use the profiler.sql script from Note:243755.1.

1. Create a flat file (text type) pasting the DML command (SQL statement) directly
from the Trace Analyzer or TKPROF report. Leave one, and only one, blank line
at the end of the SQL statement; this is required by the SQLTXPLAIN.SQL and
coe_xplain_xx.sql scripts. Create your first flat file with the name
sql1.txt, the second sql2.txt, and so on. The SQLTXPLAIN.SQL and
coe_xplain_xx.sql scripts can handle any file name, but the standard is
sql<n>.txt. Bind variables (those with colons, i.e. ':b1', ':2') should be left
intact, and you should not have a semicolon ';' at the end of your SQL statement.
See the hypothetical sketch below.
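
For illustration only, a hypothetical sql1.txt could look like this; the bind variable
is left intact, there is no trailing semicolon, and one (and only one) blank line follows
the statement:

SELECT fu.user_name
  FROM fnd_user fu
 WHERE fu.user_id = :b1
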
2. For RDBMS Release 8.1 or later, download script SQLTXPLAIN.SQL from
Note:215187.1. For 8.0 download coe_xplain_80.sql from
Note:156959.1. And for 7.3 download coe_xplain_73.sql from
Note:156960.1. Read the downloaded version of the script and familiarize
yourself with it.
3. Place your SQLTXPLAIN.SQL set of files or your coe_xplain_xx.sql
script and your sql<n>.txt file(s) into a dedicated directory.
4. Execute the SQLTXPLAIN.SQL or coe_xplain_xx.sql script from SQL*Plus
connecting as apps/<apps_pwd>, passing as the inline parameter the name of the
file containing your SQL statement, sql<n>.txt, as indicated by the
instructions in the corresponding script (see commands below). The
SQLTXPLAIN.SQL and coe_xplain_xx.sql scripts will not execute the
SQL statement included in sql<n>.txt; they just parse your SQL statement
and proceed to explode it into pieces for a detailed analysis.

SQL> start SQLTXPLAIN.SQL sql<n>.txt
SQL> start coe_xplain_xx.sql sql<n>.txt

5. If you need to provide to Oracle Support the output of the SQLTXPLAIN.SQL or
coe_xplain_xx.sql scripts, compress and send the whole directory with all
spool files within it.
6. From the Trace Analyzer or TKPROF, and from the spool file created by the
SQLTXPLAIN.SQL or coe_xplain_xx.sql script, determine and compare
the Optimizer used. It should be consistent. Document in your electronic file
which Optimizer is being used: Rule Based - RBO or Cost Based - CBO.
7. If using the CBO, good statistics for the schema objects accessed are crucial to
generate an optimal execution plan. Locate in your spool file the column 'Last
Analyzed'. This column tells you when you last gathered stats for each table
accessed by your SQL statement (you can do the same to review all indexes).
Look also at the column that reads 'Delta Percent' to determine the gap between
the actual number of rows in your tables (dynamically calculated with the
COUNT function) and the number of rows recorded in your data dictionary as
part of the CBO stats for the tables accessed by your SQL statement. If the stats
for the schema objects accessed by your SQL statement are more than one month
old, or the gap reported under the 'Delta Percent' column is more than 10%, you
need to gather fresh stats for the affected schema objects.
8. Only for CBO: If you determine that some or all the schema objects accessed by
your SQL statement require refreshed stats, you must use the appropriate tool to
gather stats according to your Apps Release. If your Apps Release is 11i, you
may want to use the SQLTGSTATS.SQL script included in the SQLT.zip file.
9. Only for CBO: If you had to refresh stats, execute the SQLTXPLAIN.SQL or
coe_xplain_xx.sql script again to produce a new Explain Plan. If the
Explain Plan changed, measure again the performance of your original
transaction, as it may have changed as well.
10. If you are using the CBO, are on 8.1 or later, and have identified only one
expensive SQL statement, there are two files that you want to recover and have
available for Oracle Support. Be aware that the second file is binary, therefore
you must copy it across servers as such.
1. Find in the 'user_dump_dest' directory a raw SQL Trace generated
automatically by the SQLTXPLAIN.SQL script. This raw SQL Trace file
is completely unrelated to the first raw SQL Trace file you generated when
executing your original transaction. If SQLTXPLAIN.SQL was executed
using SQL*Plus on the db server, the raw SQL Trace generated by it may
have already been copied into the same dedicated directory from which
the script was executed.
2. Generate binary file SQLT.dmp with your CBO statistics for the affected
objects, by executing the Export command (exp) as per the
INSTRUCTIONS.TXT included in SQLT.zip. Use the command below.
Execute the export using ORACLE_HOME 8.0.6.

# exp apps/<apps_pwd> file=SQLT tables='SQLT$STATTAB'

11. If you have a vanilla instance, or one in which the same transaction performs fine,
follow the same steps to reproduce the Explain Plan, and compare them to verify
that you can produce the same Explain Plan in both instances. If not, focus first
on indexes. If you find index discrepancies among instances, solve such
discrepancies.
12. If you are using RBO and see in your Explain Plan that a custom index is being
accessed, drop the custom index and measure the performance again. Be aware
that once you drop an index being accessed according to the Explain Plan, all
prior measurements and files become invalid. The Explain Plan will change, as
well as the performance.
13. If on an 11i instance you notice that your SQL statement makes references to
Apps Views, find the corresponding view definitions in the spool file created by
the SQLTXPLAIN.SQL script. You may optionally execute coe_view.sql to
generate scripts which can be used later to clone your views from one instance
into another. Be aware there may exist several versions of the same view,
depending on the version of the corresponding ODF file used to create the view.
You can use the syntax of the command below to search from the corresponding
APPL_TOP for the specific ODF file that creates a view. Once you know the file
and find its version, you can search on MetaLink for newer versions:

# find . -type f -exec grep -i "CZ_COMPONENT_SELECTIONS_V" {} \; -print

14. If on an 11i instance, you may want to ensure all required and recommended
init.ora parameters for 11i are set correctly. Since you already have at least one
SQLTXPLAIN.SQL or coe_xplain_xx.sql report, near the end of them
you will find a list of init.ora parameters affecting the behavior of the CBO. The
init.ora parameters required for your Apps Release are clearly identified. If
necessary, fix any parameter showing an incorrect setting, and repeat your test.

Apps 11i init.ora parameters

Development has provided an official list under Note:216205.1.

You may optionally download the bde_chk_cbo.sql script from Note:174605.1.
This script allows you to quickly validate all database initialization parameters according
to Note:216205.1. This script also includes which EVENTs must be set or unset for
Oracle Apps 11i.

Advanced Troubleshooting

This phase requires special skills, and at this point most performance issues are raised to
Oracle Support. If you are proficient in SQL*Plus and you understand the Explain Plan,
you may want to review this section; otherwise collect the requested files and provide
them to Oracle Support.

Fully read the output of the SQLTXPLAIN.SQL or coe_xplain_xx.sql script and
make sense of the Explain Plan and all related information. Familiarize yourself with this
report and make a sanity check on its contents.
To proceed with this phase, you need to prepare a tuning environment. This tuning
environment can be located on the same instance on which the performance issue is
observed, or it can be on a different instance. If the issue is observed in a Production
instance, try to set the tuning environment on the Test or Development instance. Tune
on the Production instance only when you have failed to reproduce the issue on a
non-production environment. Reproducing an issue does not necessarily mean
reproducing its performance; in most cases it is sufficient to reproduce the same 'bad'
Explain Plan, rather than the slow access itself.

In general, a good tuning environment is one in which you can perform the following two
independent activities at will. You may end up with two tuning environments, one for
each activity.

1. Reproduce the exact same Explain Plan from original version of SQL statement.
2. Execute the expensive SQL statement in its original version and on several
different 'modified' versions (hopefully improved).

Reproducing the exact same Explain Plan on a different instance usually represents a
significant challenge, as the CBO decides the plan based on: CBO statistics, init.ora
parameters, schema object attributes, RDBMS Release, and, in a very few cases, the
Platform used. The latter is particularly true when hitting a platform-specific OS or
RDBMS Bug. With the RBO, reproducing the same Explain Plan is much easier, since it
depends mostly on the schema objects defined and accessed. Therefore, this advanced
troubleshooting phase relates mostly to issues regarding the CBO.

The focus of the tuning effort is usually in or around the Explain Plan Operation with the
largest number of rows, according to the Explain Plan from the Trace Analyzer or
TKPROF report. If the column 'Rows' on the Explain Plan from the Trace Analyzer or
TKPROF report shows all zeroes (meaning zeroes for all Operations), then it is almost
indispensable to isolate the SQL statement; identify, define and assign the values of the
bind variables; and execute it from SQL*Plus with SQL Trace enabled. Then create a
new Trace Analyzer or TKPROF report from the new raw SQL Trace; it will have the
Rows column of the Explain Plan populated. This relates to how SQL*Plus handles
cursors compared to other tools: the cursor must be closed in order to get the
number-of-rows column in the Explain Plan, and SQL*Plus keeps only one cursor open
at a time, forcing the closure of the prior cursor.

To reproduce the exact same Explain Plan from original version of SQL statement, try
the following:

1. Copy the CBO statistics from the source instance (where the SQL performs
poorly) into the destination instance (tuning environment):
1. When you ran the SQLTXPLAIN.SQL on the source instance, it updated
a staging table SQLT$STATTAB with the CBO stats for all schema
objects related to the SQL statement (tables, indexes, columns and
histograms).
2. Use the Export command on the source instance, according to the
INSTRUCTIONS.TXT file provided inside the SQLT.zip file.
Generate an external binary file SQLT.dmp with the following
command:

# exp apps/<apps_pwd> file=SQLT tables='SQLT$STATTAB'
3. Copy this binary file from the source to the destination instance. Do not use
ASCII mode; always transfer it as BINARY.
4. Use the Import command on destination instance, as documented on the
SQLTSTATTAB.SQL script, also included inside the SQLT.zip file.

# imp apps/<apps_pwd> file=SQLT tables='SQLT$STATTAB' ignore=y
5. Execute script SQLTSTATTAB.SQL on the destination instance to update
the data dictionary CBO stats from imported SQLT$STATTAB table.

Note: To restore the stats in the destination instance, simply gather new
stats for affected schema objects. You may also want to use the
SQLTGSTATS.SQL script included in the SQLT.zip file. This means
you don't need to backup CBO stats on destination instance prior to
overriding them with stats from source instance.

2. Review and compare the init.ora parameters files from source and destination
instances. If these instances are Production and Test, you may have the exact
same parameters. If not, annotate the differences in case you have to adjust the
CBO relevant parameters in the destination instance. You don't have to take
action right away, as you may produce the same Explain Plan without changing
any init.ora parameter. Even if you do need to adjust an init.ora parameter, most
of them can be changed with an ALTER SESSION command instead of
modifying the init.ora file and bouncing the database altogether; for example, see
the sketch below.
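
A minimal sketch; the parameters and values shown are illustrative session-modifiable
CBO parameters, not recommendations:

SQL> alter session set optimizer_index_cost_adj = 100;
SQL> alter session set hash_join_enabled = true;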
3. Review and compare schema object attributes such as 'Partitioned' and Parallel
'Degree'. If you find discrepancies, sync them up.
4. Annotate the RDBMS Release version (up to the fifth digit), as well as the
Platform. You must be at least on the same RDBMS Release up to the 3rd digit
before trying to reproduce the same Explain Plan. It is even better if you are in
sync up to the 4th digit.
5. Now try, using the SQLTXPLAIN.SQL script, to generate the same Explain Plan
in source and destination. If you don't get the same Explain Plan, adjust CBO
related init.ora parameters in destination and try again (use ALTER SESSION if
parameter allows). If you still cannot obtain the same Explain Plan using
common CBO stats, init.ora parameters, and schema object attributes, try then
ruling out differences among the source and destination instances (one at a time).
Determine what is different between source and destination instances. Focus on
the discrepancies that may be causing the 'bad' Explain Plan (assuming the plan
on destination instance performs better, of course), and rule them out (one by one)
until you find which of the discrepancies causes the Explain Plan to switch from
'good' to 'bad'.

Note: In most cases, finding the root cause of a 'bad' Explain Plan is the same as
finding the solution to the performance issue.

6. If you find yourself in a situation in which CBO stats, schema object attributes,
and init.ora, are all the same, but Platform and RDBMS Release are different, and
instance in which RDBMS is newer performs better, you can be almost certain
that upgrading the RDBMS engine will solve your performance issue.
7. If you have the same CBO stats, schema object attributes, init.ora and Platform,
and only RDBMS Release is different, and the instance with the newer Release
performs better, try to upgrade your RDBMS Release on the older instance.
8. If you, or Oracle Support, can reproduce the same 'bad' Explain Plan on the same
or different Platform (common up to the 3rd digit RDBMS Release), using your
CBO stats, most likely your issue is an Application Bug. In such case, the issue
needs to be reported to Oracle Development and you will be given a new Bug
number.
9. Once your performance issue has been cataloged as a new Application Bug, the
next steps in the troubleshooting process require you to explain and execute the
SQL statement in its original form and in several different modified versions,
trying to create one execution plan delivering a better performance.

Note: If you have two similar instances, one performing well and the other
poorly, with similar data but producing different Explain Plans, as a
workaround you can force the 'good' plan into the slow instance by migrating the
CBO stats using SQLTXPLAIN.SQL together with SQLTSTATTAB.SQL.

To execute the expensive SQL statement in its original version and in several different
modified versions, try the following:

1. You need to be able to isolate the expensive SQL statement for execution from
SQL*Plus, at will. To do this, you first need to find the values of the bind
variables referenced. If you can deduce the values by reviewing the SQL
statement, do so. Otherwise, you might have to use one of the methods available
for tracing with bind variables according to Note:171647.1, and then use the
Trace Analyzer (Note:224270.1) to process the raw SQL Trace created with
Event 10046 on your transaction and identify the bind variables from it.
2. Assign the values of the bind variables referenced in the SQL statement. Be
careful to preserve the same data types. You may need to define and assign
values to bind variables; avoid replacing bind variables with literals.
3. If your Explain Plan shows any table with suffix '_ALL', this means you need to
set up the multi-org environment. Using the corresponding Operating Unit
(ORG_ID), set multi-org with this command:

SQL> exec fnd_client_info.set_org_context('&org_id');

4. Use ALTER SESSION commands to turn SQL_TRACE ON and OFF, and
execute your SQL statement. To facilitate the whole process, you may want to
create a script with the setting of multi-org, the 'ALTER SESSION SET
SQL_TRACE = TRUE;' command, the assignment of the bind variable values,
your SQL statement, and the command to terminate the SQL Trace. This way
you can execute the SQL statement systematically, as in the sketch below.
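
A hypothetical driver script; the ORG_ID (204), the bind variable name (:b1) and its
value, and the query itself are placeholders for your own:

SQL> exec fnd_client_info.set_org_context('204');
SQL> variable b1 NUMBER
SQL> exec :b1 := 12345;
SQL> alter session set sql_trace = true;
SQL> -- run your isolated SQL statement here, referencing :b1
SQL> alter session set sql_trace = false;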
5. For INSERT, UPDATE or DELETE SQL statements, you need to focus on the
subquery performing poorly (usually the main query feeding the
insert/update/delete). Transform the original SQL into an isolated stand-alone
query. If this is not possible, add the ROLLBACK command to the script with
your INSERT, UPDATE or DELETE SQL statement.
6. Tuning the SQL statement requires a lot of practice and patience. All methods
converge to the same principle: apply your knowledge to create a modified and
improved version of the original SQL statement and determine if it performs
better for every situation. Some analysts mostly use the Explain Plan leaving the
benchmark of a candidate to the end. Others prefer to execute and measure the
performance of each candidate (modified SQL statement). It becomes a matter of
style at this level.
7. While iterating in this process, generating Explain Plans for several versions of the
same SQL statement, you can use the SQLTX.SQL script (included in
SQLT.zip), or the bde_x.sql script from Note:174603.1. These scripts
generate a very simple Explain Plan, and their execution is faster than the more
complex SQLTXPLAIN.SQL.
8. Things to try in order to improve the Explain Plan and ultimately the performance
of the SQL statement:
1. Verify all tables referenced in the FROM clause are actually joined in the
WHERE clause.
2. Improve CBO stats, either by using a higher sample size or with
histograms for columns that are filtered with literals in the WHERE
clause. You can also temporarily modify the CBO stats for an index or
column (only as a short term workaround).
3. Create indexes with better selectivity for specific filter or join conditions.
These indexes may ultimately be custom indexes.
4. Switch the optimizer from RBO to CBO, or from CBO to RBO (the latter
as a temporary solution only).
5. Use one or more CBO Hints; see the hedged example after this list.
6. Reword the SQL statement.
7. Avoid overuse of UNION, ORDER BY, DISTINCT and other clauses that
may not be necessary.
8. Use dynamic SQL to avoid poorly written code with functions like NVL
or DECODE in the WHERE clause, which may affect the selectivity of
the predicates.
9. If the SQL statement has been fully tuned, and the reason of the poor
performance is purely the volume of data necessarily retrieved, then you
may want to consider data partitioning or parallel query. Keep in mind
that data partitioning or parallel processing are not the right solution to fix
a poorly tuned SQL.
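
As a sketch of point 5 above, a CBO Hint forcing a specific index; the table, index and
bind variable are illustrative, and, as noted, hints are better treated as a diagnostic or
short-term fix than a final solution:

SQL> select /*+ INDEX(msi MTL_SYSTEM_ITEMS_U1) */
  2         msi.segment1
  3    from mtl_system_items msi
  4   where msi.organization_id = :b1;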

Back-End Tuning
Once you have validated Certify and Statistics Gathering, proceed with this section if
your performance issue qualifies as this high-level tuning category.

• Data Gathering
• Initial Troubleshooting
• Advanced Troubleshooting

Data Gathering (files and information)

Gather the indicated files and document in your electronic file the following
information:

1. Architecture information: Basically, how are the database server and the middle-
tier configured? Are they in the same box? If not, how many boxes do you have?
Which type? etc. Try answering: where do you have what?
2. Hardware Profile: CPU(s) number, type and speed. Amount of memory. How is
the I/O subsystem configured? What is the network type and speed? etc. Provide
high-level information of your hardware and components.
3. Server workload profile: What do you have running on the same box? Databases
in use on the same server. Number of concurrent active users. Number of
concurrent active processes at a given time (usually peak times). Basically, how
busy is your box?
4. Do other applications or databases you may have running in the same box also
perform poorly? If not, explain.
5. What is the current and projected system load? What is the current vs. projected
number of active concurrent users. Same for active concurrent processes.
6. What are the concurrent processing policies? Have you moved as much of the
concurrent processing as possible to periods of lower online user activity? Do
you have any restrictions on concurrent processing at all? Do you use concurrent
managers to restrict these intensive processing activities during the day?
7. Have you validated any Operating System resource contention? This includes
CPU, memory and disk. Your comments should take into consideration all your
boxes (back-end and middle-tier). Validate and document if you are experiencing
CPU spikes, memory paging, hot disks, etc. If you have noticed memory
swapping, document in detail, as well as any action being taken, including short-
term actions like borrowing hardware from Development or Test servers.
8. Make a list of all application modules (application groups), installed and in use on
this Apps instance. Example: AP, GL, FND, ONT, etc.
9. For the application modules (groups) affected by poor performance, document
currently installed patchset level.
10. For same affected application modules, research on MetaLink and document in
your electronic file what is the latest patchset available. Use the 'Patches' option
on MetaLink main menu. Document if you can upgrade to latest patchset levels
shortly.
11. What is the Operating System patching level in your server box? Try to rule out
any known performance issues on your hardware platform. In general, try to keep
your OS patching level as current as possible. It is known that several overall
performance issues are caused by inadequate OS patching level. Consult your
hardware vendor in this regard.
12. If you have any prior Performance related Report with specific recommendations
regarding hardware, configuration sizing, benchmark, stress test or similar,
include the report in your electronic documentation of your performance issue
and provide it to Oracle Support. This includes any formal or informal
recommendations made by any Oracle or 3rd party entity. Especially important if
you are considering or in the process of implementing such recommendations.
13. Review init.ora parameters file doing a sanity check on it, and have it ready to
make it available to Oracle Support. If you find that some Events are set in this
file, determine if there is a strong valid reason why. If none, delete them. If your
init.ora file references another file by using the command IFILE, review the
referenced file and have it available as well.
14. Review ALERT.LOG for possible server errors or any other evident
abnormalities. If the ALERT.LOG is large, have the last 48 hours of it available
for Oracle Support. Be sure the ALERT.LOG includes the monitored interval
where the performance was poor.
15. Make file APPLPTCH.TXT available to Oracle Support. This file keeps track of
patches applied to Apps.
16. Identify up to the top 10 user transactions performing poorly. These transactions
are usually what is causing your users to complain about the poor performance in
the first place. Examples: entering an order, saving a line, performing a query, etc.
Document for each of these up to 10 'critical' user transactions:
1. Transaction name or short description. Example: booking an order
2. Performance during normal system load (secs/mins)
3. Expected performance (secs/mins)
4. Performance when system load is almost zero (when only one or very few
users are active on instance)

Initial Troubleshooting

At this point, you can pursue your performance issue with Oracle Support. Provide as
many of the requested files as possible. Or, you may opt to participate more proactively
in the initial troubleshooting phase by performing the steps in this section before
contacting Oracle Support.

1. For 11i instances: Verify required and recommended init.ora parameters are set
correctly. Use the provided script AFCHKCBO.sql and Interoperability Notes as
well. Generate the AFCHKCBO.TXT file containing the Apps 11i required
init.ora parameters names and values. Make this file available to Oracle Support.
Fix any discrepancy, and document the possible reason why the 'current' and 'required'
columns don't have the same value. If the AFCHKCBO.sql script and the
Interoperability Notes require different values, use the values from the Note. To
execute the script use the syntax below:

# sqlplus apps/<apps_pwd>
SQL> SPOOL AFCHKCBO.TXT;
SQL> start $FND_TOP/sql/AFCHKCBO.sql
SQL> SPOOL OFF;
SQL> QUIT;
2. For 11i instances: There is another script that validates all required and
recommended init.ora parameters according to RDBMS Release. Use script
bde_chk_cbo.sql from Note:174605.1 as an extension to
AFCHKCBO.sql. Be aware that script bde_chk_cbo.sql provides more
detail than AFCHKCBO.sql.
3. For 11i instances: Verify CBO statistics by executing the
bde_last_analyzed.sql script from Note:163208.1. This script reports
CBO stats per application group (schema name) and per date. It also creates a
summary at the beginning indicating how old the CBO stats are, whether data dictionary
objects are free of CBO stats, and whether you have partitioned tables requiring you to
fix their global stats. Review the BDE_LAST_ANALYZED.TXT reports and make them
available to Oracle Support.
4. For 10.7 and 11.0 instances: Verify you are gathering your CBO stats at least
once per month, for all schema objects, with an estimate sample of 10%.
5. For 8.0 and later: When the system is performing poorly, take one snapshot of the
SQL area to easily identify any evident expensive SQL being executed, or
recently executed. For 8.1 and later, use SQLAREAT.SQL from
Note:238684.1. For 8.0 use coe_sqlarea_80.sql from Note:163209.1.
Read and familiarize yourself with the specific version of the SQLAREAT.SQL
script and its output. Compress the whole directory which contains the
SQLAREAT.SQL generated spool file(s).

Advanced Troubleshooting

This phase is not complex, but it requires you to invest more time initially, monitoring
the performance of your instance for up to one working day. Therefore, you may want to
report your overall performance issue to Oracle Support at this time. Once your issue has
been reported, proceed immediately with this advanced troubleshooting phase, in order
to quickly generate the necessary files to pursue a final resolution.

1. For 7.3 and 8.0: Generate and provide at least one REPORT.TXT file from the
UTL scripts 'UTLBSTAT/UTLESTAT'. Your DBA should be familiar with these
common performance monitoring scripts.
2. For 8.1 and later: Install the latest version available of the StatsPack package.
This package ships with 8.1, but it requires installation from its location at
$ORACLE_HOME/rdbms/admin into the database (as any other package). Look
for detailed instructions in the document spdoc.txt within the same directory.
Your DBA should already be familiar with this package, or should get familiar
with it, since it replaces the former UTL scripts 'UTLBSTAT/UTLESTAT'. Read
Note:153507.1 for more information on StatsPack for Oracle Apps. Once
installed, monitor poor performance during 4-8 hours, taking snapshots every 30
mins during the whole monitored interval. Then generate one or more StatsPack
Reports based on the snapshots taken while performance was worst (usually
between 1 and 2 hours within the 4-8 hours monitored interval). Provide these
reports to Oracle Support; a minimal snapshot sequence is sketched below.
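
A sketch, assuming StatsPack is installed under its default PERFSTAT user; check
spdoc.txt for the exact report script name in your release:

# sqlplus perfstat/<perfstat_pwd>
SQL> execute statspack.snap;
SQL> -- repeat the snapshot every 30 mins during the monitored interval, then:
SQL> @?/rdbms/admin/spreport.sql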
3. For all RDBMS Releases, monitor OS performance at the same time database
performance is monitored: for 8.1 and later, every 30 mins during the whole
monitored interval (4-8 hours); for 7.3 and 8.0, every 30 mins during the same
interval as the 'UTLBSTAT/UTLESTAT' execution. Use standard OS
performance monitoring tools like vmstat, iostat, mpstat and top; see the sketch
below.
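
For example, capturing 30-minute samples over 8 hours in the background (interval and
count are illustrative; flags vary slightly by platform):

# vmstat 1800 16 > vmstat.txt &
# iostat 1800 16 > iostat.txt &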
4. If there is an obvious resource contention in your OS, stop here and try to
alleviate it with permanent or temporary actions at the OS level.
5. Have your DBA read and analyze your UTLBSTAT/UTLESTAT or StatsPack
Report. Search on MetaLink for possible known issues with the same
characteristics.
6. If you opt to be proactive in reducing the number of performance issues due to
known problems, keep your code as recent as possible, starting with the OS and
moving towards the RDBMS, leaving Apps code last. Upgrading the OS software
is usually painless, the RDBMS requires minor testing, and the Apps layer
requires significant user testing.
7. Provide all your gathered files, the performance template, and dial-in access to
Oracle Support. Familiarize yourself with Oracle Direct Connect (ODC).

Tech-Stack Tuning
Once you have validated Certify and Statistics Gathering, proceed with this section if
your performance issue qualifies as this high-level tuning category.

Data Gathering (files and information)

Gather the indicated files and document in your electronic file the following
information:

1. How long does it take for the process to complete (specify hours/mins/secs)?
2. How long did it use to take (before having the performance issue)?
3. What is the expected performance for this process (specify hours/mins/secs)?
4. Architecture information: Basically, how are the database server and the middle-
tier configured? Are they in the same box? If not, how many boxes do you have?
Which type? etc. Try answering: where do you have what?
5. Hardware Profile: CPU(s) number, type and speed. Amount of memory. How is
the I/O subsystem configured? What is the network type and speed? etc. Provide
high-level information of your hardware and components.
6. Server workload profile: What do you have running on the same box? Databases
in use on the same server. Number of concurrent active users. Number of
concurrent active processes at a given time (usually peak times). Basically, how
busy is your box?
7. Do other applications or databases you have running on the same box also
perform poorly? If not, explain.
8. Do other applications running on the same client also perform poorly? If not,
explain.
9. If the issue relates to slowness in loading a Form or navigating, does it reproduce
when the client is connected directly to the network hub closest to the server?
10. What is the current and projected system load? What is the current vs. projected
number of active concurrent users. Same for active concurrent processes.
11. What are the concurrent processing policies? Have you moved as much of the
concurrent processing as possible to periods of lower online user activity? Do
you have any restrictions on concurrent processing at all? Do you use concurrent
managers to restrict these intensive processing activities during the day?
12. Have you validated any Operating System resource contention? This includes
CPU, memory and disk. Your comments should take into consideration all your
boxes (back-end and middle-tier). Validate and document if you are experiencing
CPU spikes, memory paging, hot disks, etc. If you have noticed memory
swapping, document in detail as well as any action being taken, including short-
term actions like borrowing hardware from Development or Test servers.
13. Make a list of all application modules (application groups), installed and in use on
this Apps instance. Example: AP, GL, FND, ONT, etc.
14. For the application modules (groups) affected by poor performance, document the
currently installed patchset level.
15. For the same affected application modules, research on MetaLink and document
in your electronic file what is the latest patchset available. Use 'Patches' option on
MetaLink main menu. Document if you can upgrade to latest patchset levels
shortly.
16. What is the Oracle Forms Release version?
17. What is the JInitiator version?
18. Browser name and version?
19. Document in your electronic file, in full detail, the complete navigation path, as
well as step-by-step instructions to reproduce your issue. If the transaction
performing poorly is a query, document which parameters are passed to the query
as well as which parameters are NULL, or automatically defaulted. Be as specific
as possible.

Proceed to report your Tech-Stack performance issue to Oracle Support.

Reporting to Oracle
Providing Oracle with well organized, accurate, and comprehensive information and files
helps to expedite the total resolution time. Try to strike the right balance between the
initial time invested in gathering as much information and files early in the process, and
the benefit they provide to the overall process by reducing the number of iterations
between the end-user, the technical user, Oracle Support, and Oracle Development.
Reports like the one produced by the SQLTXPLAIN.SQL script consolidate pieces of
information like CBO stats, indexes, init.ora parameters, etc. Having all this information
gathered systematically and in one single place certainly reduces the number of iterations
and therefore the total resolution time.

Requested Files for Transaction Tuning Issues

At the very minimum, provide files 1 to 5 below. If you participated proactively in the
initial troubleshooting phase, or Oracle Support has guided you through it, provide files 6
to 9 accordingly.

1. Apps Performance Bug Template - Transaction Tuning
2. For Pro*C Concurrent Programs, MODULES.TXT with versions of modules
3. For Concurrent Programs, Log file with debugging information
4. Raw SQL Trace from the transaction, with or without Event 10046 details
(compressed), complete or incomplete
5. Trace Analyzer and/or TKPROF file(s) with Explain Plans
6. SQL<n>.TXT file(s) with identified expensive SQL
7. Compressed file with spool files generated by SQLTXPLAIN.SQL
8. Only for CBO and SQL1.TXT: Raw SQL Trace produced by
SQLTXPLAIN.SQL
9. Only for CBO and SQL1.TXT: Binary file SQLT.dmp with CBO stats

Requested Files for Back-End Tuning Issues

At the very minimum, provide files 1 to 5 below. If you participated proactively in the
initial troubleshooting phase, or Oracle Support has guided you through it, provide files 6
to 8 accordingly. If you opted to, or Oracle Support instructed you to, perform the
advanced troubleshooting steps, provide files 9 to 11 accordingly.
1. Apps Performance Bug Template - Back-End Tuning
2. Any prior Performance related Report available
3. File init.ora, including any ifile(s) referenced on the init.ora
4. ALERT.LOG file, including last 48 hours
5. APPLPTCH.TXT file
6. For 11i, AFCHKCBO.TXT and/or BDE_CHK_CBO.TXT with required init.ora
parameters
7. For 11i, BDE_LAST_ANALYZED.TXT with summary of CBO stats
8. For 8.1 or later, SQLAREAT.SQL spool files with expensive SQL statements
9. For 7.3 or 8.0, REPORT.TXT produced by the UTLBSTAT/UTLESTAT scripts
10. For 8.1 or later, StatsPack Report
11. Set of stats from OS performance monitoring tools, like: vmstat, iostat, mpstat
and top

Requested Files for Tech-Stack Tuning Issues

1. Apps Performance Bug Template - Tech-Stack Tuning

Related Documents
Scripts

All related COE, BDE, TRCA, SQLT and PROF scripts can be downloaded from
MetaLink.

MetaLink   Script                  Title
224270.1   TRCANLZR.SQL            Trace Analyzer (8.1.6+)
215187.1   SQLTXPLAIN.SQL          Enhanced Explain Plan and related info for one SQL statement (8.1+)
243755.1   profiler.sql            Implementing and Using the PL/SQL Profiler (8.1+)
156959.1   coe_xplain_80.sql       Enhanced Explain Plan for given SQL Statement (8.0)
156960.1   coe_xplain_73.sql       Enhanced Explain Plan for given SQL Statement (7.3)
174603.1   bde_x.sql               Simple Explain Plan for given SQL Statement (8.1+)
156972.1   coe_view.sql            Clones views across instances for SQL tuning exercises (8.0+)
238684.1   SQLAREAT.SQL            Top 10 Expensive SQL from SQL Area (8.1+)
163209.1   coe_sqlarea_80.sql      Top 10 Expensive SQL from SQL Area (8.0)
169630.1   bde_session.sql         Expensive SQL and resources utilization for given Session ID (8.1+)
174605.1   bde_chk_cbo.sql         Current, required and recommended Apps 11i init.ora params (11i)
163208.1   bde_last_analyzed.sql   Verifies Statistics for all installed Apps modules (11i)
156968.1   coe_stats.sql           Automates CBO Stats Gathering using FND_STATS and Table sizes (11i)
156969.1   coe_trace.sql           SQL Tracing Apps online transactions with Event 10046 (11i)
156970.1   coe_trace_11.sql        SQL Tracing Apps online transactions with Event 10046 (11.0)
156971.1   coe_trace_all.sql       Turns SQL Trace ON for all open DB Sessions (8.0+)
156965.1   coe_locks.sql           Session and serial# for locked Rows (7.3+)

Templates

MetaLink   Apps Performance Bug Template
           Transaction Tuning
169938.1   Back-End Tuning
169939.1   Tech-Stack Tuning

Notes

MetaLink Title
39817.1 Interpreting Raw SQL Trace and DBMS_SUPPORT.START_TRACE
153507.1 Oracle Applications and StatsPack
171647.1 Tracing Oracle Applications using Event 10046
170223.1 Profile Option: Initialization SQL Statement - Custom