
TOP DATABASE PERFORMANCE ISSUES/PROBLEMS AND HOW TO RESOLVE THEM

Library Cache/Shared Pool Latch waits

Library Cache/Shared Pool Latch waits are typically a contention problem caused by unshared SQL (in the case of
the library cache latch) or by exhaustion of
space in the shared pool (for the shared pool latch). For the shared pool latch, although new space allocations also
require the latch, it is typically the constant freeing
AND allocation of space caused by too small a shared pool that creates the problem.

Note 62143.1 Understanding and Tuning the Shared Pool



Introduction
The aim of this article is to introduce the key issues involved in tuning the shared pool in Oracle 7 through 9. The notes here
are particularly important if your system shows any of the following:

• Latch contention for the library cache latch/es


• Latch contention for the shared pool latch
• High CPU parse times
• High numbers of reloads in V$LIBRARYCACHE
• Lots of parse calls
• Frequent ORA-04031 errors

What is the shared pool ?


Oracle keeps SQL statements, packages, object information and many other items in an area in the SGA known as the
shared pool. This sharable area of memory is managed as a sophisticated cache and heap manager rolled into one. It has 3
fundamental problems to overcome:

1. The unit of memory allocation is not a constant - memory allocations from the pool can be anything from a few bytes to
many kilobytes
2. Not all memory can be 'freed' when a user finishes with it (as is the case in a traditional heap manager) as the aim of
the shared pool is to maximize sharability of information. The information in the memory may be useful to another
session - Oracle cannot know in advance if the items will be of any use to anyone else or not.
3. There is no disk area to page out to so this is not like a traditional cache where there is a file backing store. Only
"recreatable" information can be discarded from the cache and it has to be re-created when it is next needed.

Given this background one can understand that management of the shared pool is a complex issue. The sections below list
the key issues affecting the performance of the shared pool and its associated latches.

Items covered include:

• Terminology
• Benefits of Literal SQL ?
• Why Share SQL
• Reducing the load on the Shared Pool
o Parse Once / Execute Many
o Eliminating Literal SQL
o Avoid Invalidations
o CURSOR_SHARING parameter (8.1.6 onwards)
o SESSION_CACHED_CURSORS parameter
o CURSOR_SPACE_FOR_TIME parameter
o CLOSE_CACHED_OPEN_CURSORS parameter
o SHARED_POOL_RESERVED_SIZE parameter
o SHARED_POOL_RESERVED_MIN_ALLOC parameter
o SHARED_POOL_SIZE parameter
o _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)
o Precompiler HOLD_CURSOR and RELEASE_CURSOR Options
o DBMS_SHARED_POOL.KEEP
o Flushing the SHARED POOL
o Using V$ Views (V$SQL and V$SQLAREA)
o MTS & XA
• Useful SQL for looking at Shared Pool problems
• Issues in various Oracle Releases

Terminology

Literal SQL

A literal SQL statement is considered as one which uses literals in the predicate/s rather than bind variables where the value
of the literal is likely to differ between various executions of the statement.
Eg 1:
SELECT * FROM emp WHERE ename='CLARK';

is used by the application instead of

SELECT * FROM emp WHERE ename=:bind1;


Eg 2:
SELECT sysdate FROM dual;

does not use bind variables but would not be considered a literal SQL statement for the purposes of this article as it can be shared.

Eg 3:
SELECT version FROM app_version WHERE version>2.0;

If this same statement was used for checking the 'version' throughout
the application then the literal value '2.0' is always the same
so this statement can be considered sharable.

Hard Parse

If a new SQL statement is issued which does not exist in the shared pool then it has to be parsed fully. Eg: Oracle has to
allocate memory for the statement from the shared pool, and check the statement syntactically and semantically. This is
referred to as a hard parse and is very expensive both in terms of CPU used and in the number of latch gets performed.

Soft Parse

If a session issues a SQL statement which is already in the shared pool AND it can use an existing version of that statement
then this is known as a 'soft parse'. As far as the application is concerned it has asked to parse the statement.

Identical Statements?

If two SQL statements mean the same thing but are not identical character for character then from an Oracle viewpoint they
are different statements. Consider the following issued by SCOTT in a single session:
SELECT ENAME from EMP;

SELECT ename from emp;


Although both of these statements are really the same they are not identical as an upper case 'E' is not the same as a lower
case 'e'.

Sharable SQL

If two sessions issue identical SQL statements it does NOT mean that the statement is sharable. Consider the following:
User SCOTT has a table called EMP and issues:

SELECT ENAME from EMP;

User FRED has his own table called EMP and also issues:

SELECT ENAME from EMP;


Although the text of the statements is identical, the EMP tables are different objects. Hence these are different versions of
the same basic statement. There are many things that determine whether two identical SQL strings are truly the same statement
(and hence can be shared), including:

• All object names must resolve to the same actual objects


• The optimizer goal of the sessions issuing the statement should be the same
• The types and lengths of any bind variables should be "similar". (We don't discuss the details of this here but different
types or lengths of bind variables can cause statements to be classed as different versions)
• The NLS (National Language Support) environment which applies to the statement must be the same.

Versions of a statement

As described in 'Sharable SQL', if two statements are textually identical but cannot be shared then these are called 'versions'
of the same statement. If Oracle matches to a statement with many versions it has to check each version in turn to see if it is
truly identical to the statement currently being parsed. Hence high version counts are best avoided by:

• Standardising the maximum bind lengths specified by the client


• Avoiding identical SQL from lots of different schemas which use private objects. Eg: SELECT xx FROM MYTABLE;
where each user has their own MYTABLE
• Setting _SQLEXEC_PROGRESSION_COST to '0' in Oracle 8.1

Library Cache and Shared Pool latches

The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool.

The library cache latches (and the library cache pin latch in Oracle 7.1) protect operations within the
library cache itself.

All of these latches are potential points of contention. The number of latch gets occurring is influenced
directly by the amount of activity in the shared pool, especially parse operations. Anything that can
minimise the number of latch gets and indeed the amount of activity in the shared pool is helpful to both
performance and scalability.

Literal SQL versus Shared SQL
To give a balanced picture this short section describes the benefits of both literal SQL and sharable SQL.

Literal SQL

The Cost Based Optimizer (CBO) works best when it has full statistics and when statements use literals in their predicates.
Consider the following:
SELECT distinct cust_ref FROM orders WHERE total_cost < 10000.0;

versus

SELECT distinct cust_ref FROM orders WHERE total_cost < :bindA;


For the first statement the CBO could use histogram statistics that have been gathered to decide if it would be fastest to do
a full table scan of ORDERS or to use an index scan on TOTAL_COST (assuming there is one). In the second statement
CBO has no idea what percentage of rows fall below ":bindA" as it has no value for this bind variable with which to determine an
execution plan. Eg: ":bindA" could be 0.0 or 99999999999999999.9

There could be orders of magnitude difference in the response time between the two execution paths
so using the literal statement is preferable if you want CBO to work out the best execution plan for you.
This is typical of Decision Support Systems where there may not be any 'standard' statements which
are issued repeatedly so the chance of sharing a statement is small. Also the amount of CPU spent on
parsing is typically only a small percentage of that used to execute each statement so it is probably
more important to give the optimizer as much information as possible than to minimize parse times.

Sharable SQL
If an application makes use of literal (unshared) SQL then this can severely limit scalability and throughput. The cost of
parsing a new SQL statement is expensive both in terms of CPU requirements and the number of times the library cache
and shared pool latches may need to be acquired and released.

Eg: Even parsing a simple SQL statement may need to acquire a library cache latch 20 or 30 times.

The best approach to take is that all SQL should be sharable unless it is ad hoc or infrequently used
SQL where it is important to give the CBO as much information as possible in order for it to produce a good
execution plan.

Reducing the load on the Shared Pool

Parse Once / Execute Many

By far the best approach to use in OLTP type applications is to parse a statement only once and hold the cursor open,
executing it as required. This results in only the initial parse for each statement (either soft or hard). Obviously there will be
some statements which are rarely executed and so maintaining an open cursor for them is a wasteful overhead.

Note that a session only has <Parameter:OPEN_CURSORS> cursors available and holding cursors
open is likely to increase the total number of concurrently open cursors.

In precompilers the HOLD_CURSOR parameter controls whether cursors are held open or not, while in
OCI developers have direct control over cursors.
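
As an illustration only, the following is a minimal PL/SQL sketch of the parse once / execute many pattern using the standard DBMS_SQL package (it assumes the EMP table used in the examples elsewhere in this note): the statement is parsed a single time and then bound and executed repeatedly from the same cursor.

DECLARE
  c  INTEGER := dbms_sql.open_cursor;  -- open one cursor and keep it
  rc INTEGER;
BEGIN
  -- Parse the statement once only
  dbms_sql.parse(c, 'UPDATE emp SET sal = sal * 1.01 WHERE empno = :id',
                 dbms_sql.native);
  -- ... then bind and execute it many times without reparsing
  FOR r IN (SELECT empno FROM emp) LOOP
    dbms_sql.bind_variable(c, ':id', r.empno);
    rc := dbms_sql.execute(c);
  END LOOP;
  dbms_sql.close_cursor(c);
  COMMIT;
END;
/

Precompiler and OCI programs achieve the same effect by holding the statement handle open (for example HOLD_CURSOR=YES in the precompilers) rather than re-preparing it for each execution.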

Eliminating Literal SQL


If you have an existing application it is unlikely that you could eliminate all literal SQL but you should be prepared to
eliminate some if it is causing problems. By looking at the V$SQLAREA view it is possible to see which literal statements
are good candidates for converting to use bind variables. The following query shows SQL in the SGA where there are a
large number of similar statements:
SELECT substr(sql_text,1,40) "SQL",
count(*) ,
sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

Note: If there is latch contention for the library cache latches the above

statement may cause yet further contention problems.


The values 40, 5 and 30 are example values: the query looks for different statements whose first 40 characters are
the same, which have each been executed only a few times, and of which there are at least 30 different occurrences in the shared
pool. This query uses the idea that it is common for literal statements to begin "SELECT col1,col2,col3 FROM table WHERE ..."
with the leading portion of each statement being the same.

Note: There is often some degree of resistance to converting literal SQL to use bind variables. Be
assured that it has been proven time and time again that performing this conversion for the most
frequently occurring statements can eliminate problems with the shared pool and improve scalability
greatly.

See the documentation on the tool/s you are using in your application to determine how to use bind
variables in statements.
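
For illustration, here is a minimal PL/SQL sketch (assuming the EMP table used earlier in this note) contrasting a literal statement built by concatenation with its bind variable equivalent; only the second form produces one sharable cursor regardless of the value supplied.

DECLARE
  l_cnt   NUMBER;
  l_ename emp.ename%TYPE := 'CLARK';
BEGIN
  -- Literal version: a different, unsharable statement for every distinct value
  EXECUTE IMMEDIATE
    'SELECT count(*) FROM emp WHERE ename = ''' || l_ename || ''''
    INTO l_cnt;

  -- Bind variable version: one sharable statement whatever the value
  EXECUTE IMMEDIATE
    'SELECT count(*) FROM emp WHERE ename = :b1'
    INTO l_cnt USING l_ename;
END;
/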

Avoid Invalidations

Certain operations change the state of cursors to INVALID. These operations directly modify the
context of the objects associated
with the cursors. Examples are TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables or
indexes, and changes to grants on underlying objects. The associated cursors stay in the SQLAREA but
the next time they are referenced they must be fully reloaded and reparsed, so overall performance
is impacted.

The following query can help identify the cursors concerned:

SELECT substr(sql_text, 1, 40) "SQL", invalidations
  FROM v$sqlarea
 ORDER BY invalidations DESC;

For more details, see Note 115656.1 and Note 123214.1.

CURSOR_SHARING parameter (8.1.6 onwards)

<Parameter:CURSOR_SHARING> is a new parameter introduced in Oracle8.1.6. It should be used with caution in this
release. If this parameter is set to FORCE then literals will be replaced by system generated bind variables where possible.
For multiple similar statements which differ only in the literals used this allows the cursors to be shared even though the
application supplied SQL uses literals. The parameter can be set dynamically at the system or session level thus:
ALTER SESSION SET cursor_sharing = FORCE;

or

ALTER SYSTEM SET cursor_sharing = FORCE;


or it can be set in the init.ora file.
Note: As the FORCE setting causes system generated bind variables to be used in place of literals, a different execution
plan may be chosen by the cost based optimizer (CBO) as it no longer has the literal values available to it when costing the
best execution plan.

In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes statements that may
differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the
meaning of the statement or the degree to which the plan is optimized. This enhancement improves the
usability of the parameter for situations where FORCE would normally cause a different, undesired
execution plan. With CURSOR_SHARING=SIMILAR, Oracle determines which literals are "safe" for
substitution with bind variables. This will result in some SQL not being shared in an attempt to provide a
more efficient execution plan.

See Note 94036.1 for details of this parameter.

SESSION_CACHED_CURSORS parameter

<Parameter:SESSION_CACHED_CURSORS> is a numeric parameter which can be set at instance level or at session level
using the command:
ALTER SESSION SET session_cached_cursors = NNN;
The value NNN determines how many 'cached' cursors there can be in your session.

Whenever a statement is parsed Oracle first looks at the statements pointed to by your private session
cache - if a sharable version of the statement exists it can be used. This provides a shortcut access to
frequently parsed statements that uses less CPU and uses far fewer latch gets than a soft or hard
parse.

To get placed in the session cache the same statement has to be parsed 3 times within the same
cursor - a pointer to the shared cursor is then added to your session cache. If all session cache cursors
are in use then the least recently used entry is discarded.

If you do not have this parameter set already then it is advisable to set it to a starting value of about 50.
The statistics section of the bstat/estat report includes a value for 'session cursor cache hits' which
shows if the cursor cache is giving any benefit. The size of the cursor cache can then be increased or
decreased as necessary. SESSION_CACHED_CURSORS are particularly useful with Oracle Forms
applications when forms are frequently opened and closed.
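
Outside of a bstat/estat or STATSPACK report, the instance-wide figures can also be checked directly; a simple sketch against V$SYSSTAT (comparing cache hits with total parse calls gives a rough idea of the benefit):

SELECT sn.name, st.value
  FROM v$statname sn, v$sysstat st
 WHERE st.statistic# = sn.statistic#
   AND sn.name IN ('session cursor cache hits', 'parse count (total)');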

CURSOR_SPACE_FOR_TIME parameter

<Parameter:CURSOR_SPACE_FOR_TIME> controls whether parts of a cursor remain pinned between different executions
of a statement. This may be worth setting if all else has failed as it can give some gains where there are sharable
statements that are infrequently used, or where there is significant pinning / unpinning of cursors (see
<View:V$LATCH_MISSES> - if most latch waits are due to "kglpnc: child" and "kglupc: child" this is due to pinning /
unpinning of cursors) .

You must be sure that the shared pool is large enough for the work load otherwise performance will be
badly affected and ORA-4031 eventually signalled.
If you do set this parameter to TRUE be aware that:
• If the SHARED_POOL is too small for the workload then an ORA-4031 is much more likely to be signalled.
• If your application has any cursor leak then the leaked cursors can waste large amounts of memory having an adverse
effect on performance after a period of operation.
• There have historically been problems reported with this set to TRUE. The main known issues are:

o Bug 770924 (Fixed 8061 and 8160) ORA-600 [17302] may occur
o Bug 897615 (Fixed 8061 and 8160) Garbage Explain Plan over DBLINK
o Bug 1279398 (Fixed 8162 and 8170) ORA-600 [17182] from ALTER SESSION SET NLS...

CLOSE_CACHED_OPEN_CURSORS parameter

This parameter has been obsoleted in Oracle8i.


<Parameter:CLOSE_CACHED_OPEN_CURSORS> controls whether PL/SQL cursors are closed when a transaction
COMMITs or not. The default value is FALSE which causes PL/SQL cursors to be kept open across commits which can help
reduce the number of hard parses which occur. If this has been set to TRUE then there is an increased chance that the SQL
will be flushed from the shared pool when not in use.

SHARED_POOL_RESERVED_SIZE parameter

There are quite a few notes explaining <Parameter:SHARED_POOL_RESERVED_SIZE> already in circulation. The
parameter was introduced in Oracle 7.1.5 and provides a means of reserving a portion of the shared pool for large memory
allocations. The reserved area comes out of the shared pool itself.

From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to about 10% of
SHARED_POOL_SIZE unless either the shared pool is very large OR
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value:

• If the shared pool is very large then 10% may waste a significant amount of memory when a few Mb will suffice.
• If SHARED_POOL_RESERVED_MIN_ALLOC has been lowered then many space requests may be eligible to be
satisfied from this portion of the shared pool and so 10% may be too little.

It is easy to monitor the space usage of the reserved area using the <View:V$SHARED_POOL_RESERVED> which has a
column FREE_SPACE.
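
For example, a quick check of the reserved area might look like the following (a sketch only; a steadily growing REQUEST_FAILURES figure suggests the reserved area, or the shared pool itself, is too small):

SELECT free_space, used_space, requests, request_misses, request_failures
  FROM v$shared_pool_reserved;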

SHARED_POOL_RESERVED_MIN_ALLOC parameter

In Oracle8i this parameter is hidden.


SHARED_POOL_RESERVED_MIN_ALLOC should generally be left at its default value, although in certain cases values of
4100 or 4200 may help relieve some contention on a heavily loaded shared pool.

SHARED_POOL_SIZE parameter

<Parameter:SHARED_POOL_SIZE> controls the size of the shared pool itself. The size of the shared pool can impact
performance. If it is too small then it is likely that sharable information will be flushed from the pool and then later need to be
reloaded (rebuilt). If there is heavy use of literal SQL and the shared pool is too large then over time a lot of small chunks of
memory can build up on the internal memory freelists causing the shared pool latch to be held for longer which in turn can
impact performance. In this situation a smaller shared pool may perform better than a larger one. This problem is greatly
reduced in 8.0.6 and in 8.1.6 onwards due to the enhancement in Bug 986149 .
NB: The shared pool itself should never be made so large that paging or swapping occur as
performance can then decrease by many orders of magnitude.

See Note 1012046.6 to calculate the SHARED_POOL_SIZE requirements based on your current
workload.
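
As a rough check of how much of the configured shared pool is currently unused, V$SGASTAT can be queried along these lines (a sketch; note that a large amount of free memory is not in itself a problem, and with heavy literal SQL a smaller pool may still perform better as described above):

SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name = 'free memory';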

_SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)

This is a hidden parameter which was introduced in Oracle 8.1.5. The parameter is included here because its default setting has
caused some problems with SQL sharability. Setting this parameter to 0 can avoid these issues, which result in multiple
versions of statements in the shared pool.
Eg: Add the following to the init.ora file
# _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues
# See Note:62143.1 for details
_sqlexec_progression_cost=0
Note that a side effect of setting this to '0' is that the V$SESSION_LONGOPS view is not populated by long running queries.

See Note 68955.1 for more details of this parameter.

Precompiler HOLD_CURSOR and RELEASE_CURSOR Options

When using the Oracle Precompilers the behavior of the shared pool can be modified using the RELEASE_CURSOR
and HOLD_CURSOR options when precompiling the program. These options determine the status of a cursor in the library
cache and the session cache once execution of the cursor ends.

For further information on these options, please refer to Note 73922.1

DBMS_SHARED_POOL.KEEP

This procedure (defined in the DBMSPOOL.SQL script in the RDBMS/ADMIN directory) can be used to KEEP objects in the
shared pool. DBMS_SHARED_POOL.KEEP allows one to 'KEEP' packages, procedures, functions, triggers (7.3+) and
sequences (7.3.3.1+) and is fully described in Note 61760.1

It is generally desirable to mark frequently used packages such that they are always KEPT in the
shared pool. Objects should be KEPT shortly after instance startup since the database does not do it
automatically after a shutdown was issued.

NB: Prior to Oracle 7.2 DBMS_SHARED_POOL.KEEP does not actually load all of the object to be
KEPT into the shared pool. It is advisable to include a dummy procedure in each package to be KEPT.
This dummy procedure can then be called after calling DBMS_SHARED_POOL.KEEP to ensure the
object is fully loaded. This is not a problem from 7.2 onwards.
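
For example, from SQL*Plus a frequently used package and a sequence could be kept as follows (a sketch only; SCOTT.APP_PACKAGE and SCOTT.APP_SEQ are hypothetical names, and the flag 'P' denotes a package/procedure/function while 'Q' denotes a sequence):

EXECUTE dbms_shared_pool.keep('SCOTT.APP_PACKAGE', 'P');
EXECUTE dbms_shared_pool.keep('SCOTT.APP_SEQ', 'Q');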

Flushing the SHARED POOL

On systems which use a lot of literal SQL the shared pool is likely to fragment over time such that the degree of concurrency
which can be achieved diminishes. Flushing the shared pool will often restore performance for a while as it can cause many
small chunks of memory to be coalesced. After the flush there is likely to be an interim dip in performance, as the act of
flushing removes sharable SQL from the shared pool which must then be reloaded and reparsed, and it does nothing to address the underlying cause of the fragmentation. The
command to flush the shared pool is:
ALTER SYSTEM FLUSH SHARED_POOL;
Contrary to reports elsewhere items kept in the shared pool using DBMS_SHARED_POOL.KEEP will NOT be flushed by
this command. Any items (objects or SQL) actually pinned by sessions at the time of the flush will also be left in place.
NB: Flushing the shared pool will flush any cached sequences potentially leaving gaps in the sequence
range. DBMS_SHARED_POOL.KEEP('sequence_name','Q') can be used to KEEP sequences
preventing such gaps.

Using V$ Views (V$SQL and V$SQLAREA)

Note that some of the V$ views have to take out relevant latches to obtain the data to reply to queries. This is notably so for
views against the library cache and SQL area. It is generally advisable to be selective about what SQL is issued against
these views. In particular use of V$SQLAREA can place a great load on the library cache latches. Note that V$SQL can
often be used in place of V$SQLAREA and can have less impact on the latch gets - this is because V$SQLAREA is a
GROUP BY of statements in the shared pool while V$SQL does not GROUP the statements.
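
For example, when investigating a single statement it is cheaper to query V$SQL restricted by HASH_VALUE than to scan V$SQLAREA; a sketch (&hash_value is a placeholder for the hash value of interest):

SELECT sql_text, child_number, parse_calls, executions
  FROM v$sql
 WHERE hash_value = &hash_value;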

MTS, Shared Server and XA

The multi-threaded server (MTS) adds to the load on the shared pool and can contribute to any problems as the User
Global Area (UGA) resides in the shared pool. This is also true of XA sessions in Oracle7 as their UGA is located in the
shared pool. (In Oracle8/8i XA sessions do NOT put their UGA in the shared pool). In Oracle8 the Large Pool can be used
for MTS reducing its impact on shared pool activity - However memory allocations in the Large Pool still make use of the
"shared pool latch". See Note 62140.1 for a description of the Large Pool.

Using dedicated connections rather than MTS causes the UGA to be allocated out of process private
memory rather than the shared pool. Private memory allocations do not use the "shared pool latch" and
so a switch from MTS to dedicated connections can help reduce contention in some cases.

In Oracle9i, MTS was renamed to "Shared Server". For the purposes of the shared pool, the behaviour
is essentially the same.

Useful SQL for looking at Shared Pool problems


This section shows some example SQL that can be used to help find potential issues in the shared pool. The output of these
statements should be spooled to a file.
Note: These statements may add to any latch contention as described in "Using V$ Views (V$SQL and V$SQLAREA)"
above.

• Finding literal SQL

  SELECT substr(sql_text,1,40) "SQL",
         count(*),
         sum(executions) "TotExecs"
    FROM v$sqlarea
   WHERE executions < 5
   GROUP BY substr(sql_text,1,40)
  HAVING count(*) > 30
   ORDER BY 2;

This helps find commonly used literal SQL - See "Eliminating Literal SQL" above.

• Finding the Library Cache hit ratio

  SELECT SUM(PINS) "EXECUTIONS",
         SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
    FROM V$LIBRARYCACHE;

If the ratio of misses to executions is more than 1%, then try to reduce the library cache misses
• Checking hash chain lengths:

  SELECT hash_value, count(*)
    FROM v$sqlarea
   GROUP BY hash_value
  HAVING count(*) > 5;

This should usually return no rows. If there are any HASH_VALUES with high counts (double figures) then you may be
seeing the effects of a bug, or an unusual form of literal SQL statement. It is advisable to drill down and list out all the
statements mapping to the same HASH_VALUE. Eg:

SELECT sql_text FROM v$sqlarea WHERE hash_value= <XXX>;

and if these look the same get the full statements from V$SQLTEXT. It is possible for many literals to map to the same hash
value. Eg: In 7.3 two statements may have the same hash value if a literal value occurs twice in the statement and there are
exactly 32 characters between the occurrences.

• Checking for high version counts:

  SELECT address, hash_value,
         version_count,
         users_opening,
         users_executing,
         substr(sql_text,1,40) "SQL"
    FROM v$sqlarea
   WHERE version_count > 10;

"Versions" of a statement occur where the SQL is character for character identical but the underlying objects or binds etc..
are different as described in "Sharable SQL" above. High version counts can occur in various Oracle8i releases due to
problems with progression monitoring. This can be disabled by setting _SQLEXEC_PROGRESSION_COST to '0' as
described earlier in this note.

• Finding statement/s which use lots of shared pool memory:

  SELECT substr(sql_text,1,40) "Stmt", count(*),
         sum(sharable_mem) "Mem",
         sum(users_opening) "Open",
         sum(executions) "Exec"
    FROM v$sql
   GROUP BY substr(sql_text,1,40)
  HAVING sum(sharable_mem) > <MEMSIZE>;

where MEMSIZE is about 10% of the shared pool size in bytes. This should show if there are similar literal statements, or
multiple versions of a statement, which account for a large portion of the memory in the shared pool.

• Allocations causing shared pool memory to be 'aged' out

  SELECT *
    FROM x$ksmlru
   WHERE ksmlrnum > 0;
Note: This select returns no more than 10 rows and then erases the contents of the X$KSMLRU table so be sure to SPOOL
the output. The X$KSMLRU table shows which memory allocations have caused the MOST memory chunks to be thrown
out of the shared pool since it was last queried. This is sometimes useful to help identify sessions or statements which are
continually causing space to be requested. If a system is well behaved and uses well shared SQL, but occasionally slows
down this select can help identify the cause. Refer to Note 43600.1 for more information on X$KSMLRU.

Issues in various Oracle Releases


These are some important issues which affect performance of the shared pool in various releases:
• Increasing the CPU processing power of each CPU can help reduce shared pool contention problems in all Oracle
releases by decreasing the amount of time each latch is held. A faster CPU is generally better than a second CPU.
• If you have an EVENT parameter set for any reason check with Oracle support that this is not an event that will impact
shared pool performance.
• Ensure that there is no shortage of memory available for the Oracle instance so that there is no risk of SGA memory
being paged out.
eg: On AIX shared pool issues may become visible due to incorrect OS configuration - See Note 316533.1 .

Bug fixes and Enhancements


This is a summary listing of the main bugs and enhancements affecting the shared pool. The 'Fixed' column lists the 4 digit
server releases where the problem / enhancement is fixed - eg: 8062 means fixed in 8.0.6.2, 9000 means the issue has
been fixed for Oracle9i.
See also Note 190077.1.

Bug      Versions  Fixed           Description
1623256  8-90      9000            Identical SQL referencing SCHEMA.SEQUENCE.NEXTVAL not shared by different users
1366837  -90       8063 8171 9000  Cursors referencing a fully qualified FUNCTION are not shared
1484634  -90       8063 8171 9000  Large row cache can cause long shared pool latch waits (OPS only)
1318267  815-90    9000            INSERT AS SELECT may not share SQL when it should
1149820  -817      8062 8162 8170  ENHANCEMENT: Reduced latch gets purging from shared pool
1150143  -817      8062 8162 8170  ENHANCEMENT: Delay purge when bind mismatch
1258708  -817      8170            ENHANCEMENT: Reduce need to get PARENT library cache latch
1348501  8-817     8163 8170       MVIEW refresh unnecessarily invalidates shared cursors
1357233  815-817   8163 8170       ALTER SESSION FORCE PARALLEL PQ/DML/DDL does not share recursive SQL
1193003  815-817   8162 8170       Cursors may not be shared in 8.1 when they should be
1210242  815-817   8162 8170       Cursors not shared if both TIMED_STATISTICS and SQL_TRACE are enabled
986149   7-816     8060 8160       ENHANCEMENT: More freelists for shared pool memory chunks (reduced latch contention)
858015   815-816   8160            Shared pool memory for views higher if QUERY_REWRITE_ENABLED set
918002   815-816   8151 8160       Cursors are not shared if SQL_TRACE or TIMED_STATISTICS is TRUE
888551   815-816   8151 8160       TIMED_STATISTICS can affect cursor sharing / Dump from EXPLAIN or enable/disable SQL_TRACE
1065010  8-817     8062 8162 8170  Access to DC_HISTOGRAM_DEFS from Remote Queries can impact shared pool performance
1115424  -817      8062 8162 8170  Cursor authorization and dependency lists too long - can impact shared pool
1131711  803-8062  8062 8150       SQL from PLSQL using NUMERIC binds may not be shared when it should
1397603  817       82              ORA-4031 / SGA memory leak of PERMANENT memory for buffer handles
1640583  816       8171 8200       ORA-4031 due to leak / cache buffer chain contention from AND-EQUAL access

Historic Notes

The notes here relate to pre-Oracle7.3 releases of Oracle and are included for completeness only:

• In 7.3 PL/SQL was enhanced to use paged executable code, reducing the number of large allocations in the shared
pool and reducing the need for KEEPing.
• In Oracle 7.1.6 to 7.2.3 there are several known problems. See Note 32871.1
• Between Oracle 7.1 and 7.2 the latching mechanism over the library cache changed.
• Some historic bugs:

Bug     Versions  Fixed                     Description
596953  80-81     8044 8052 8060 8150       Excessive shared pool fragmentation due to 2K context area chunk size
724620  700-814   7344 8043 8052 8060       Select from VIEW now uses less shared memory (less latch gets)
633498  7-815     7343 8043 8050 8150       Selecting from some V$ views can make statements unsharable
625806  7X-806    7343 8042 8051 8060 8150  Cursor not shared for a VIEW using FUNCTION / with DBMS_SQL
520708  7X-804    7336 7342 8040            Better handling of small memory chunks in the SGA
High Version Counts

High version counts occur when there are multiple copies of the 'same' statement in the shared pool, but some
factor prevents them from being shared, wasting space
and causing latch contention.

Note 296377.1 Handling and resolving unshared cursors/large version_counts

Troubleshooting Guide to high version_counts

This document is intended to explain how SQL sharing works and give examples of diagnostics
which can be used to help determine why SQL sharing may not occur.

Author: James Cremonini

The information in this document has been reviewed and is current as of 01-JAN-2005.

Instructions for the reader: The Troubleshooting Guide is provided to assist in debugging SQL sharing issues. When
possible, diagnostic tools are included in the document to assist in troubleshooting problems. This document does not
contain bugs/patches as these topics are addressed in the articles referenced at the bottom of this document.
Troubleshooting Summary

1. Introduction: What is shared SQL ?


2. Diagnostics: How do I see the versions and why they are not shared ?
3. Documentation: What do the reasons given in v$SQL_SHARED_CURSOR mean?
4. Best Practices: How do I make SQL more shareable - are there any parameters which can help?
5. Q&A: Are there any times when a high version count is expected even though BINDS are being used?
6. References: What articles, white papers, or manuals should I read for more information on cursor sharing?

Troubleshooting Details

1. What is shared SQL ?

The first thing to remember is that all SQL is implicitly sharable. When a SQL statement is entered, the
RDBMS will create a hash value for the text of the statement and that hash value then helps the RDBMS to
easily find SQL already in the shared pool. It is not in the scope of this article to discuss this in any
great detail, so let's just assume that entering a piece of SQL text results in a hash value being created.

For instance :- 'select count(*) from emp' hashes to the value 4085390015

We now create a parent cursor for this SQL and a single child. It does not matter that a SQL statement
may never be shared - when it is first parsed a parent and a single child are created. The easy way to
think of this is that the PARENT cursor is a representation of the hash value and the child cursor(s)
represent the metadata for that SQL.

What is 'SQL Metadata'?


Metadata is all the information which enables a statement to run. For instance, in the example given above
EMP is owned by SCOTT and therefore has an OBJECT_ID which points to the EMP table owned
by this user. When the user SCOTT logged in, optimizer parameters were initialised in that session for
use by the statement, so these too are used by the optimizer and are therefore metadata. There are other
examples of metadata which will be mentioned further in this document.

Let's say this session logs out and back in again, then runs the same command again (as the
same user). This time we already have the SQL in the shared pool (but we don't know this yet). What
we do is hash the statement and then search for that hash value in the shared pool. If we find it, we can
then search through the children to determine if any of them are usable by us (ie the metadata is the
same). If it is, then we can share that SQL statement.
We would still have one version of that SQL in the shared pool because the metadata enabled the statement to be
shared with the already existent child. The fundamentals are that the parent is not shared;
it is the children which determine shareability.

Now - another user 'TEST' has its own version of EMP. If that user was now to run the select statement
above then what would happen is :-

1. The statement is hashed - it is hashed to the value 4085390015


2. The SQL will be found in the shared pool as it already exists
3. The children are scanned (at this point we have one child)
4. Because the OBJECT_ID of the EMP table owned by TEST is different from the OBJECT_ID of the EMP table owned by
SCOTT we have a 'mismatch'

(Essentially, what happens here is that we have a linked list of children which we traverse, comparing
the metadata of the current SQL with that of all the children. If there were 100 children then we would
scan each of them (looking for a possible mismatch and moving on) until we found one we could share.
If we cannot share any (ie. have exhausted the list of children) then we need to create a new child)

5. We therefore have to create a new child - we now have 1 PARENT and 2 CHILDREN.


2. How do I see the versions and why they are not shared ?

Let's use the example above and take a look at what SQL we can use to see this in the shared pool.

SCOTT runs select count(*) from emp

I can now run the following to see the PARENT statement and its hash value and address

select sql_text, hash_value,address from v$sqlarea where sql_text like 'select count(*)
from emp%';

SQL_TEXT                    HASH_VALUE  ADDRESS
--------------------------  ----------  ----------------
select count(*) from emp    4085390015  0000000386BC2E58

To see the CHILDREN (I expect to see 1 at this point) :-


9i - select * from v$sql_shared_cursor where kglhdpar = '0000000386BC2E58'

10G - select * from v$sql_shared_cursor where address = '0000000386BC2E58'

ADDRESS          KGLHDPAR         U S O O S L S E B P I S T A B D L T R I I R L I O S M U T N F
---------------- ---------------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0000000386BC2D08 0000000386BC2E58 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N

We can see we have a single child (ADDRESS 0000000386BC2D08). The mismatch information (USOOSL etc) is all N
because this is the first child. Now, if I log in as another user and run the same select (select count(*) from emp) and look
again I will get the following output:-

ADDRESS          KGLHDPAR         U S O O S L S E B P I S T A B D L T R I I R L I O S M U T N F
---------------- ---------------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0000000386BC2D08 0000000386BC2E58 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
0000000386A91AA0 0000000386BC2E58 N N N N N N N N N N N N N Y N N N Y N N N N N N N N N N N N N

We can now see the 2nd child (0000000386A91AA0) and also the reasons why it could not be shared with the first (the
'Y's denote a mismatch). The reasons are (1) AUTH_CHECK_MISMATCH and (2) TRANSLATION_MISMATCH. This is
basically because the objects under my new user do not map to those of SCOTT (the current child). So, authentication fails
because the new user cannot access SCOTT's objects, and translation fails because we have different OBJECT_IDs.

3. What do the reasons given in v$SQL_SHARED_CURSOR mean?

Below are the list of reasons given as well as some workable examples (Those denoted by ** are the
ones most often seen) :-

UNBOUND_CURSOR - The existing child cursor was not fully built (in other words, it was not optimized)

SQL_TYPE_MISMATCH - The SQL type does not match the existing child cursor

**OPTIMIZER_MISMATCH - The optimizer environment does not match the existing child cursor
select count(*) from emp; -> 1 PARENT, 1 CHILD
alter session set optimizer_mode=ALL_ROWS
select count(*) from emp; -> 1 PARENT, 2 CHILDREN (The optimizer mode has changed and therefore the existing
child cannot be reused)

(The same applies with events - if I turned on tracing with 10046 than I would get the OPTIMIZER_MISMATCH again and a
3rd child)
OUTLINE_MISMATCH - The outlines do not match the existing child cursor
If my user had previously created stored outlines for this command and they were stored in separate categories (say
"OUTLINES1" and "OUTLINES2") running:-

alter session set use_stored_outlines = OUTLINES1;


select count(*) from emp;
alter session set use_stored_outlines = OUTLINES2;
select count(*) from emp; --> Would create a 2nd child as the outline used is different than the first run.
STATS_ROW_MISMATCH - The existing statistics do not match the existing child cursor

Check that 10046/sql_trace is not set on all sessions as this can cause this.

LITERAL_MISMATCH - Non-data literal values do not match the existing child cursor

SEC_DEPTH_MISMATCH - Security level does not match the existing child cursor

EXPLAIN_PLAN_CURSOR - The child cursor is an explain plan cursor and should not be shared

Explain plan statements will generate a new child by default - the mismatch will be this.
BUFFERED_DML_MISMATCH - Buffered DML does not match the existing child cursor

PDML_ENV_MISMATCH - PDML environment does not match the existing child cursor

INST_DRTLD_MISMATCH - Insert direct load does not match the existing child cursor

SLAVE_QC_MISMATCH -The existing child cursor is a slave cursor and the new one was issued by the coordinator

(or, the existing child cursor was issued by the coordinator and the new one is a slave cursor).

TYPECHECK_MISMATCH - The existing child cursor is not fully optimized

AUTH_CHECK_MISMATCH - Authorization/translation check failed for the existing child cursor

The user does not have permission to access the object in any previous version of the cursor. A typical example would be
where each user has its own copy of a table
**BIND_MISMATCH - The bind metadata does not match the existing child cursor

variable a varchar2(100);
select count(*) from emp where ename = :a -> 1 PARENT, 1 CHILD
variable a varchar2(400);
select count(*) from emp where ename = :a -> 1 PARENT, 2 CHILDREN (The bind 'a' has now changed in
definition)

DESCRIBE_MISMATCH - The typecheck heap is not present during the describe for the child cursor

LANGUAGE_MISMATCH - The language handle does not match the existing child cursor

TRANSLATION_MISMATCH - The base objects of the existing child cursor do not match

The definition of the object does not match any current version. Usually this is indicative of the same issue as
"AUTH_CHECK_MISMATCH" where the object is different anyway
ROW_LEVEL_SEC_MISMATCH - The row level security policies do not match

INSUFF_PRIVS - Insufficient privileges on objects referenced by the existing child cursor

INSUFF_PRIVS_REM - Insufficient privileges on remote objects referenced by the existing child cursor

REMOTE_TRANS_MISMATCH - The remote base objects of the existing child cursor do not match
USER1: select count(*) from table@remote_db
USER2: select count(*) from table@remote_db (Although the SQL is identical, the dblink pointed to by
remote_db may be a private dblink which resolves
to a different object altogether)
LOGMINER_SESSION_MISMATCH

INCOMP_LTRL_MISMATCH

OVERLAP_TIME_MISMATCH - error_on_overlap_time mismatch

SQL_REDIRECT_MISMATCH - sql redirection mismatch

MV_QUERY_GEN_MISMATCH - materialized view query generation

USER_BIND_PEEK_MISMATCH - user bind peek mismatch

TYPCHK_DEP_MISMATCH - cursor has typecheck dependencies

NO_TRIGGER_MISMATCH - no trigger mismatch

FLASHBACK_CURSOR - No cursor sharing for flashback

ANYDATA_TRANSFORMATION - anydata transformation change

INCOMPLETE_CURSOR - incomplete cursor

TOP_LEVEL_RPI_CURSOR - top level/rpi cursor

DIFFERENT_LONG_LENGTH - different long length

LOGICAL_STANDBY_APPLY - logical standby apply mismatch

DIFF_CALL_DURN - different call duration

BIND_UACS_DIFF - bind uacs mismatch

PLSQL_CMP_SWITCHS_DIFF - plsql compiler switches mismatch

CURSOR_PARTS_MISMATCH - cursor-parts executed mismatch

STB_OBJECT_MISMATCH - STB object different (now exists)

ROW_SHIP_MISMATCH - row shipping capability mismatch

PQ_SLAVE_MISMATCH - PQ slave mismatch

Check that you want to be using PX with this reason code, as the problem could be caused by running lots of small SQL
statements which do not really need PX. If you are on < 11i you may be hitting Bug 4367986

TOP_LEVEL_DDL_MISMATCH - top-level DDL cursor

MULTI_PX_MISMATCH - multi-px and slave-compiled cursor

BIND_PEEKED_PQ_MISMATCH - bind-peeked PQ cursor

MV_REWRITE_MISMATCH - MV rewrite cursor

ROLL_INVALID_MISMATCH - rolling invalidation window exceeded

I suspect this can occur when you see a mix of this reason and some other one in v$sql_shared_cursor, together with a
library cache latch issue. My suspicion is this is as a result of that 'other' reason so you should address that first to relieve
the latch.

OPTIMIZER_MODE_MISMATCH - optimizer mode mismatch

PX_MISMATCH - parallel query mismatch

MV_STALEOBJ_MISMATCH - mv stale object mismatch

FLASHBACK_TABLE_MISMATCH - flashback table mismatch

LITREP_COMP_MISMATCH - literal replacement compilation mismatch


4. What further tracing is available ?

Solution:
In 10G it is possible to use CURSORTRACE to aid the investigation of why cursors are not being shared. This event should
only be used under the guidance of support and the resultant trace file is undocumented. To get the trace for a particular
SQL statement you first of all need to get the hash_value (See the above select from v$sqlarea). You then set the trace on
using:-

alter system set events
'immediate trace name cursortrace level 577, address hash_value';

(levels 578 and 580 can be used for higher level tracing: 577 = level 1, 578 = level 2, 580 = level 3)

This will write a trace file to user_dump_dest each time we try to reuse the cursor.

To turn off tracing use:-

alter system set events
'immediate trace name cursortrace level 2147483648, address 1';

Please note: Bug 5555371 exists in 10.2 (fixed in 10.2.0.4) where cursor trace cannot be fully turned off and single line
entries will still be made to the trace file as a result. The workaround is to restart the instance. How invasive this bug is
depends on the executions of the cursor (and the size of the resultant trace file additions).


5. Are there any times when a high version count is expected even though BINDS are being
used?

Solution:
Consider the following where cursor_sharing=SIMILAR

select /* TEST */ * from emp where sal > 100;


select /* TEST */ * from emp where sal > 101;
select /* TEST */ * from emp where sal > 102;
select /* TEST */ * from emp where sal > 103;
select /* TEST */ * from emp where sal > 104;

SELECT sql_text,version_count,address
FROM V$SQLAREA
WHERE sql_text like 'select /* TEST */%';

SELECT * FROM V$SQL_SHARED_CURSOR WHERE kglhdpar = '&my_addr';


You will see several versions, each with no obvious reason for not being shared.
Explanation:
One of the cursor sharing criteria when literal replacement is enabled with cursor_sharing set to SIMILAR is that the bind value
should match the initial bind value if the execution plan is going to change depending on the value of the literal. The reason
for this is that we _might_ get a sub-optimal plan if we use the same cursor. This would typically happen when, depending on
the value of the literal, the optimizer is going to choose a different plan. Thus in this test case we have a predicate with '>';
if this were an equality we would always share the same child cursor. If application developers are prepared to live with a
sub-optimal plan and save on memory, then they need to set the parameter to FORCE.

"The difference between SIMILAR and FORCE is that SIMILAR forces similar statements to share the SQL area without
deteriorating execution plans.
Setting CURSOR_SHARING to FORCE forces similar statements to share the SQL area potentially deteriorating execution
plans."

It is also possible to tell from a 10046 trace (level 4/12 - BINDS) whether a bind is considered to be unsafe.

The flag oacfl2 in 9i and fl2 in 10g will show if a variable is unsafe.

BINDS #2:
bind 0: dty=2 mxl=22(04) mal=00 scl=00 pre=00 oacflg=10 oacfl2=500 size=24
offset=0
bfp=1036d6408 bln=22 avl=04 flg=09
value=16064
bind 1: dty=2 mxl=22(04) mal=00 scl=00 pre=00 oacflg=10 oacfl2=500 size=24
offset=0
bfp=1036d4340 bln=22 avl=04 flg=09


X: What articles, white papers, or manuals should I read for more information on cursor
sharing?

Solution:
Note 377847.1 Unsafe Peeked Bind Variables and Histograms



Log File Sync waits

Log file sync waits occur when sessions wait for redo data to be written to disk.
Typically this is caused by slow writes or committing too frequently in the application.

See: Note 34592.1 WAITEVENT: "log file sync" Reference Note

It is recommended that customers experiencing log file sync issues on 10.2.0.3 proactively apply the patch for
Bug 5896963

"log file sync" Reference Note


This is a reference note for the wait event "log file sync" which includes the following subsections:

• Brief definition
• Individual wait details (eg: For waits seen in <View:V$SESSION_WAIT>)
• Systemwide wait details (eg: For waits seen in <View:V$SYSTEM_EVENT>)
• Reducing waits / wait times
• Data Guard Perspective
• Known bugs

See Note 61998.1 for an introduction to Wait Events.

Definition:

• Versions:7.0 - 11.1 Documentation: 11g 10g


• When a user session (foreground process) COMMITs (or rolls back), the session's redo information needs to be flushed
to the redo logfile. The user session will post the LGWR to write all redo required from the log buffer to the redo log file.
When the LGWR has finished it will post the user session. The user session waits on this wait event while waiting for
LGWR to post it back to confirm all redo changes are safely on disk.

This may be described further as the time user session/foreground process spends waiting for redo to be flushed to
make the commit durable. Therefore, we may think of these waits as commit latency from the foreground process (or
commit client generally).

See Reducing Waits section below for more detailed breakdown of this wait event.

("log file sync" also applies to ROLLBACK/UNDO in that once the rollback/undo is complete the end of the
rollback/undo operation requires all changes to complete the rollback/undo to be flushed to the redo log)

Individual Waits:

Parameters:

• P1 = buffer#
• P2 = Not used
• P3 = Not used

• buffer#

All changes up to this buffer number (in the log buffer) must be flushed to disk and the writes confirmed to ensure that the
transaction is committed, and will remain committed upon an instance crash. Hence the wait is for LGWR to flush up to this
buffer#.
Wait Time:

The wait is entirely dependent on LGWR to write out the necessary redo blocks and confirm completion back to the user
session. The wait time includes the writing of the log buffer and the post. The waiter times out and increments the sequence
number every second while waiting.

Finding Blockers:

If a session continues to wait on the same buffer# then the SEQ# column of <View:V$SESSION_WAIT> should
increment every second. If not then the local session has a problem with wait event timeouts. If the SEQ# column is
incrementing then the blocking process is the LGWR process. Check to see what LGWR is waiting on as it may be stuck.
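
As a sketch of this check, the first query below lists sessions currently waiting on "log file sync" (and the buffer# each is waiting for), and the second shows what the LGWR process itself is currently waiting on:

SELECT sid, seq#, p1 "buffer#", seconds_in_wait
  FROM v$session_wait
 WHERE event = 'log file sync';

SELECT s.sid, w.event, w.state, w.seconds_in_wait
  FROM v$bgprocess b, v$session s, v$session_wait w
 WHERE b.name = 'LGWR'
   AND s.paddr = b.paddr
   AND w.sid = s.sid;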

Systemwide Waits:
Systemwide figures for waits on "log file sync" show the time spent waiting for COMMITs to complete. If this is significant
then there may be a problem with LGWR's ability to flush redo out quickly enough. One can also look at:

• "log file parallel write" waits for LGWR (See Note 34583.1)
• "user commits" statistic shows the number of commits.

Reducing Waits / Wait times:


Here are some general tuning tips to help you reduce waits on "log file sync":

• Tune LGWR to get good throughput to disk . eg: Do not put redo logs on RAID 5.
• If there are lots of short duration transactions see if it is possible to BATCH transactions together so there are fewer
distinct COMMIT operations. Each commit has to have it confirmed that the relevant REDO is on disk. Although
commits can be "piggybacked" by Oracle reducing the overall number of commits by batching transactions can have a
very beneficial effect.
• See if any of the processing can use the COMMIT NOWAIT option (be sure to understand the semantics of this before
using it); a syntax sketch is shown after this list.
• See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options.
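
As a sketch of the COMMIT NOWAIT option mentioned in the list above (the asynchronous commit syntax is available from Oracle 10.2 onwards; be sure the application can tolerate the weaker durability guarantee before using it):

-- The session does not wait for LGWR, so no "log file sync" wait is incurred
COMMIT WRITE NOWAIT;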

For more detailed analysis for reducing waits on LOG FILE SYNC please see below:

The overall wait time for LOG FILE SYNC may be broken down into subsections or components.
If your system still shows high "log file sync" wait times after ensuring the general tuning tips above are completed, you
should break down the total wait time into the individual components, then tune those components that make up the largest
time.

The log file sync wait may be broken down into the following components:
1. Wakeup LGWR if idle
2. LGWR gathers the redo to be written and issue the I/O
3. Time for the log write I/O to complete
4. LGWR I/O post processing
5. LGWR posting the foreground/user session that the write has completed
6. Foreground/user session wakeup

Tuning advice based on the log file sync component breakdown above:

Steps 2 and 3 are accumulated in the "redo write time" statistic (i.e. as found under the Statistics section of Statspack and
AWR).
Step 3 is the "log file parallel write" wait event (Note 34583.1: "log file parallel write" Reference Note).
Steps 5 and 6 may become very significant as the system load increases. This is because even after the foreground has
been posted it may take some time for the OS to schedule it to run. This may require monitoring at the O/S level.
Data Guard Perspective:
For Data Guard with synchronous (SYNC) transport and commit WAIT defaults, the above tuning steps still apply, except
step 3 also includes the time for the network write and the RFS/redo write to the standby redo logs.
This wait event and how it applies to Data Guard is explained in detail in the MAA OTN white paper:
Note 387174.1:MAA - Data Guard Redo Transport and Network Best Practices.
Known Bugs
Bug      Fixed                     Description
6193945  10.2.0.5, 11.1.0.7, 11.2  High LGWR CPU use and long 'log file sync' latency in RAC
5147386  10.2.0.5, 11.1.0.6        Long waits on "log file sync" / random ORA-27152 "attempt to post process failed"
6319685  10.2.0.4, 11.1.0.7, 11.2  LGWR posts do not scale on some platforms
5087592  10.2.0.4, 11.1.0.6        "log file sync" waits from read only commits
5896963  10.2.0.4, 11.1.0.6        High LGWR CPU and longer "log file sync" with fix for bug 5065930
5061068  10.2.0.3, 11.1.0.6        RAC using "broadcast on commit" can see delayed commit times
5065930  10.2.0.3, 11.1.0.6        "log file sync" timeouts can occur
2640686  9.2.0.5, 10.1.0.2         Long waits for "log file sync" with broadcast SCN in RAC
3311210  9.2.0.5, 10.1.0.2         Unnecessary 0.5 second waits for "Broadcast on commit" SCN scheme
2663122  9.2.0.5, 10.1.0.2         Unnecessarily long waits on "log file sync" can occur
Buffer Busy waits/Cache Buffers Chains Latch waits

Buffer Busy waits occur when a session wants to access a database block in the buffer cache but it cannot as the
buffer is "busy"
Cache Buffers Chains Latch waits are caused by contention where multiple sessions are waiting to read the same
block.

Typical solutions are:-

o Look for SQL that accesses the blocks in question and determine if the repeated reads are necessary.
o Check for suboptimal SQL (this is the most common cause of these events) - look at the execution plan for the
  SQL being run and try to reduce the gets per execution, which will minimise the number of blocks being accessed
  and therefore reduce the chances of multiple sessions contending for the same block.

Note 34405.1 WAITEVENT: "buffer busy waits" Reference Note

"buffer busy waits" Reference Note


This is a reference note for the wait event "buffer busy waits" which includes the following subsections:

• Brief definition
• Individual wait details (eg: For waits seen in <View:V$SESSION_WAIT>)
• Systemwide wait details (eg: For waits seen in <View:V$SYSTEM_EVENT>)
• Reducing waits / wait times

See Note 61998.1 for an introduction to Wait Events.

Definition:

• Versions:7.0 - 10.2 Documentation: 9.0


• This wait happens when a session wants to access a database block in the buffer cache but it cannot as the buffer is
"busy". The two main cases where this can occur are:
1. Another session is reading the block into the buffer
2. Another session holds the buffer in an incompatible mode to our request

Individual Waits:

Parameters:

• P1 = file# (Absolute File# in Oracle8 onwards)


• P2 = block#
• P3 = id (Reason Code)/Block Class# in 10g

• file# (Absolute File# in Oracle8 onwards)

This is the file number of the data file that contains the block that the waiting session wants.

• block#

This is the block number in the above file# that the waiting session wants access to.
See Note 181306.1 to determine the tablespace, filename and object for this file#,block# pair; a sample query is also shown after the reason code table below.
• id (Reason Code)

The buffer busy wait event is called from different places in the Oracle code. Each place in the code uses a different
"Reason Code" . These codes can differ between versions thus:

Versions     Values used

7.1 - 8.0.6  Uses one set of ID codes (mostly >1000)
8.1.5        8.1.5 does not include a value for P3 when waiting
8.1.6 - 9.2  Uses a different set of ID codes (100-300)
10.1+        Uses the block class

Buffer Busy Waits IDs and Meanings

Reason codes are shown as <=8.0.6 / 8.1.6-9.2 (from 10.1 onwards P3 is the block class# rather than a reason code):

0    / 0    A block is being read.
1003 / 100  We want to NEW the block but the block is currently being read by
            another session (most likely for undo).
1007 / 200  We want to NEW the block but someone else is using the current copy
            so we have to wait for them to finish.
1010 / 230  Trying to get a buffer in CR/CRX mode, but a modification has started
            on the buffer that has not yet been completed.
1012 / -    A modification is happening on a SCUR or XCUR buffer, but has not yet
            completed.
1012 / 231  CR/CRX scan found the CURRENT block, but a modification has started
            on the buffer that has not yet been completed.
1013 / 130  Block is being read by another session and no other suitable block
            image was found (e.g. a CR version), so we wait until the read is
            completed. This may also occur after a buffer cache assumed deadlock:
            the kernel cannot get a buffer within a certain amount of time and
            assumes a deadlock, so it reads the CR version of the block. This
            should not have a negative impact on performance - it basically
            replaces a read from disk with a wait for another process to read it
            from disk, as the block needs to be read one way or another.
1014 / 110  We want the CURRENT block either shared or exclusive but the block is
            being read into cache by another session, so we have to wait until
            their read() is completed.
1014 / 120  We want to get the block in current mode but someone else is
            currently reading it into the cache. Wait for them to complete the
            read. This occurs during buffer lookup.
1016 / 210  The session wants the block in SCUR or XCUR mode. If this is a buffer
            exchange or the session is in discrete TX mode, the session waits the
            first time and the second time escalates the block as a deadlock, so
            it does not show up as waiting very long. In this case the statistic
            "exchange deadlocks" is incremented and we yield the CPU for the
            "buffer deadlock" wait event.
1016 / 220  During buffer lookup for a CURRENT copy of a buffer we have found the
            buffer but someone holds it in an incompatible mode so we have to
            wait.

Wait Time:

Normal wait time is 1 second. If the session was waiting for an exclusive buffer during its last wait then it waits 3
seconds for this wait. The session will keep timing out and waiting until it acquires the buffer.

Finding Blockers:
Finding the blocking process can be quite difficult as the information required is not externalised. If P3 (Reason Code)
shows that the "buffer busy wait" is waiting for a block read to complete then the blocking session is likely to be waiting on
an IO wait (eg: "db file sequential read" or "db file scattered read") for the same file# and block#.
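
If required, candidate blockers can be listed directly (a sketch; &FILE_ID and &BLOCK_ID are the P1/P2 values of the
waiting session - note that a 'db file scattered read' covers a range of blocks starting at P2, so an exact match on
P2 is only approximate):
SELECT sid, event, p1 "File", p2 "Block", wait_time
  FROM v$session_wait
 WHERE event IN ('db file sequential read', 'db file scattered read')
   AND p1 = &FILE_ID
   AND p2 = &BLOCK_ID;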

If the wait is due to the buffer being held in an incompatible mode then it should be freed very soon. If not then it
is advisable to contact Oracle Support and get 3 SYSTEMSTATE dumps at one minute intervals as the blocking
session may be spinning. (Look for ACTIVE sessions with high CPU utilisation).

Systemwide Waits:
If the TIME spent waiting for buffers is significant then it is best to determine which segment/s is/are suffering from
contention. The "Buffer busy wait statistics" section of the Bstat/estat or STATSPACK reports shows which block type/s are
seeing the most contention. This information is derived from <View:V$WAITSTAT> which can be queried in isolation:
SELECT time, count, class
FROM V$WAITSTAT
ORDER BY time,count
;
This shows the class of block with the most waits at the BOTTOM of the list.
Oracle Support may also request that the following query be run to show where the block is held from when a wait occurs:
SELECT kcbwhdes, why0+why1+why2 "Gets", "OTHER_WAIT"
FROM x$kcbsw s, x$kcbwh w
WHERE s.indx=w.indx
and s."OTHER_WAIT">0
ORDER BY 3
;
Note: "OTHER_WAIT" is "OTHER WAIT" in Oracle8i (a space rather than an underscore)
Additional information regarding which files contain the blocks being waited for can be obtained from the internal
<View:X$KCBFWAIT> thus:
SELECT count, file#, name
FROM x$kcbfwait, v$datafile
WHERE indx + 1 = file#
ORDER BY count
;
This shows the file/s with the most waits (at the BOTTOM of the list), so by combining the above information we know
which block type/s in which file/s are causing waits. The segments in each file can be seen using a query like:
SELECT distinct owner, segment_name, segment_type
FROM dba_extents
WHERE file_id= &FILE_ID
;
If there are a large number of segments of the type listed then monitoring <View:V$SESSION_WAIT> may help isolate
which object is causing the waits.
Eg: Repeatedly run the following statement and collect the output. After a period of time sort the results to see which file &
blocks are showing contention:
SELECT p1 "File", p2 "Block", p3 "Reason"
FROM v$session_wait
WHERE event='buffer busy waits'
;
Note:
In the above query there is no reference to WAIT_TIME as you are not interested in whether a session is currently waiting or
not, just what buffers are causing waits.

If a particular block or range of blocks keep showing waits you can try to isolate the object using the queries in
Note 181306.1.
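
A minimal sketch of that lookup (&FILE_ID and &BLOCK_ID are the file# and block# collected above; scanning
DBA_EXTENTS can take some time on a large database):
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = &FILE_ID
   AND &BLOCK_ID BETWEEN block_id AND block_id + blocks - 1;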

One can also look at:

• Capturing session trace and noting the "buffer busy waits" may help - See Note 62160.1.
Reducing Waits / Wait times:
As buffer busy waits are due to contention for particular blocks, you cannot take any action until you know which blocks
are being contended for and why. Eliminating the cause of the contention is the best option. Note that "buffer busy waits"
for data blocks are often due to several processes repeatedly reading the same blocks (eg: if many sessions scan the same
index): the first session processes the blocks that are already in the buffer cache quickly, but then a block has to be
read from disk; the other sessions (scanning the same index) quickly 'catch up' and want the block that is currently being
read from disk, so they wait on the buffer because someone is already reading it in.

The following hints may be useful for particular types of contention - these are things that MAY reduce
contention for particular situations:

Block Type        Possible Actions

data blocks       Eliminate HOT blocks from the application. Check for repeatedly scanned / unselective
                  indexes. Change PCTFREE and/or PCTUSED. Check for 'right-hand-indexes' (indexes that get
                  inserted into at the same point by many processes). Increase INITRANS. Reduce the number
                  of rows per block (see the sketch after this table).

segment header    Increase the number of FREELISTs. Use FREELIST GROUPs (even in single instance this can
                  make a difference).

freelist blocks   Add more FREELISTs. In the case of Parallel Server make sure that each instance has its
                  own FREELIST GROUP(s).

undo header       Add more rollback segments.
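
As an illustration of the 'data blocks' suggestions, the following is a sketch only (the table name and the values
used are assumptions, not recommendations) of rebuilding a hot table with more ITL slots and fewer rows per block.
ALTER TABLE with new PCTFREE/INITRANS values only affects blocks formatted from then on, which is why a rebuild is
usually needed for existing data:

CREATE TABLE hot_table_new
  PCTFREE 50    -- leave more free space so fewer rows share each block
  INITRANS 10   -- pre-allocate more ITL slots per block
  AS SELECT * FROM hot_table;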

Related:
Bug can cause "buffer busy waits" and latch contention in 817/901 Note 176129.1
Tracing User sessions Note 62160.1

Note 42152.1 LATCH: CACHE BUFFERS CHAINS


Subject: LATCH: CACHE BUFFERS CHAINS
Doc ID: 42152.1 Type: REFERENCE
Modified Date : 20-OCT-2005 Status: PUBLISHED

Latch: cache buffers chains
Identifier:
Registered In:

Description:
Blocks in the buffer cache are placed on linked lists
(cache buffer chains) which hang off a hash table.
The hash chain that a block is placed on is based on the DBA
and CLASS of the block. Each hash chain is protected by a 
single child latch. Processes need to get the relevant latch
to allow them to scan a hash chain for a buffer so that the
linked list does not change underneath them.

Contention: Contention for these latches can be caused by:

­ Very long buffer chains.
  There is a known problem that can result in long
  buffer chains ­ 
­ very very heavy access to a single block.
  This would require the application to be reviewed.
To identify the heavily accessed buffer chain look at
the latch stats for this latch under <View:V$Latch_Children>
and match this to <View:X$BH>. 
             
                *** IMPORTANT: As of Oracle8i there are many hash buckets
                               to each latch and so there will be lots
                               of buffers under each latch. 
                               In 8i the steps below will not help much.

Eg: Given ADDR from V$LATCH_CHILDREN for a heavily contended
    child latch:
select dbafil, dbablk, class, state
   from X$BH where HLADDR='address of latch';

One of these is 'potentially' a hot block in the database.
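
A combined sketch of that match (X$ views require SYSDBA access; the TCH "touch count"
column exists from 8i onwards and indicates how often a buffer has been accessed;
&SLEEP_THRESHOLD is an assumed cut-off):

select ch.addr, ch.sleeps, bh.file#, bh.dbablk, bh.class, bh.tch
  from v$latch_children ch, x$bh bh
 where ch.name = 'cache buffers chains'
   and bh.hladdr = ch.addr
   and ch.sleeps > &SLEEP_THRESHOLD
 order by ch.sleeps, bh.tch;

The buffers listed under the child latches with the most sleeps (and with the
highest touch counts) are the candidate hot blocks.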

   
  **Please see Note 163424.1 How To Identify a Hot Block Within The Database 
    to correctly identify this issue

    Once the object/table is found you can reduce the number of blocks requested
    on the particular object/table by redesigning the application or by
    spreading the hits in the buffer cache over different hash chains.
    You can achieve this by implementing PARTITIONING and storing segments of
    the same table/object in different files, as sketched below.
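
    As a sketch only (the ORDERS table and ORDER_ID key are illustrative assumptions),
    hash partitioning spreads the rows - and therefore the buffer gets - across several
    segments which can be stored in different tablespaces/files:

    CREATE TABLE orders_part
      PARTITION BY HASH (order_id)
      PARTITIONS 4
      STORE IN (data_ts1, data_ts2, data_ts3, data_ts4)  -- tablespace names are assumptions
      AS SELECT * FROM orders;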
  
  *NOTE*  IF YOU ARE RUNNING 8.1.7:
   
   Please see Note 176129.1 ALERT: LATCH FREE And FREE_BUFFER_WAITS 
                                  Cause Performance Degradation/Hang

Note 155971.1 Ext/Pub Resolving Intense and "Random" Buffer Busy Wait Performance Problems:
Note 163424.1 Ext/Pub How To Identify a Hot Block Within The Database Buffer Cache.:
Enqueue waits

TX - Note 62354.1 TX Transaction locks - Example wait scenarios

Subject: TX Transaction locks - Example wait scenarios


Doc ID: 62354.1 Type: TROUBLESHOOTING
Modified Date : 04-AUG-2008 Status: PUBLISHED

Introduction
~~~~~~~~~~~~
  This short article gives examples of TX locks and the waits which can 
  occur in various circumstances. Often such waits will go unnoticed unless
  they are of a long duration or when they trigger a deadlock scenario (which
  raises an ORA­60 error).

  The examples here demonstrate fundamental locking scenarios which should
  be understood by application developers and DBA's alike. 
  The examples require select privilege on the V$ views.

Useful SQL statements 
~~~~~~~~~~~~~~~~~~~~~
  If you encounter a lock related hang scenario the following SQL statements
  can be used to help isolate the waiters and blockers:

    Show all sessions waiting for any lock:

select event,p1,p2,p3 from v$session_wait 
 where wait_time=0 and event='enqueue';

    Show sessions waiting for a TX lock:

select * from v$lock where type='TX' and request>0;

    Show sessions holding a TX lock:

select * from v$lock where type='TX' and lmode>0;
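
    Show each waiting session together with the session blocking it (a sketch;
    the ID1/ID2 pair identifies the transaction being waited for):

select w.sid waiter_sid, h.sid holder_sid, w.id1, w.id2, w.request
  from v$lock w, v$lock h
 where w.type='TX' and w.request>0
   and h.type='TX' and h.lmode>0
   and w.id1=h.id1 and w.id2=h.id2;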

What is a TX lock ?
~~~~~~~~~~~~~~~~~~~
  A TX lock is acquired when a transaction initiates its first change and is 
  held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
  a queuing mechanism so that other sessions can wait for the transaction to
  complete. The lock name (ID1 and ID2) of the TX lock reflect the transaction
  ID of the active transaction.

Example Tables
~~~~~~~~~~~~~~
  The lock waits which can occur are demonstrated using the following
  tables. Connect as SCOTT/TIGER or some dummy user to set up the test
  environment using the following SQL:

    DROP TABLE tx_eg;
    CREATE TABLE tx_eg ( num number, txt varchar2(10), sex varchar2(10) )
      INITRANS 1 MAXTRANS 1;
    INSERT into tx_eg VALUES ( 1, 'First','FEMALE' );
    INSERT into tx_eg VALUES ( 2, 'Second','MALE' );
    INSERT into tx_eg VALUES ( 3, 'Third','MALE' );
    INSERT into tx_eg VALUES ( 4, 'Fourth','MALE' );
    INSERT into tx_eg VALUES ( 5, 'Fifth','MALE' );
    COMMIT;

  In the examples below three sessions are required: 

Ses#1  indicates the TX_EG table owners first session
Ses#2  indicates the TX_EG table owners second session
DBA  indicates a SYSDBA user with access to <View:V$LOCK>

  The examples covered below include:

Waits due to Row being locked by an active Transaction
Waits due to Unique or Primary Key Constraint enforcement
Waits due to Insufficient 'ITL' slots in the Block
Waits due to rows being covered by the same BITMAP index fragment

Waits due to Row being locked by an active Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  When a session updates a row in a table the row is locked by the sessions
  transaction. Other users may SELECT that row and will see the row as it was
  BEFORE the UPDATE occurred. If another session wishes to UPDATE the same
  row it has to wait for the first session to commit or rollback. The 
  second session waits for the first sessions TX lock in EXCLUSIVE mode.

  Eg:
Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Garbage' where num=1;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST 
 from v$lock where type='TX';

SID        TY ID1        ID2        LMODE      REQUEST
­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
         8 TX     131075        597          6          0
        10 TX     131075        597          0          6

> This shows SID 10 is waiting for the TX lock held by SID 8 and it
> wants the lock in exclusive mode (as REQUEST=6).

The select below is included to demonstrate that a session waiting
on a lock will show as waiting on an 'enqueue' in V$SESSION_WAIT
and that the values of P1RAW, P2 and P3 indicate the actual lock
being waited for. When using Parallel Server the EVENT will be
'DFS enqueue lock acquisition' rather than 'enqueue'.
This select will be omitted from the following examples.

DBA: select sid,p1raw, p2, p3
  from v$session_wait 
 where wait_time=0 and event='enqueue';
SID        P1RAW    P2         P3
­­­­­­­­­­ ­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
        10 54580006     131075        597
>    ~~~~  ~~ ~~~~~~       ~~~
>    type|mode       id1       id2
>     T X   6 131075       597

The next select shows the object_id and the exact row that the
session is waiting for. This information is only valid in V$SESSION
when a session is waiting due to a row level lock. The statement
is only valid in Oracle 7.3 onwards. As SID 10 is the waiter above
   then this is the session to look at in V$SESSION:

DBA: select ROW_WAIT_OBJ#,
       ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW#
       from v$session
      where sid=10;

ROW_WAIT_O ROW_WAIT_F ROW_WAIT_B ROW_WAIT_R
­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
      3058          4       2683          0

> The waiter is waiting for the TX lock in order to lock row 0
> in file 4, block 2683 of object 3058.
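
> If required, the exact row can be fetched back (a minimal sketch, valid from
> Oracle8 onwards and assuming ROW_WAIT_FILE# equals the relative file number of
> the object; in this example object 3058 is the TX_EG table):

DBA: select owner, object_name from dba_objects where object_id = 3058;
DBA: select * from scott.tx_eg
      where rowid = dbms_rowid.rowid_create(1, 3058, 4, 2683, 0);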

Ses#1: rollback;
Ses#2: rollback;

Waits due to Unique or Primary Key Constraint enforcement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  If a table has a primary key constraint, a unique constraint
  or a unique index then the uniqueness of the column/s referenced by
  the constraint is enforced by a unique index. If two sessions try to 
  insert the same key value the second session has to wait to see if an
  ORA­0001 should be raised or not.

  Eg: 
Ses#1:  ALTER TABLE tx_eg ADD CONSTRAINT tx_eg_pk PRIMARY KEY( num );
Ses#1: insert into tx_eg values (10,'New','MALE');
Ses#2: insert into tx_eg values (10,'OtherNew',null);
        DBA:    select SID,TYPE,ID1,ID2,LMODE,REQUEST
                 from v$lock where type='TX';

SID        TY ID1        ID2        LMODE      REQUEST
­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
         8 TX     196625         39          6          0
        10 TX     262155         65          6          0
        10 TX     196625         39          0          4

This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). SID 10 holds a TX lock 
for its own transaction.
Ses#1: commit;
Ses#2:  ORA­00001: unique constraint (SCOTT.TX_EG_PK) violated
Ses#2: rollback;

Waits due to Insufficient 'ITL' slots in a Block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Oracle keeps note of which rows are locked by which transaction in an area
  at the top of each data block known as the 'interested transaction list'.
  The number of ITL slots in any block in an object is controlled by
  the INITRANS and MAXTRANS attributes. INITRANS is the number of slots
  initially created in a block when it is first used, while MAXTRANS places
  an upper bound on the number of entries allowed. Each transaction which
  wants to modify a block requires a slot in this 'ITL' list in the block.

  MAXTRANS places an upper bound on the number of concurrent transactions
  which can be active at any single point in time within a block.

  INITRANS provides a minimum guaranteed 'per­block' concurrency.

  If more than INITRANS but less than MAXTRANS transactions want to be 
  active concurrently within the same block then the ITL list will be extended
  BUT ONLY IF THERE IS SPACE AVAILABLE TO DO SO WITHIN THE BLOCK.

  If there is no free 'ITL' then the requesting session will wait on one
  of the active transaction locks in mode 4.

  Eg:   Ses#1:  update tx_eg set txt='Garbage' where num=1;
        Ses#2:  update tx_eg set txt='Different' where num=2;
        DBA:    select SID,TYPE,ID1,ID2,LMODE,REQUEST
                 from v$lock where type='TX';

SID        TY ID1        ID2        LMODE      REQUEST
­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
         8 TX     327688         48          6          0
        10 TX     327688         48          0          4

This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). 

Ses#1: commit;
Ses#2: commit;
Ses#1: ALTER TABLE tx_eg MAXTRANS 2;
        Ses#1:  update tx_eg set txt='First' where num=1;
        Ses#2:  update tx_eg set txt='Second' where num=2;

Both rows update as there is space to grow the ITL list to 
accommodate both transactions.

Ses#1: commit;
Ses#2: commit;

Also from 9.2 you can check the ITL Waits in v$segment_statistics 
with a query like :
     SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
     FROM v$segment_statistics t
     WHERE t.STATISTIC_NAME = 'ITL waits' 
     AND t.VALUE > 0;

If need be, increase INITRANS and MAXTRANS.
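
A sketch of doing that for the example table (assuming the TX_EG_PK index from the
earlier primary key example exists; ALTER TABLE only affects blocks formatted from
then on, so existing blocks keep their ITL allowance unless the segment is rebuilt,
e.g. with ALTER TABLE ... MOVE in 8i onwards):

     ALTER TABLE tx_eg INITRANS 4 MAXTRANS 16;
     ALTER TABLE tx_eg MOVE;        -- rebuild so existing blocks pick up the new setting
     ALTER INDEX tx_eg_pk REBUILD;  -- indexes are left UNUSABLE by the MOVE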

Waits due to rows being covered by the same BITMAP index fragment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  Bitmap indexes index key values and a range of ROWIDs. Each 'entry' 
  in a bitmap index can cover many rows in the actual table.
  If 2 sessions wish to update rows covered by the same bitmap index
  fragment then the second session waits for the first transaction to
  either COMMIT or ROLLBACK by waiting for the TX lock in mode 4.

  Eg: Ses#1:  CREATE Bitmap Index tx_eg_bitmap on tx_eg ( sex );
        Ses#1:  update tx_eg set sex='FEMALE' where num=3;
        Ses#2:  update tx_eg set sex='FEMALE' where num=4;
        DBA:    select SID,TYPE,ID1,ID2,LMODE,REQUEST
                 from v$lock where type='TX';

SID        TY ID1        ID2        LMODE      REQUEST
­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
         8 TX     262151         62          6          0
        10 TX     327680         60          6          0
        10 TX     262151         62          0          4

This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). 

Ses#1: commit;
Ses#2: commit;

Other Scenarios
~~~~~~~~~~~~~~~
  There are other wait scenarios which can result in a SHARE mode wait for a TX
  lock but these are rare compared to the examples given above. 
  Eg: If a session wants to read a row locked by a transaction in a PREPARED
      state then it will wait on the relevant TX lock in SHARE mode (REQUEST=4).
      As a PREPARED transaction should COMMIT , ROLLBACK or go to an in­doubt
      state very soon after the prepare, this is not generally noticeable.

TM - Note 33453.1 REFERENTIAL INTEGRITY AND LOCKING

Subject: REFERENTIAL INTEGRITY AND LOCKING


Doc ID: 33453.1 Type: FAQ
Modified Date : 04-DEC-2008 Status: PUBLISHED

                     REFERENTIAL INTEGRITY and LOCKING
                     ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­

This bulletin explains what referential integrity means and
how locking takes place with tables joined by the referential integrity
rule. In addition, this bulletin explains how inserting/updating/deleting one
table can cause another table to get locked.
REFERENTIAL INTEGRITY: is a rule defined on a column (or set of columns) in one
table that allows the insert or update of a row only if the value for the column
or set of columns (in the child table) matches the value in a column of a
related table (parent table).

Example 1:
SQL> create table DEPT (deptno number constraint pk_dept primary key,
     dname varchar2(10))

SQL> create table EMP (deptno number(2) constraint fk_deptno references
     dept(deptno), ename varchar2(20))

In the above example "DEPT" is the parent table having the primary key
constraint 'pk_dept' on the 'deptno' column. Similarly "EMP" is the child table
having the foreign key constraint 'fk_deptno' on the 'deptno' column. However,
this foreign key constraint references the 'deptno' column of the parent table
(DEPT) thus enforcing the referential integrity rule. Therefore you cannot add
an employee into a department number that doesn't exist in the DEPT table.

Example 2:

SQL> insert into DEPT values (1, 'COSTCENTER');

1 row created.

SQL> insert into EMP values (1, 'SCOTT');

1 row created.

SQL> insert into EMP values (2, 'SCOTT');
insert into EMP values (2, 'SCOTT')
            *
ERROR at line 1:
ORA­02291: integrity constraint (SCOTT.FK_DEPTNO) violated ­ parent key not
found

The query that can be issued to find out the primary and foreign key relation
is as follows:

SQL> select a.owner for_owner, a.table_name for_table, a.constraint_name
     for_constr, b.owner pri_owner, b.table_name pri_table, b.constraint_name
     pri_constr from user_constraints a, user_constraints b
     where a.r_constraint_name = b.constraint_name
     and a.constraint_type = 'R'
     and b.constraint_type = 'P';

FOR_OWNER                      FOR_TABLE
­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
FOR_CONSTR                     PRI_OWNER
­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
PRI_TABLE                      PRI_CONSTR
­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
SCOTT                          EMP
FK_DEPTNO                      SCOTT
DEPT                           PK_DEPT

where USER_CONSTRAINTS      : data dictionary view
      CONSTRAINT_TYPE = 'R' : stands for the foreign key constraint
      CONSTRAINT_TYPE = 'P' : stands for the primary key constraint

The data dictionary contains the following views of interest with integrity
constraints:

a) ALL_CONSTRAINTS
b) ALL_CONS_CONSTRAINTS
c) CONSTRAINT_COLUMNS
d) CONSTRAINT_DEFS
e) USER_CONSTRAINTS
f) USER_CONS_COLUMNS
g) USER_CROSS_REFS
h) DBA_CONSTRAINTS
i) DBA_CONS_COLUMNS
j) DBA_CROSS_REFS

LOCKING:   Indexes play an important part when dealing with referential
integrity and locking. The existence of an index determines the type of lock
necessary, if any. Below are examples that will describe this locking
phenomenon.

Each example displays output from a Data Dictionary object, V$LOCK. This view
gives information about the different types of locks held within the database.
In order to fully understand the output of this view, below is a description
of this object.

SQL> desc v$lock;

 Name                            Null?    Type
 ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­ ­­­­­­­­ ­­­­
 ADDR                                     RAW(4)
 KADDR                                    RAW(4)
 SID                                      NUMBER
 TYPE                                     VARCHAR2(2)
 ID1                                      NUMBER
 ID2                                      NUMBER
 LMODE                                    NUMBER
 REQUEST                                  NUMBER

where   ADDR = address of lock state object
       KADDR = address of lock
         SID = identifier of process holding the lock
        TYPE = resource type
         ID1 = resource identifier #1
         ID2 = resource identifier #2
       LMODE = lock mode held: 1 (null), 2 (row share), 3 (row exclusive),
                               4 (share), 5 (share row exclusive),
                               6 (exclusive)
     REQUEST = lock mode requested (same values as LMODE)

   TYPE                 LOCK ID1                     LOCK ID2

a) TX(transaction)      Decimal representation of    Decimal representation
                        rollback segment number      of "wrap" number (number of
                        and slot number              times the rollback slot has
                                                     been reused)

b) TM(table locks)      Object id of table being     Always 0
                        modified

c) UL(user supplied     Please refer to Appendix B­81 of the Oracle7 Server
      lock)             Administrator's Guide.

Examples:

NOTE: In all the examples given below, the object_id for the DEPT and the EMP
      tables are 2989 and 2991 respectively. The ID1 column from the V$LOCK data
      dictionary object corresponds to the OBJECT_ID column from the DBA_OBJECTS
      view.

SQL> select object_name from sys.dba_objects where object_id = 2989;

OBJECT_NAME
­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
DEPT

SQL> select object_name from sys.dba_objects where object_id = 2991;

OBJECT_NAME
­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
EMP

                           **** WITHOUT INDEXES  ****

1) AN INSERT/DELETE/UPDATE INTO THE CHILD TABLE CAUSES THE PARENT TABLE TO GET
   LOCKED. Notice that a share lock (LMODE=4) of the entire parent table is
   required until the transaction containing the insert/delete/update statement
   for the child table is committed, thus preventing any modifications to the
   parent table.

NOTE:  In 7.1.6 and higher, an insert, update, and delete statement on the
child table will not acquire any locks on the parent table anymore, although 
insert and update statements will wait for a row­lock on the index of the 
parent table to clear.

SQL> insert into DEPT values (1, 'COSTCENTER');

SQL> commit;

SQL> insert into EMP values (1, 'SCOTT');
SQL> select * from v$lock
where sid in (select sid from v$session where audsid = userenv('SESSIONID'));

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST
­­­­­­­­ ­­­­­­­­ ­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
40078664 40078678         15 TM       2989          0          4          0
4007AD74 4007AE08         15 TX     196667         54          6          0
400786C8 400786DC         15 TM       2991          0          3          0

2) AN INSERT/DELETE/UPDATE ON THE PARENT TABLE CAUSES THE CHILD TABLE TO GET
   LOCKED. A share lock (LMODE=4) of the entire child table is required
   until the transaction containing the insert/delete/update statement
   for the parent table is committed, thus preventing any modifications to the
   child table. It even can be a SSX (LMODE=5) lock when deleting from the 
   parent table with a delete cascade constraint.

NOTE:  In 7.1.6 and higher, INSERTs into the parent table do not lock the child
table. In Oracle 9.0.1 or higher, those locks became transient: they are only needed
during the execution of the UPDATE/DELETE statements. Those locks are
downgraded to 'mode 3 Row-X (SX)' locks when the execution is finished.
In 9.2.0, the downgraded 'mode 3 Row-X (SX)' locks are no longer required
except when deleting from a parent table with a 'delete cascade' constraint.

SQL> update dept set deptno = 1;

SQL> select * from v$lock
where sid in (select sid from v$session where audsid = userenv('SESSIONID'));

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST
­­­­­­­­ ­­­­­­­­ ­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
40078664 40078678         15 TM       2991          0          4          0
4007AD74 4007AE08         15 TX     196667         54          6          0
400786C8 400786DC         15 TM       2989          0          3          0

                             ****  WITH INDEXES  ****

1) AN INSERT/DELETE/UPDATE ON THE CHILD TABLE DOES NOT PLACE LOCKS OF ANY KIND
   ON THE PARENT TABLE IF THERE IS AN INDEX ON THE FOREIGN KEY OF THE CHILD
   TABLE.  Therefore, any type of DML statement can be issued on the parent
   table, including inserts, updates, deletes and queries.

NOTE:  In 9.2.0 onwards, Oracle requires 'mode 2 Row­S (SS)' locks on the 
parent table (see Note 223303.1).

SQL> create index ind_emp on emp (deptno, ename);

SQL> insert into DEPT values (1, 'COSTCENTER');

SQL> commit;

SQL> insert into EMP values (1, 'SCOTT');
SQL> select * from v$lock
where sid in (select sid from v$session where audsid = userenv('SESSIONID'));

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST
­­­­­­­­ ­­­­­­­­ ­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
40078664 40078678         15 TX     196667         54          6          0
4007AD74 4007AE08         15 TM       2991          0          3          0

2) AN INSERT/DELETE/UPDATE ON THE PARENT TABLE WILL ONLY ACQUIRE A ROW LEVEL
   LOCK ON THE PARENT TABLE IF THERE IS AN INDEX ON THE FOREIGN KEY OF THE
   CHILD TABLE. The child table will have NO locks on it and so any type of
   modifications can be made to the child table.

NOTE:  In v7.1.6 and higher, inserts, updates and deletes on the parent table
do not require any locks on the child table, although updates and deletes
will wait for row­level locks to clear on the child table index.  If the
child table specifies ON DELETE CASCADE, waiting and locking rules are the
same as if you deleted from the child table after performing the delete from
the parent. In 9.2.0 onwards, Oracle requires 'mode 2 Row­S (SS)' locks on the 
child table (see Note 223303.1).

SQL> update DEPT set deptno = 1;

SQL> select * from v$lock
where sid in (select sid from v$session where audsid = userenv('SESSIONID'));

ADDR     KADDR    SID        TY ID1        ID2        LMODE      REQUEST
­­­­­­­­ ­­­­­­­­ ­­­­­­­­­­ ­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
40078664 40078678         15 TX     196667         54          6          0
4007AD74 4007AE08         15 TM       2989          0          3          0

For more information on the above, see
Note 15476.1  FAQ about Detecting and Resolving Locking Conflicts
Bug 3801750 ORA­372 can occur if FOREIGN KEY is in READ ONLY tablespace
Bug 6175584 Insert hang with Foreign Key constraint and read­only tablespace
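
As unindexed foreign key columns are the usual cause of the TM mode 4/5 waits described
above, a rough check such as the following can help find candidate constraints in the
current schema (a sketch only - it compares just the leading column of each foreign key
against the leading column of existing indexes):

SQL> select c.table_name, c.constraint_name
     from user_constraints c, user_cons_columns cc
     where c.constraint_type = 'R'
     and cc.constraint_name = c.constraint_name
     and cc.position = 1
     and not exists (select 1 from user_ind_columns ic
                     where ic.table_name = cc.table_name
                     and ic.column_name = cc.column_name
                     and ic.column_position = 1);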

SQ/US/HW on RAC - Note 226569.1 9iRAC Most Common Performance Problem Areas (INTERNAL ONLY)

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!

This issue occurs when the database detects that a waiter has waited for a resource for longer than a particular
threshold. The message "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!" appears in the alert
log, and trace files and systemstate dumps are produced.

Typically this is caused by two (or more) incompatible operations being run simultaneously.
Note 278316.1 Potential reasons for "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! "

ORA-60 DEADLOCK DETECTED/enqueue hash chains latch


Note 62365.1 What to do with "ORA-60 Deadlock Detected" Errors

Subject: What to do with "ORA-60 Deadlock Detected" Errors


Doc ID: 62365.1 Type: TROUBLESHOOTING
Modified Date : 28-AUG-2007 Status: PUBLISHED

Checked for relevance on 28­8­2007

Introduction
~~~~~~~~~~~~
  This short article describes how to determine the cause of ORA­60 
  "deadlock detected while waiting for resource" errors.
  Note that 99% of the time deadlocks are caused by application or 
  configuration issues. This article attempts to highlight the most 
  common deadlock scenarios.

What is Deadlock?
~~~~~~~~~~~~~~~~~
  A deadlock occurs when a session (A) wants a resource held by another 
  session (B) , but that session also wants a resource held by the first
  session (A). There can be more than 2 sessions involved but the idea is
  the same.

Example of Deadlock
~~~~~~~~~~~~~~~~~~~
  To reinforce the description the following simple test demonstrates a
  deadlock scenario. This is on Oracle 8.0.4 so if you are used to Oracle7
  the ROWIDs may look a little strange:

    Setup: create table eg_60 ( num number,  txt varchar2(10) );
insert into eg_60 values ( 1, 'First' );
insert into eg_60 values ( 2, 'Second' );
commit;
select rowid, num, txt from eg_60;

ROWID                     NUM TXT
­­­­­­­­­­­­­­­­­­ ­­­­­­­­­­ ­­­­­­­­­­
AAAAv2AAEAAAAqKAAA          1 First
AAAAv2AAEAAAAqKAAB          2 Second

    Ses#1: update eg_60 set txt='ses1' where num=1;

    Ses#2: update eg_60 set txt='ses2' where num=2;
     update eg_60 set txt='ses2' where num=1;
   
> Ses#2 is now waiting for the TX lock held by Ses#1

    Ses#1: update eg_60 set txt='ses1' where num=2;

> This update would cause Ses#1 to wait on the TX lock
> held by Ses#2, but Ses#2 is already waiting on this session.
  > This causes a deadlock scenario so one of the sessions
> signals an ORA­60. 
    Ses#2: ORA­60 error

    Ses#1: Still blocked until Ses#2 commits or rolls back as ORA­60
only rolls back the current statement and not the entire
transaction.

Diagnostic information produced by an ORA­60
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Although an ORA­60 error does not write information to the alert log
  the user that gets the ORA­60 error writes information to their trace file.
  The exact format of this varies between Oracle releases. The trace 
  file will be written to the directory indicated by the USER_DUMP_DEST 
  init.ora parameter.

  The trace file will contain a deadlock graph and additional information
  similar to that shown below. This is the trace output from the above example 
  which signaled an ORA­60 to Ses#2:

   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   DEADLOCK DETECTED
   Current SQL statement for this session:
                    update eg_60 set txt='ses2' where num=1

   The following deadlock is not an ORACLE error. It is a
   deadlock due to user error in the design of an application
   or from issuing incorrect ad­hoc SQL. The following
   information may aid in determining the deadlock:
   Deadlock graph:
                       ­­­­­­­­­Blocker(s)­­­­­­­­  ­­­­­­­­­Waiter(s)­­­­­­­­­
   Resource Name       process session holds waits  process session holds waits
   TX­00020012­0000025e     12      11     X             11      10           X
   TX­00050013­0000003b     11      10     X             12      11           X
   session 11: DID 0001­000C­00000001      session 10: DID 0001­000B­00000001
   session 10: DID 0001­000B­00000001      session 11: DID 0001­000C­00000001
   Rows waited on:
   Session 10: obj ­ rowid = 00000BF6 ­ AAAAv2AAEAAAAqKAAB
   Session 11: obj ­ rowid = 00000BF6 ­ AAAAv2AAEAAAAqKAAA
   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­

What does the trace information mean ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  In this section we explain each part of the above trace.
  Note that not all this information is produced in all Oracle releases.

   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   DEADLOCK DETECTED
   Current SQL statement for this session:
                    update eg_60 set txt='ses2' where num=1
   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   This shows the statement which was executing which received the ORA­60
   error. It is this statement which was rolled back.
   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   Deadlock graph:
                       ­­­­­­­­­Blocker(s)­­­­­­­­  ­­­­­­­­­Waiter(s)­­­­­­­­­
   Resource Name       process session holds waits  process session holds waits
   TX­00020012­0000025e     12      11     X             11      10           X
   TX­00050013­0000003b     11      10     X             12      11           X
   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   This shows who was holding each lock, and who was waiting for each lock.
   The columns in the graph indicate:

Resource Name Lock name being held / waited for.
process  V$PROCESS.PID of the Blocking / Waiting session
session V$SESSION.SID of the Blocking / Waiting session
holds Mode the lock is held in
waits Mode the lock is requested in

   So in this example:

SID 11  holds TX­00020012­0000025e in X mode
    and wants TX­00050013­0000003b in X mode

SID 10  holds TX­00050013­0000003b in X mode
    and wants TX­00020012­0000025e in X mode

   The important things to note here are the LOCK TYPE, the MODE HELD and
   the MODE REQUESTED for each resource as these give a clue as to the 
   reason for the deadlock.

   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   Rows waited on:
   Session 10: obj ­ rowid = 00000BF6 ­ AAAAv2AAEAAAAqKAAB
   Session 11: obj ­ rowid = 00000BF6 ­ AAAAv2AAEAAAAqKAAA
   ­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­­
   If the deadlock is due to row­level locks being obtained in different 
   orders then this section of the trace file indicates the exact rows that 
   each session is waiting to lock for themselves. Ie: If the lock requests
   are TX mode X waits then the 'Rows waited on' may show useful information.
   For any other lock type / mode the 'Rows waited on' is not relevant and
   usually shows as "no row".

   In the above example:

SID 10  was waiting for ROWID 'AAAAv2AAEAAAAqKAAB' of object 0xBF6 
(which is 3062 in decimal)

SID 11  was waiting for ROWID 'AAAAv2AAEAAAAqKAAA' of object 0xBF6

   This can be decoded to show the exact row/s. 
   Eg: SID 10 can be shown to be waiting thus:

SELECT owner, object_name, object_type 
  FROM dba_objects WHERE object_id = 3062;

Owner Object_Name Object_Type


­­­­­­­ ­­­­­­­­­­­­­­­ ­­­­­­­­­­­­­­­
SYSTEM EG_60 TABLE
SELECT * FROM system.eg_60 WHERE ROWID='AAAAv2AAEAAAAqKAAB';

NUM        TXT       
­­­­­­­­­­ ­­­­­­­­­­
         2 Second    

Avoiding Deadlock
~~~~~~~~~~~~~~~~~
  The above deadlock example occurs because the application which issues
  the update statements has no strict ordering of the rows it updates.
  Applications can avoid row­level lock deadlocks by enforcing some ordering
  of row updates. This is purely an application design issue.
  Eg: If the above statements had been forced to update rows in ascending
      'num' order then:

    Ses#1: update eg_60 set txt='ses1' where num=1;

    Ses#2: update eg_60 set txt='ses2' where num=1;
> Ses#2 is now waiting for the TX lock held by Ses#1

    Ses#1: update eg_60 set txt='ses1' where num=2;
> Succeeds as no­one is locking this row
commit;
> Ses#2 is released as it is no longer waiting for this TX

    Ses#2: update eg_60 set txt='ses2' where num=2;
commit;

  The strict ordering of the updates ensures that a deadly embrace cannot
  occur. This is the simplest deadlock scenario to identify and resolve.
  Note that the deadlock need not be between rows of the same table ­ it
  could be between rows in different tables. Hence it is important to place 
  rules on the order in which tables are updated as well as the order of the
  rows within each table.
  Other deadlock scenarios are discussed below.

Different Lock Types and Modes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  The most common lock types seen in deadlock graphs are TX and TM locks.
  These may appear held / requested in a number of modes. It is the
  lock type and modes which help determine what situation has caused the
  deadlock. 

Lock  Mode
Type Requested Probable Cause
~~~~ ~~~~~~~~~ ~~~~~~~~~~~~~~
TX  X (mode 6) Application row level conflict.
Avoid by recoding the application to ensure
rows are always locked in a particular order.
TX S (mode 4)  There are a number of reasons that a TX lock
may be requested in S mode. See Note 62354.1
for a list of when TX locks are requested in
mode 4.
TM SSX (mode 5)  This is usually related to the existence of
        or foreign key constraints where the columns
        S (mode 4)  are not indexed on the child table.
See Note 33453.1 for how to locate such
constraints. See below for locating
the OBJECT being waited on.

   Although other deadlock scenarios can happen the above are the most common.

TM locks ­ which object ?
~~~~~~~~~~~~~~~~~~~~~~~~~
  ID1 of a TM lock indicates which object is being locked. This makes it
  very simple to isolate the object involved in a deadlock when a TM lock
  is involved. 

  1. Given the TM lock id in the form TM­AAAAAAAA­BBBBBBBB
     convert AAAAAAAA from hexadecimal to a decimal number

  2. Locate the object using DBA_OBJECTS:

SELECT * FROM dba_objects WHERE object_id= NNNN;

  This is the object id that the TM lock covers. 
  Note that with TM locks it is possible that the lock is already held in 
  some mode in which case the REQUEST is to escalate the lock mode.
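
  Both steps can be combined into a single statement; a sketch (the hexadecimal value
  shown is purely illustrative):

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE object_id = TO_NUMBER('00000BBF', 'XXXXXXXX');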

Additional Information
~~~~~~~~~~~~~~~~~~~~~~
  If you are still having problems identifying the cause of a deadlock
  Oracle Support may be able to help. Additional information can be collected 
  by adding the following to the init.ora parameters:

event="60 trace name errorstack level 3;name systemstate level 10"

  Note that this can generate a very large trace file which may get
  truncated unless MAX_DUMP_FILE_SIZE is large enough to accommodate the output.
  
  When this is set any session encountering an ORA­60 error will write 
  information about all processes on the database at the time of the error.
  This may help show the cause of the deadlock as it can show information 
  about both users involved in the deadlock. Oracle Support will need 
  all the information you have collected in addition to the new trace file
  to help identify where in the application you should look for problems.
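
  If only one offending job needs to be traced, the same diagnostics can usually be
  enabled at session level instead (a sketch, to be removed once a trace has been
  captured):

ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET EVENTS '60 trace name errorstack level 3; name systemstate level 10';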

  It may be necessary to run the offending jobs with SQL_TRACE enabled
  to show the order in which each session issues its commands in order
  to get into a deadlock scenario.

Known Issues and References
~~~~~~~~~~~~~~~~~~~~~~~~~~~
  TX lock waits and why they occur Note 62354.1

  TM locks and Foreign Key Constraints         Note 33453.1

  Example TM locks During Referential Integrity Enforcement     Note 38373.1

  INSERT into a clustered table can give ORA­60  (Fixed 7.1.4.)   Bug 197942

  ORA­60 / ORA­604 against UET$  (Fixed 7.2)      Bug 231455

  ORA­60 from ANALYZE ... VALIDATE ...   
This can occur if the data dictionary has been ANALYZED and contains
statistics. Delete the statistics.

  ORA­60 on startup in Oracle 6
This can be caused by a datafile being inaccessible which is not
    marked as offline. Offline the file.

'Enqueue hash chains' latch waits are listed here because, during deadlock detection (ie the
routine Oracle uses to determine whether a deadlock actually exists), there is heavy demand for this latch, which can
cause issues for other sessions.
If there is a problem with this latch, check whether a trace file was generated for an ORA-60 and resolve that issue.
