An example report is:

Date: 03/25/05                                                  Page:
Time: 17:51 PM              Shared Pool Utilization            SYSTEM
                               whoville database

users                Non-Shared SQL     Shared SQL Percent Shared
-------------------- -------------- -------------- --------------
WHOAPP                  532,097,982      1,775,745           .333
SYS                       5,622,594      5,108,017         47.602
DBSNMP                      678,616        219,775         24.463
SYSMAN                      439,915      2,353,205         84.250
SYSTEM                      425,586         20,674          4.633
                     -------------- -------------- --------------
5                       541,308,815      9,502,046          1.725
As you can see, the majority owner in this application, WHOAPP, shows only 0.3 percent of reusable code by memory usage and is tying up an amazing 530 megabytes with non-reusable code! Let's look at a database with good reuse statistics:
Date: 11/13/05                                                Page: 1
Time: 03:15 PM                                               PERFSTAT

users                Non-Shared SQL     Shared SQL Percent Shared
-------------------- -------------- -------------- --------------
DBAVILLAGE                9,601,173     81,949,581         89.513
PERFSTAT                  2,652,827        199,868          7.006
DBASTAGER                 1,168,137     35,468,687         96.812
SYS                          76,037      5,119,125         98.536
                     -------------- -------------- --------------
4                        13,498,174    122,737,261         90.092
Notice how the two application owners, DBAVILLAGE and DBASTAGER, show 89.513 and 96.812 percent code reuse by memory footprint.
So what else can we look at to gauge code reuse? The above reports give us a gross indication; how about something with a bit more usability for correcting the situation? The V$SQLAREA and V$SQLTEXT views give us the capability to look at the current code in the shared pool and determine whether or not it is using bind variables.
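A minimal sketch of such a check, assuming it is run from a DBA account (the substring length and the repeat threshold are illustrative values, not fixed rules):

```sql
-- Group statements that are identical in their first &&chars characters;
-- many near-identical statements usually mean literals instead of binds.
set verify off
select substr(sql_text,1,&&chars) sql_start,
       count(*) repeats
from v$sqlarea
group by substr(sql_text,1,&&chars)
having count(*) > 10
order by repeats desc;
```

Statements that differ only in their literal values will collapse into one group with a high repeat count, which is exactly the pattern the reports below expose.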
A simple script can determine, based on the first x characters (input when the report is executed), the number of SQL statements that are identical up to those first x characters. This shows us the repeating code in the database and helps us track down the offending statements for correction. An example output is:
Date: 02/23/05                                                Page:
Time: 10:20 AM                Similar SQL                     SYSTEM
                           whoville database
                            SubString - 120

User            Characters                                               Number
                                                                             Of
                                                                        Repeats
--------------- ------------------------------------------------------- -------
WHOAPP
SELECT Invoices."INVOICEKEY", Invoices."CLIENTKEY", Invoices."BUYSTATUS", Invoices."DEBTORKEY",
Invoices."INPUTTRANSKEY"
1752
WHOAPP
SELECT DisputeCode.DisputeCode , DisputeCode.Disputed , InvDispute."ROWID" , DisputeCode."ROWID" FROM
InvDispute , Disp
458
WHOAPP
SELECT Transactions.PostDate , Payments.PointsAmt , Payments.Type_ AS PmtType , Payments.Descr ,
Payments.FeeBasis , Pay
449
SYS
SELECT SUM(Payments.Amt) AS TotPmtAmt , SUM(Payments.FeeEscrow) AS TotFeeEscrow , SUM(Payments.RsvEscrow) AS
TotRsvEscro
428
WHOAPP
SELECT SUM(Payments.Amt) AS TotPmtAmt, SUM(Payments.FeeEscrow) AS TotFeeEscrow, SUM(Payments.RsvEscrow) AS
TotRsvEscrow
428
WHOAPP
SELECT Transactions.BatchNo , Payments.Amt , Payments."ROWID" , Transactions."ROWID" FROM Payments ,
Transactions WHERE
396
WHOAPP
INSERT INTO Payments (PaymentKey, AcctNo, Amt, ChargeAmt, Descr, FeeBasis, FeeEarned, FeeEscrow, FeeRate,
FeeTaxAmt, Hol
244
WHOAPP
SELECT Clients.Name , Clients.ClientNo , Invoices.InvNo , Invoices.ClientKey AS InvClientKey ,
Transactions.ClientKey AS
244
SYS
SELECT COUNT(*) AS RecCount , INVOICES."ROWID" , TRANSACTIONS."ROWID" , PROGRAMS."ROWID" FROM INVOICES ,
TRANSACTIONS ,
232
Using a substring from the above SQL, the V$SQLTEXT view can be used to pull the entire listing of the code.
The proper fix for non-bind-variable usage is to rewrite the application to use bind variables. This can of course be an expensive and time-consuming process, but ultimately it provides the best fix for the problem. However, what if you can't change the code? Oracle has provided the CURSOR_SHARING initialization parameter, which will automatically replace the literals in your code with bind variables. The settings for CURSOR_SHARING are EXACT (the default), FORCE, and SIMILAR.
EXACT: Statements have to match exactly to be reusable.
FORCE: Always replace literals with bind variables.
SIMILAR: Perform literal peeking and replace literals only when it makes sense.
We usually suggest the SIMILAR option for CURSOR_SHARING.
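The parameter can be changed at the system or session level; a sketch (the scope clause assumes an spfile is in use, and trying FORCE in a single session first is just one cautious approach):

```sql
-- Replace literals only when it is safe to do so
alter system set cursor_sharing=SIMILAR scope=both;

-- Or test the effect on one session before changing the whole system
alter session set cursor_sharing=FORCE;
```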
Notice that I didn't limit myself to just full table scans; I also looked for expensive index scans. The report shows:
Fri Aug 24                                                       page 1

OBJECT_NAME                  NO_OF_FULL_SCANS rows|blocks|pool
---------------------------- ---------------- ---------------------
SQLTEXT
--------------------------------------------------------------------------------
LOOKUP_WORKTYPE                        956170 17 | 5 | DEFAULT
SELECT WORKTYPEID FROM LOOKUP_WORKTYPE WHERE WORKTYPECODE = :B1

ROUTINGNUMBER                          294118 520 | 5 | DEFAULT
SELECT ROUTINGNUMBERID, ROUTINGNUMBER, BANKID, CENTERID FROM ROUTINGNUMBER
WHERE BANKID = :B1

EXCHANGEITEMEXCEPTION                   39421 72280 | 1566 | DEFAULT
SELECT COUNT(1) FROM EXCHANGEITEMQUERY EIQU, EXCHANGEITEMEXCEPTION EIEX
WHERE :B1 = EIQU.EXCHANGEITEMID AND EIQU.EXCHANGEITEMQUERYID = EIEX.EXCHANGEITEMQUERYID
AND EIEX.REMOVED = 0

ANDOR                                    3454 20 | 5 | DEFAULT
SELECT ANDORID, EXCEPTIONID, ISAND, LEFTID, RIGHTID FROM ANDOR
ORDER BY EXCEPTIONID, ANDORID

EXCEPTIONS                               3377 97 | 60 | DEFAULT
SELECT E.EXCEPTIONID, EXCEPTIONNAME, DESCRIPTION, EXCEPTIONCODE, E.CENTERID,
E.BANKID, E.CUSTOMERID, E.ACCOUNTID, DATASOURCEID, DATAFIELDID, INEQUALITYID,
CONSTRAINTDATASOURCEID, CONSTRAINTDATAVALUE, D.DEFINITIONID, DEFINITIONATTRIBUTEID,
E.ACTIVESTATUSID, E.APPLICATIONID, ISUSERDEFINED
FROM EXCEPTIONS E, DEFINITION D
WHERE E.APPLICATIONID = :B1 AND E.EXCEPTIONID = D.EXCEPTIONID (+)
ORDER BY E.EXCEPTIONNAME, D.DEFINITIONID

X937USERRECORD                           3317 0 | 1 | DEFAULT
INSERT INTO X937USERRECORD_ARCH SELECT * FROM X937USERRECORD WHERE OUTJOBID = :B1

UN_CENTERNAME                            1679
SELECT CENTERID, CENTERNAME, ACTIVESTATUSID AS CENTERACTIVESTATUSID,
COMMENTS AS CENTERCOMMENTS, ITEMSETTINGID AS CENTERITEMSETTINGID, CENTERCODE,
EXPORTSTATUSID AS CENTEREXPORTSTATUSID, EXPORTTIME AS CENTEREXPORTTIME,
GLACCOUNTNUMBER, NULL AS BANKID FROM CENTER ORDER BY CENTERNAME

MACHINE                                  1481 3 | 5 | DEFAULT
SELECT M.MACHINEID, MACHINENAME, IPADDRESS, S.SERVICEID, SERVICENAME, APPLICATIONID
FROM SERVICE S, MACHINE M, PROCESS P
WHERE S.SERVICEID = P.SERVICEID AND M.MACHINEID = P.MACHINEID
ORDER BY MACHINENAME, SERVICENAME
Notice that instead of trying to capture the full SQL statement, I just grab the hash value. I can then use the hash value to pull the interesting SQL statements using SQL similar to:
select sql_text
from v$sqltext
where hash_value=&hash
order by piece;
Once I see the SQL statement, I use SQL similar to this to pull the table's indexes:
set lines 132
col index_name form a30
col table_name form a30
col column_name format a30
select a.table_name,a.index_name,a.column_name,b.index_type
from dba_ind_columns a, dba_indexes b
where a.table_name =upper('&tab')
and a.table_name=b.table_name
and a.index_owner=b.owner
and a.index_name=b.index_name
order by a.table_name,a.index_name,a.column_position;
set lines 80
Once I have both the SQL and the indexes for the full-scanned table, I can usually come to a quick tuning decision on whether any additional indexes are needed or an existing index should be used. In some cases there is an existing index that could be used if the SQL were rewritten; in that case I will usually suggest the SQL be rewritten. An example extract from a SQL analysis of this type is shown here:
SQL> @get_it
Enter value for hash: 605795936
SQL_TEXT
----------------------------------------------------------------
DELETE FROM BOUNCE WHERE UPDATED_TS < SYSDATE - 21
SQL> @get_tab_ind
Enter value for tab: bounce
TABLE_NAME   INDEX_NAME                 COLUMN_NAME    INDEX_TYPE
------------ -------------------------- -------------- ----------
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  MAILING_ID     NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  JOB_ID         NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  REPORT_ID      NORMAL
BOUNCE       BOUNCE_PK                  MAILING_ID     NORMAL
BOUNCE       BOUNCE_PK                  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_PK                  JOB_ID         NORMAL
SQL_TEXT
----------------------------------------------------------------
SELECT VERSION_TS, CURRENT_MAJOR, CURRENT_MINOR, CURRENT_BUILD,
CURRENT_URL, MINIMUM_MAJOR, MINIMUM_MINOR, MINIMUM_BUILD,
MINIMUM_URL, INSTALL_RA_PATH, HELP_RA_PATH
FROM CURRENT_CLIENT_VERSION
-- (leading 32k branch reconstructed; the report below includes 32k buffers)
select '32k '||status as status, count(*) as num
from v$bh where file# in(select file_id
from dba_data_files
where tablespace_name in (select tablespace_name
from dba_tablespaces
where block_size=32768))
group by '32k '||status
union
select '16k '||status as status, count(*) as num
from v$bh where file# in(select file_id
from dba_data_files
where tablespace_name in (select tablespace_name
from dba_tablespaces
where block_size=16384))
group by '16k '||status
union
select '8k '||status as status, count(*) as num
from v$bh
where file# in( select file_id
from dba_data_files
where tablespace_name in (select tablespace_name
from dba_tablespaces
where block_size=8192))
group by '8k '||status
union
select '4k '||status as status, count(*) as num
from v$bh
where file# in(select file_id
from dba_data_files
where tablespace_name in ( select tablespace_name
from dba_tablespaces
where block_size=4096))
group by '4k '||status
union
select '2k '||status as status, count(*) as num
from v$bh
where file# in(select file_id
from dba_data_files
where tablespace_name in ( select tablespace_name
from dba_tablespaces
where block_size=2048))
group by '2k '||status
union
select status, count(*) as num
from v$bh
where status='free'
group by status
order by 1
/
spool off
ttitle off
As you can see, we need to be the SYS user to run it. An example report would be:
Date: 12/13/05                                                Page: 1
Time: 10:39 PM                                               PERFSTAT

STATUS          NUM
--------- ---------
32k cr         2930
32k xcur      29064
8k cr          1271
8k free           3
8k read           4
8k xcur      378747
free          10371
As you can see, while there are free buffers, only 3 of them are available to the 8k default area and none are available to our 32k area. The free buffers are actually assigned to a keep or recycle pool area (hence the null value for the block size) and are not available for normal usage.
So, if you see buffer busy waits, db block waits, and the like, and you run the above report and see no free buffers, it is probably a good bet you need to increase the number of available buffers for the area showing no free buffers. You should not immediately assume you need more buffers because of buffer busy waits, as these can be caused by other problems such as row lock waits, ITL waits, and other issues.
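If a particular pool does turn out to be starved, its size can be raised directly; a sketch with illustrative sizes (db_32k_cache_size is only meaningful if a 32k-block tablespace exists, and total SGA limits still apply):

```sql
-- Grow the default (8k) buffer cache and the 32k non-default pool
alter system set db_cache_size = 600M scope=both;
alter system set db_32k_cache_size = 128M scope=both;
```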
Luckily, Oracle 10g has made it relatively simple to determine whether we have these other types of waits:
-- Crosstab of object and statistic for an owner
col "Object" format a20
set numwidth 12
set lines 132
set pages 50
@title132 'Object Wait Statistics'
spool rep_out\&&db\obj_stat_xtab
select * from(
select DECODE(GROUPING(a.object_name), 1, 'All Objects',
a.object_name) AS "Object",
sum(case when a.statistic_name = 'ITL waits'
then a.value else null end) "ITL Waits",
sum(case when a.statistic_name = 'buffer busy waits'
then a.value else null end) "Buffer Busy Waits",
sum(case when a.statistic_name = 'row lock waits'
then a.value else null end) "Row Lock Waits",
sum(case when a.statistic_name = 'physical reads'
then a.value else null end) "Physical Reads",
sum(case when a.statistic_name = 'logical reads'
then a.value else null end) "Logical Reads"
from v$segment_statistics a
where a.owner like upper('&owner')
group by rollup(a.object_name)) b
where (b."ITL Waits">0 or b."Buffer Busy Waits">0)
/
spool off
clear columns
ttitle off
This is an object statistic cross-tab report based on the V$SEGMENT_STATISTICS view. The cross-tab report generates a listing showing the statistics of concern as headers across the page, rather than listings going down the page, and summarizes them by object. This allows us to easily compare total buffer busy waits to the number of ITL or row lock waits. Comparing the ITL and row lock waits to buffer busy waits lets us see which objects may be experiencing contention for ITL lists, which may be experiencing excessive locking activity, and, by comparison, which are highly contended for without row lock or ITL waits. An example of the output of the report, edited for length, is shown here:
Date: 12/09/05                                                Page: 1
Time: 07:17 PM         Object Wait Statistics                PERFSTAT
                          whoville database

                  ITL Buffer Busy  Row Lock   Physical      Logical
Object          Waits       Waits     Waits      Reads        Reads
-------------- ------ ----------- --------- ---------- ------------
BILLING             0       63636     38267    1316055    410219712
BILLING_INDX1       1       16510        55     151085     21776800
...
DELIVER_INDX1    1963       36096     32962    1952600     60809744
DELIVER_INDX2      88       16250      9029   18839481    342857488
DELIVER_PK       2676       99748     29293   15256214    416206384
DELIVER_INDX3    2856      104765     31710    8505812    467240320
...
All Objects     12613    20348859   1253057 1139977207  20947864752
In the above report the BILLING_INDX1 index has a large number of buffer busy waits, but we can't account for them from the ITL or row lock waits. This indicates that the index is being constantly read and its blocks then aged out of memory, forcing waits as they are re-read for the next process. On the other hand, almost all of the buffer busy waits for the DELIVER_INDX1 index can be attributed to ITL and row lock waits.
In situations where there are large numbers of ITL waits, we need to consider increasing the INITRANS setting for the table to remove this source of contention. If the predominant wait is row lock waits, then we need to determine whether we are properly using locking and cursors in our application (for example, we may be overusing SELECT ... FOR UPDATE type code). If, on the other hand, all the waits are unaccounted-for buffer busy waits, then we need to consider increasing the number of database block buffers in our SGA.
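As a sketch of the ITL fix (the object names are hypothetical; note that raising INITRANS on an existing table only affects newly formatted blocks, so a MOVE or index REBUILD is needed to apply it to existing ones):

```sql
-- Raise the initial transaction slot count on the contended objects
alter table whoapp.deliver initrans 10;
alter table whoapp.deliver move;                    -- rewrites existing blocks
alter index whoapp.deliver_pk rebuild initrans 10;  -- indexes must be rebuilt after a move anyway
```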
As you can see, this object wait cross tab report can be a powerful addition to our tuning arsenal.
By knowing how our buffers are being used and seeing exactly what waits are causing our buffer wait indications we can quickly determine if
we need to tune objects or add buffers, making sizing buffer areas fairly easy.
But what about the Automatic Memory Manager in 10g? It is a powerful tool for DBAs with systems that have a predictable load profile. However, if your system has rapid changes in user and memory loads, then AMM is playing catch-up and may deliver poor performance as a result. In the case of memory, it may be better to hand the system too much rather than just enough, just in time (JIT).
As many companies have found when trying the JIT methodology in their manufacturing environments, it only works if things are easily predictable.
AMM is enabled in 10g by setting two parameters, SGA_MAX_SIZE and SGA_TARGET. The Oracle memory manager will size the various buffer areas as needed within the range between SGA_TARGET and SGA_MAX_SIZE, using the SGA_TARGET setting as an optimal value and SGA_MAX_SIZE as a maximum, with any manual settings acting in some cases as minimum sizes for the specific memory components.
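A sketch of the relevant settings (the sizes are illustrative; SGA_MAX_SIZE is not dynamic, so it goes to the spfile and takes effect at the next restart):

```sql
alter system set sga_max_size = 2G scope=spfile;
alter system set sga_target = 1536M scope=both;
-- an explicit component setting now acts as a floor for that component
alter system set shared_pool_size = 256M scope=both;
```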
                       IO Timing Analysis                     Page: 1
                        whoraw database                      PERFSTAT

FILE# NAME            PHYRDS PHYWRTS READTIM/PHYRDS WRITETIM/PHYWRTS
----- -------------- ------- ------- -------------- ----------------
   13 /dev/raw/raw19   77751  102092     76.8958599       153.461829
   33 /dev/raw/raw35   32948   52764     65.7045041       89.5749375
    7 /dev/raw/raw90  245854  556242     57.0748615       76.1539869
   54 /dev/raw/raw84  208916  207539     54.5494409       115.610912
   40 /dev/raw/raw38    4743   27065     38.4469745       47.1722889
   15 /dev/raw/raw41    3850    7216     35.6272727       66.1534091
   12 /dev/raw/raw4   323691  481471     32.5510193       100.201424
   16 /dev/raw/raw50   10917   46483     31.9372538       74.5476626
   18 /dev/raw/raw24    3684    4909     30.8045603       71.7942554
   23 /dev/raw/raw58   63517   78160     29.8442779       84.4477866
    5 /dev/raw/raw91  102783   94639     29.1871516       87.8867909
As you can see, this is an example report from a RAW configuration using single disks. Notice how both read and write times exceed even the rather generous good-practice limit of 10-20 milliseconds for a disk read. In my experience, reads should not exceed 5 milliseconds, and with modern buffered reads, 1-2 milliseconds is usual. Oracle is more tolerant of write delays since it uses a delayed write mechanism, so 10-20 milliseconds on writes will normally not cause significant Oracle waits. However, the smaller you can get read and write times, the better!
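A sketch of the kind of query behind such a report (READTIM and WRITETIM in V$FILESTAT are in hundredths of a second, so the ratios here are centiseconds, not milliseconds; the greatest() guard just avoids division by zero on idle files):

```sql
select f.file#, d.name,
       f.phyrds, f.phywrts,
       f.readtim  / greatest(f.phyrds,1)  "READTIM/PHYRDS",
       f.writetim / greatest(f.phywrts,1) "WRITETIM/PHYWRTS"
from v$filestat f, v$datafile d
where f.file# = d.file#
order by 5 desc;
```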
For the money, I would suggest RAID0/1 or RAID1/0, that is, striped and mirrored. It provides nearly all of the dependability of RAID5 and
gives much better write performance. You will usually take at least a 20 percent write performance hit using RAID5. For read-only applications
RAID5 is a good choice, but in high-transaction/high-performance environments the write penalties may be too high.
Table 1 shows how Oracle suggests RAID should be used with Oracle database files.
RAID  Type of RAID                   Control File  Database File  Redo Log File  Archive Log File
----  -----------------------------  ------------  -------------  -------------  ----------------
0     Striping                       Avoid         OK             Avoid          Avoid
1     Shadowing                      Best          OK             Best           Best
1+0   Striping and Shadowing         OK            Best           Avoid          Avoid
3     Striping with static parity    OK            OK             Avoid          Avoid
5     Striping with rotating parity  OK            Best if        Avoid          Avoid
                                                   RAID0-1 not
                                                   available
Page: 1                                                           SYS

Operation
---------------
GROUP BY (HASH)
GROUP BY (SORT)
GROUP BY (HASH)
GROUP BY (HASH)
As you can see, the whoville database had no hashes going to disk at the time the report was run. We can also look at the cumulative sort data in the v$sysstat view.
Date: 12/09/05                                                Page:
Time: 03:36 PM              Sorts Report                     PERFSTAT
                            sd3p database

Type Sort              Number Sorts
-------------------- --------------
sorts (memory)           17,213,802
sorts (disk)                    230
sorts (rows)          3,268,041,228
Another key indicator that hashes are occurring is excessive IO to the temporary tablespace when there are few or no disk sorts.
The PGA_AGGREGATE_TARGET is the target total amount of space for all PGA memory areas. However, only 5 percent, or a maximum of 200 megabytes, can be assigned to any single process. The limit for PGA_AGGREGATE_TARGET is supposedly 4 gigabytes, although you can increase the setting above this point. The 200 megabyte limit is set by the _pga_max_size undocumented parameter; this parameter can be reset, but only under the guidance of Oracle Support. But what size should PGA_AGGREGATE_TARGET be set to? The AWRRPT report in 10g provides a sort histogram that can help with this decision.
PGA Aggr Target Histogram                 DB/Inst: OLS/ols  Snaps: 73-74
-> Optimal Executions are purely in-memory operations

    Low     High
Optimal  Optimal    Total Execs  Optimal Execs  1-Pass Execs  M-Pass Execs
------- -------- -------------- -------------- ------------- -------------
     2K       4K      1,283,085      1,283,085             0             0
    64K     128K          2,847          2,847             0             0
   128K     256K          1,611          1,611             0             0
   256K     512K          1,668          1,668             0             0
   512K    1024K         91,166         91,166             0             0
     1M       2M            690            690             0             0
     2M       4M            174            164            10             0
     4M       8M             18             12             6             0
-------------------------------------------------------------
In this case we are seeing 1-pass executions, indicating disk sorts are occurring, with the maximum size falling in the 4M to 8M range. For an 8M sort area the PGA_AGGREGATE_TARGET should be set at 320 megabytes (sorts get 0.5*(0.05*PGA_AGGREGATE_TARGET)). For this system the setting was 160 megabytes, so 4 megabytes was the maximum sort size; as you can see, we were seeing 1-pass sorts in the 2-4M range as well, even at 160M.
By monitoring the real-time (live) hashes and sorts and looking at the sort histograms from the AWRRPT reports, you can get a very good idea of the needed PGA_AGGREGATE_TARGET setting. If you need sort areas larger than 200 megabytes, you may need approval from Oracle Support, through the iTAR process, to set the _pga_max_size parameter to greater than 200 megabytes.
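Between AWR snapshots, V$PGASTAT gives a live view of the same pressure; a sketch (the 320 megabyte figure repeats the arithmetic above: 8 MB / (0.5 * 0.05) = 320 MB):

```sql
-- A non-zero "over allocation count" means the target was undersized
select name, value, unit
from v$pgastat
where name in ('aggregate PGA target parameter',
               'total PGA allocated',
               'over allocation count',
               'cache hit percentage');
```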
col c1 heading 'Average Waits for|Full Scan Read I/O' format 9999.999
col c2 heading 'Average Waits for|Index Read I/O' format 9999.999
col c3 heading 'Percent of|I/O Waits|for Full Scans' format 9.99
col c4 heading 'Percent of|I/O Waits|for Index Scans' format 9.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999

                                      Percent of      Percent of      Starting Value
Average Waits for  Average Waits for  I/O Waits       I/O Waits       for optimizer
Full Scan Read I/O Index Read I/O     for Full Scans  for Index Scans index cost adj
------------------ ------------------ --------------- --------------- --------------
             1.473               .289             .02             .98             20
As you can see, the suggested starting value for optimizer_index_cost_adj may be too high because 98 percent of data waits are on index (sequential) block access. How can we "weight" this starting value for optimizer_index_cost_adj to reflect the reality that this system has only 2 percent waits on full-table scan reads (a typical OLTP system with few full-table scans)? As a practical matter, we never want an automated value for optimizer_index_cost_adj to be less than 1, nor more than 100.
Another relevant parameter is OPTIMIZER_INDEX_CACHING.
This initialization parameter represents a percentage value, ranging between 0 and 99. The default value of 0 tells the CBO that 0 percent of database blocks accessed using indexed access can be expected to be found in the buffer cache of the Oracle SGA. This implies that all index accesses will require a physical read from the I/O subsystem for every logical read from the buffer cache, also known as a 0 percent hit ratio on the buffer cache. This parameter applies only to the CBO's calculations of accesses for blocks in an index, not for the data blocks of the table itself.
The query produces several parse metrics aggregated by program name. The parses column indicates the total hard parse count, parses_per_session is the average number of parses for all sessions running the program, and parses_per_hour is the average number of parses per hour for all sessions running the program. Search for high numbers in the parses_per_hour column. The term "high" is relative: for OLTP programs, numbers below 10 are reasonable, while for batch programs higher values are acceptable. Any programs with values higher than
10 should be investigated further.
For programs that are suspect, query the library cache to identify the SQL statements being executed using the following query. Run this
query as many times as are required to get a reasonable sample.
SELECT /*+ RULE */ t.sql_text
FROM v$sql t, v$session s
WHERE s.sql_address = t.address
AND s.sql_hash_value = t.hash_value
AND s.sid = &SID;
spool off
Check for statements with a lot of executions. It is bad to have the PARSE_CALLS value in the above statement close to the EXECUTIONS value. The previous query will fire only for DML statements (to check on other types of statements, use the appropriate command type number). Also ignore recursive calls (dictionary access), as they are internal to Oracle.
--Identifying unnecessary parse calls at session level
spool unnecessary_parse_calls_sess_level.txt
select b.sid, substr(c.username,1,12) username,
substr(c.program,1,15) program, substr(a.name,1,20) name, b.value
from v$sesstat b, v$statname a , v$session c
where a.name in ('parse count (hard)', 'execute count')
and b.statistic# = a.statistic#
and b.sid = c.sid
and c.username not in ('SYS','SYSTEM')
order by sid;
spool off
Identify the sessions involved with a lot of re-parsing (VALUE column). Query these sessions from V$SESSION and then locate the program
that is being executed, resulting in so much parsing.
select a.parse_calls, a.executions, substr(a.sql_text, 1, 100)
from
v$sqlarea a, v$session b
where b.schema# = a.parsing_schema_id
and b.sid = &sid
order by 1 desc;
As stated earlier, excessive parsing will result in higher than optimal CPU consumption.
However, the greater impact is likely to be contention for resources in the shared pool. If many small statements are hard parsed, shared
pool fragmentation is likely to result. As the shared pool becomes more fragmented, the amount of time required to complete a hard parse
increases. As the process of executing many unique statements continues, resource contention worsens. The critical resources will likely be
memory in the library cache and the various latches associated with the shared pool. There are several straightforward methods to detect
contention. The following query shows a list of events on which sessions are waiting to complete before continuing. Since v$session_wait
contains one row for each session, the query will return the total number of sessions waiting for each event. The view contains real-time
data so it should be run repeatedly to detect possible problems.
SELECT /*+ RULE */ SUBSTR(event,1,30) event, COUNT(*)
FROM v$session_wait
WHERE wait_time = 0
GROUP BY SUBSTR (event,1,30), state;
If the latch free event appears continuously, then there is latch resource contention. The following query can be used to determine which
latches have contention. Since v$latchholder contains one row for each session, the query will return the total number of sessions waiting
for each latch. The view contains real-time data so it should be run repeatedly.
SELECT /*+ RULE */ name, COUNT(*)
FROM v$latchholder
GROUP BY name;
If library cache or shared pool latches appear continuously with any frequency, then there is contention.
Latch Contention Analysis
When an Oracle session needs to place a new SQL statement in the shared pool, it has to acquire a latch, or internal lock. Under some
circumstances, contention for these latches can result in poor performance. This does not happen frequently but it is worth checking. Set
the db_block_lru_latches to a higher number if you are experiencing a high number of misses or sleeps.
spool latch_content_analysis.txt
clear breaks
clear computes
clear columns
column name heading "Latch Type" format a25
column pct_miss heading "Misses/Gets (%)" format 999.99999
column pct_immed heading "Immediate Misses/Gets (%)" format 999.99999
ttitle 'Latch Contention Analysis Report' skip
select n.name, misses*100/(gets+1) pct_miss,
immediate_misses*100/(immediate_gets+1) pct_immed
from v$latchname n,v$latch l
where n.latch# = l.latch#
and (n.name like '%cache buffer%' or n.name like '%protect%');
spool off
It is straightforward to verify that an application is using bind variables by using the Oracle trace facility and tkprof, the application profiler. Tkprof produces a list of all SQL statements executed, along with their execution plans and some performance statistics. These metrics are aggregated for each unique SQL statement. Verify that excess parsing is not occurring. Below is an example of a query that was parsed once for each execution. Notice that in the count column, the number of parses is equal to the number of executions. The Parse row indicates the number of hard parses that occurred for the statement. In the ideal case, the statement would be parsed once and executed many times.
call     count    cpu  elapsed    disk   query  current    rows
------- ------ ------ -------- ------- ------- -------- -------
Parse       27   0.02     0.00       0       0        0       0
Execute     27   0.00     0.00       0       0        0       0
Fetch      108   0.03     0.00       0     189        0      81
------- ------ ------ -------- ------- ------- -------- -------
total      162   0.05     0.00       2     189        0      81
Once the application has been corrected, the size of the shared pool should be reevaluated to determine if it could be reduced to its original
size. If shared pool flushes were employed as a temporary remedy, try to reduce the number of flushes to perhaps once per day. Excessive
shared pool flushes will also result in performance degradation.
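The flush itself is a single command; note it is a stopgap only, since it empties the cached cursors and forces everything to be re-parsed:

```sql
alter system flush shared_pool;
```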
col ... heading "Date"
col ... heading 'Number|of|Switches'
col ... heading "Redo Size"
col ... heading "Redo Log File Size (Mb)"
spool recent_full_table_scans.txt
-- Recent full table scan
-- Should be run as SYS user
set verify off
col object_name form a30
col owner form a15
PROMPT Column flag in x$bh table is set to value 0x80000, when
PROMPT block was read by a sequential scan.
SELECT o.object_name,o.object_type,o.owner, count(*)
FROM dba_objects o,x$bh x
WHERE x.obj=o.object_id
AND o.object_type='TABLE'
AND standard.bitand(x.flag,524288)>0
AND o.owner<>'SYS'
group by o.object_name,o.object_type,o.owner
having count(*) > 2
order by 4 desc;
spool off
spool unused_indexes.txt
-- Do these tables contain indexes ??
-- This query creates a mini "unused indexes" report that you can use to ensure that
-- any large tables that are being scanned on your system have the proper indexing scheme.
SELECT DISTINCT substr(a.object_owner,1,10) table_owner,
substr(a.object_name,1,15) table_name,
b.bytes / 1024 size_kb,
d.index_name
FROM sys.v_$sql_plan a, sys.dba_segments b, sys.dba_indexes d
WHERE a.object_owner (+) = b.owner
AND a.object_name (+) = b.segment_name
AND b.segment_type IN ('TABLE', 'TABLE PARTITION')
AND a.operation LIKE '%TABLE%'
AND a.options = 'FULL'
AND b.bytes / 1024 > 1024
AND b.segment_name = d.table_name
AND b.owner = d.table_owner
AND b.owner != 'SYS'
ORDER BY 1, 2;
spool off
spool physical_IO.txt
--How much physical I/O, etc., a large table scan causes on a system
--It displays I/O and some wait metrics that can give a DBA more insight into what Oracle is doing behind the scenes to access the object.
--Solution: Create indexes, force use with hints
SELECT DISTINCT substr(a.object_owner,1,8) table_owner,
substr(a.object_name,1,15) table_name,
b.bytes / 1024 size_kb,
substr(c.tablespace_name,1,10) Tablespace,
substr(c.statistic_name,1,27) Statistic_Name ,
substr(c.value,1,5) Value
FROM sys.v_$sql_plan a,
sys.dba_segments b,
sys.v_$segment_statistics c
WHERE a.object_owner (+) = b.owner
AND
a.object_name (+) = b.segment_name
AND
b.segment_type IN ('TABLE', 'TABLE PARTITION')
AND
a.operation LIKE '%TABLE%'
AND
a.options = 'FULL'
AND
b.bytes / 1024 > 1024
AND
b.owner = c.owner
AND
b.owner != 'SYS'
AND
b.segment_name = c.object_name
ORDER BY 1, 2;
spool off
Solution
Create indexes, force use with hints
Solution
Check whether this access pattern is acceptable; if the tables must be accessed this way, pin those tables and indexes in the keep pool.
Example: alter table/index ... storage (buffer_pool keep);
Solution
Adjust the parameters OPTIMIZER_INDEX_COST_ADJ=15 and OPTIMIZER_INDEX_CACHING=85 to match the percentage of index blocks in the data buffer cache.
1- Analyze the index:
analyze index xxxxxxxxx compute statistics;
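The parameter adjustment itself is a sketch like this (the values 15 and 85 come from the text above; scope=both assumes an spfile, and both parameters can also be set per session for testing):

```sql
alter system set optimizer_index_cost_adj = 15 scope=both;
alter system set optimizer_index_caching  = 85 scope=both;
```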
2- Run the following query to see the BLEVEL of the indexes and whether you need to rebuild them. If the BLEVEL is higher than 3, you should rebuild the index.
spool Unbalanced_Indexes.txt
--If the blevel is higher than 3, you should rebuild it
select substr(table_name,1,15) "Table Name",
substr(index_name,1,20) "Index Name", blevel,
decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL', null,'?????????','***BLEVEL HIGH****') OK
from dba_indexes
where owner=UPPER('&OWNER')
order by 1,2;
spool off
3- Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table:
analyze index xxxxxxxxx validate structure;
4- The INDEX_STATS view holds information for one index at a time; it will never contain more than one row. Therefore you need to query this view before you analyze the next index:
select name "INDEXNAME", HEIGHT,
DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
(LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
from index_stats;
The PCT_DELETED column shows what percent of leaf entries (index entries) have been deleted and remain unfilled. The more deleted
entries exist on an index, the more unbalanced the index becomes. If the PCT_DELETED is 20% or higher, the index is candidate for
rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high
PCT_DELETED without rebuild might cause excessive redo allocation on some systems.
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For example, if a table has
10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x 100 / 10000 = 10. This shows a good distribution of
values. If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98.
This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a
rebuild but good candidates for bitmapped indexes.
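Putting steps 3 and 4 together, a minimal check of a single index might look like the sketch below. EMP_PK is a hypothetical index name; substitute your own.

```sql
-- EMP_PK is an assumed index name; substitute your own
analyze index emp_pk validate structure;

-- INDEX_STATS holds only the row for the index just analyzed
select name, height,
       del_lf_rows*100/decode(lf_rows,0,1,lf_rows) pct_deleted
from index_stats;

-- if PCT_DELETED comes back at 20% or higher, rebuild:
alter index emp_pk rebuild online;
```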
The following PL/SQL code will analyze your indexes and then create a report of the indexes to rebuild. Run it as the owner of the indexes.
declare
pMaxHeight integer := 3;
pMaxLeafsDeleted integer := 20;
cursor csrIndexStats is
select name, height, lf_rows as leafRows,
del_lf_rows as leafRowsDeleted
from index_stats;
vIndexStats csrIndexStats%rowtype;
cursor csrGlobalIndexes is
select index_name, tablespace_name
from user_indexes
where partitioned = 'NO';
cursor csrLocalIndexes is
select index_name, partition_name, tablespace_name
from user_ind_partitions
where status = 'USABLE';
vCount integer := 0;
begin
dbms_output.enable(100000);
/* Working with Global/Normal indexes */
for vIndexRec in csrGlobalIndexes
loop
execute immediate 'analyze index ' || vIndexRec.index_name ||' validate structure';
open csrIndexStats;
fetch csrIndexStats into vIndexStats;
if csrIndexStats%FOUND then
if (vIndexStats.height > pMaxHeight)
or (vIndexStats.leafRows > 0
and vIndexStats.leafRowsDeleted > 0
and (vIndexStats.leafRowsDeleted * 100 / vIndexStats.leafRows) > pMaxLeafsDeleted) then
vCount := vCount + 1;
dbms_output.put_line('Rebuilding index ' || vIndexRec.index_name || '...');
execute immediate 'alter index ' || vIndexRec.index_name ||
' rebuild online parallel nologging compute statistics' ||
' tablespace ' || vIndexRec.tablespace_name;
end if;
end if;
close csrIndexStats;
end loop;
dbms_output.put_line('Global indexes rebuilt: ' || to_char(vCount));
vCount := 0;
/* Local indexes */
for vIndexRec in csrLocalIndexes
loop
execute immediate 'analyze index ' || vIndexRec.index_name ||
' partition (' || vIndexRec.partition_name ||
') validate structure';
/* The remainder of this loop was truncated in the source;
   reconstructed here to mirror the global-index logic above. */
open csrIndexStats;
fetch csrIndexStats into vIndexStats;
if csrIndexStats%FOUND then
if (vIndexStats.height > pMaxHeight)
or (vIndexStats.leafRows > 0
and vIndexStats.leafRowsDeleted > 0
and (vIndexStats.leafRowsDeleted * 100 / vIndexStats.leafRows) > pMaxLeafsDeleted) then
vCount := vCount + 1;
dbms_output.put_line('Rebuilding index ' || vIndexRec.index_name ||
' partition ' || vIndexRec.partition_name || '...');
execute immediate 'alter index ' || vIndexRec.index_name ||
' rebuild partition ' || vIndexRec.partition_name ||
' online nologging compute statistics' ||
' tablespace ' || vIndexRec.tablespace_name;
end if;
end if;
close csrIndexStats;
end loop;
dbms_output.put_line('Local index partitions rebuilt: ' || to_char(vCount));
end;
/
Fragmentation on DB Objects
Another performance problem may be database fragmentation. Run the following query to detect fragmented segments:
REM Segments that are fragmented and level of fragmentation
REM It counts number of extents
set heading on
set termout on
set pagesize 66
set line 132
select substr(de.owner,1,8) "Owner",
substr(de.segment_type,1,8) "Seg_Type",
substr(de.segment_name,1,25) "Segment_Name",
substr(de.tablespace_name,1,15) "Tblspace_Name",
count(*) "Frag NEED",
substr(df.name,1,40) "DataFile_Name"
from sys.dba_extents de, v$datafile df
where de.owner not in ('SYS','SYSTEM')
and de.file_id = df.file#
and de.segment_type = 'TABLE'
group by de.owner, de.segment_name, de.segment_type, de.tablespace_name, df.name
having count(*) > 4
order by count(*) asc;
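Once a fragmented table has been identified, one way to consolidate its extents is sketched below. The table name is an assumption, and SHRINK SPACE requires an ASSM tablespace (10g and later):

```sql
-- APP.BIG_TAB is an assumed name; SHRINK SPACE requires ASSM
alter table app.big_tab enable row movement;
alter table app.big_tab shrink space;

-- alternative for non-ASSM tablespaces: rebuild the table in place
-- alter table app.big_tab move;
-- (MOVE leaves the table's indexes UNUSABLE, so rebuild them afterwards)
```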
The V$BUFFER_POOL_STATISTICS view maintains the following counters, which are updated every time a data block is accessed, either from the block buffers or from the disk:
NAME             Name of the buffer pool
PHYSICAL_READS   Number of physical reads
DB_BLOCK_GETS    Number of reads for INSERT, UPDATE and DELETE
CONSISTENT_GETS  Number of reads for SELECT
DB_BLOCK_GETS + CONSISTENT_GETS = Total number of reads
Based on above statistics we can calculate the percentage of data blocks being accessed from the memory to that of the disk (block buffer hit ratio). The following
SQL statement will return the block buffer hit ratio:
SELECT NAME, 100 - round((PHYSICAL_READS / (DB_BLOCK_GETS + CONSISTENT_GETS))*100, 2) HitRatio
FROM V$BUFFER_POOL_STATISTICS;
NAME                   HITRATIO
-------------------- ----------
DEFAULT                   96.82
Before measuring the database buffer hit ratio, it is very important to check that the database is running in a steady state with normal workload and no unusual activity
has taken place. For example, when you run a SQL statement just after database startup, no data blocks have been cached in the block buffers. At this point, Oracle
reads the data blocks from the disk and will cache the blocks in the memory. If you run the same SQL statement again, then most likely the data blocks will still be
present in the cache, and Oracle will not have to perform disk IO. If you run the same SQL statement multiple times you will get a higher buffer hit ratio. On the other
hand, if you either run SQL statements that rarely query the same data, or run a select on a very large table, the data block may not be in the buffer cache and Oracle
will have to perform disk IO, thereby lowering the buffer hit ratio.
A hit ratio of 95% or greater is considered to be a good hit ratio for OLTP systems. The hit ratio for DSS (Decision Support System) may vary depending on the
database load. A lower hit ratio means Oracle is performing more disk IO on the server. In such a situation, you can increase the size of the database block buffers to
improve database performance. You may have to increase the physical memory on the server if the server starts swapping after increasing the block buffers.
Step 2: Identify frequently used and rarely used data blocks. Cache frequently used blocks and discard rarely used blocks.
If you have a low buffer hit ratio and you cannot increase the size of the database block buffers, you can still gain some performance advantage by tuning the block
buffers and efficiently caching the data blocks that will provide maximum benefit. Ideally, we should cache data blocks that are either frequently used in SQL
statements, or data blocks used by performance-sensitive SQL statements (a SQL statement whose performance is critical to overall system performance). An ad-hoc
query that scans a large table can significantly degrade overall database performance, because it may flush frequently used data blocks out of the
buffer cache to store data blocks from the large table. During peak time, ad-hoc queries that select data from large tables or from rarely used tables
should be avoided. If we cannot avoid such queries, we can limit their impact on the buffer cache by using the RECYCLE buffer pool.
A DBA can create multiple buffer pools in the SGA to store data blocks efficiently. For example, we can use RECYCLE pool to cache data blocks that are rarely used
in the application. Typically, this will be a small area in the SGA to store data blocks for current SQL statement / transaction that we do not intend to hold in the memory
after the transaction is completed. Similarly, we can use KEEP pool to cache data blocks that are frequently used by the application. Typically, this will be big enough to
store data blocks that we want to always keep in memory. By storing data blocks in KEEP and RECYCLE pools you can store frequently used data blocks separately
from the rarely used data blocks, and control which data blocks are flushed from the buffer cache. Using RECYCLE pool, we can also prevent a large table scan from
flushing frequently used data blocks. You can create the RECYCLE and KEEP pools by specifying the following init.ora parameters:
DB_KEEP_CACHE_SIZE = <size of KEEP pool>
DB_RECYCLE_CACHE_SIZE = < size of RECYCLE pool>
When you use the above parameters, the total memory allocated to the block buffers is the sum of DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and
DB_CACHE_SIZE.
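As a sketch, an init.ora fragment configuring all three pools might look like this; the sizes are purely illustrative, not recommendations:

```sql
DB_CACHE_SIZE         = 800M   -- DEFAULT pool
DB_KEEP_CACHE_SIZE    = 160M   -- KEEP pool
DB_RECYCLE_CACHE_SIZE = 40M    -- RECYCLE pool
-- total block-buffer memory = 800M + 160M + 40M = 1000M
```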
Step 3: Assign tables to the KEEP / RECYCLE pool. Identify the buffer hit ratio for the KEEP, RECYCLE, and DEFAULT pools. Adjust the initialization parameters
for optimum performance.
By default, data blocks are cached in the DEFAULT pool. The DBA must configure the table to use the KEEP or the RECYCLE pool by specifying BUFFER_POOL
keyword in the CREATE TABLE or the ALTER TABLE statement. For example, you can assign a table to the recycle pool by using the following ALTER TABLE SQL
statement.
ALTER TABLE <TABLE NAME> STORAGE (BUFFER_POOL RECYCLE)
The DBA can enlist the help of application designers in identifying tables that should use the KEEP or RECYCLE pool. You can also query X$BH to examine the current block
buffer usage by database objects (you must log in as SYS to query X$BH).
spool tables_to_RECYCLE_Pool.txt
--The following query returns a list of tables that are rarely used and can be assigned to the RECYCLE pool.
Col owner       format a14
Col object_name format a36
Col object_type format a15
SELECT o.owner, object_name, object_type, COUNT(1) buffers
FROM SYS.x$bh, dba_objects o
WHERE (tch = 1 OR (tch = 0 AND lru_flag < 8))
AND obj = o.object_id
AND o.owner = upper('&OWNER')
GROUP BY o.owner, object_name, object_type
ORDER BY buffers;
spool off
spool tables_to_KEEP_Pool.txt
--The following query will return a list of tables that are frequently
-- used by SQL statements and can be assigned to the KEEP pool.
Col owner       format a14
Col object_name format a36
Col object_type format a15
SELECT o.owner, object_name, object_type, COUNT(1) buffers
FROM SYS.x$bh, dba_objects o
WHERE tch > 10
AND lru_flag = 8
AND obj = o.object_id
AND o.owner = upper('&OWNER')
GROUP BY o.owner, object_name, object_type
ORDER BY buffers;
spool off
Once you have set up the database to use the KEEP and RECYCLE pools, you can monitor the buffer hit ratio by querying V$BUFFER_POOL_STATISTICS and
V$DB_CACHE_ADVICE to adjust the buffer pool initialization parameters.
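A minimal sketch of that monitoring: the first query reports the hit ratio for each configured pool, and the second shows the predicted effect of resizing the DEFAULT pool.

```sql
-- hit ratio for each configured pool (DEFAULT, KEEP, RECYCLE)
select name,
       100 - round((physical_reads/(db_block_gets+consistent_gets))*100, 2) hit_ratio
from v$buffer_pool_statistics;

-- estimated physical-read factor at candidate DEFAULT pool sizes
select size_for_estimate, size_factor, estd_physical_read_factor
from v$db_cache_advice
where name = 'DEFAULT';
```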
-- Session I/O by user; the head of the select list was truncated in the
-- source and is reconstructed here.
select sesion.username,
sess_io.block_gets,
sess_io.consistent_gets,
sess_io.physical_reads,
sess_io.block_changes,
sess_io.consistent_changes
from v$sess_io sess_io, v$session sesion
where sesion.sid = sess_io.sid
and sesion.username is not null;
-- If the query shown earlier against the V$SQLAREA view did not show your full SQL text
-- because it was larger than 1000 characters, query this V$SQLTEXT view
-- to extract the full SQL. The text is stored in 64-character pieces, one per row,
-- which must be ordered by the PIECE column.
-- SQL to show the full SQL executing for active sessions
select sesion.sid,
sql_text
from v$sqltext sqltext, v$session sesion
where sesion.sql_hash_value = sqltext.hash_value
and sesion.sql_address
= sqltext.address
and sesion.username is not null
order by sqltext.piece;
spool off
set serveroutput on
DECLARE
tsn    VARCHAR2(40);
tss    NUMBER(10);
aex    BOOLEAN;
unr    NUMBER(5);
rgt    BOOLEAN;
retval BOOLEAN;
BEGIN
retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
dbms_output.put_line('UNDO Tablespace is: ' || tsn);
dbms_output.put_line('UNDO Tablespace size is: ' || TO_CHAR(tss));
IF aex THEN
dbms_output.put_line('Undo Autoextend is set to: TRUE');
ELSE
dbms_output.put_line('Undo Autoextend is set to: FALSE');
END IF;
dbms_output.put_line('Undo Retention is: ' || TO_CHAR(unr));
IF rgt THEN
dbms_output.put_line('Undo Guarantee is set to: TRUE');
ELSE
dbms_output.put_line('Undo Guarantee is set to: FALSE');
END IF;
END;
/
begin
dbms_output.put_line(chr(10)||chr(10)||chr(10)||chr(10) || 'To optimize UNDO you have two choices :');
dbms_output.put_line('====================================================' || chr(10));
for rec1 in get_undo_stat loop
dbms_output.put_line('A) Adjust UNDO tablespace size according to UNDO_RETENTION :' || chr(10));
dbms_output.put_line(rpad('ACTUAL UNDO SIZE ',61,'.')|| ' : ' || TO_CHAR(rec1.c1,'999999') || ' MEGS');
dbms_output.put_line(rpad('OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (' ||
ltrim(TO_CHAR(rec1.c2/60,'999999'))
|| ' MINUTES) ',61,'.') || ' : '
|| TO_CHAR(rec1.c3,'999999') || ' MEGS');
dbms_output.put_line(chr(10));
dbms_output.put_line('B) Adjust UNDO_RETENTION according to UNDO tablespace size :' || chr(10));
dbms_output.put_line(rpad('ACTUAL UNDO RETENTION ',61,'.') || ' : ' || TO_CHAR(rec1.c2/60,'999999')
|| ' MINUTES');
dbms_output.put_line(rpad('OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (' || ltrim(TO_CHAR(rec1.c1,'999999'))
|| ' MEGS) ',61,'.') || ' : ' || TO_CHAR(rec1.c4/60,'999999')
|| ' MINUTES');
end loop;
dbms_output.put_line(chr(10)||chr(10));
end;
/
select 'Number of "ORA-01555 (Snapshot too old)" encountered since the last startup of the instance : ' || sum(ssolderrcnt)
from v$undostat;
spool off
This script is great for finding non-reusable SQL statements that contain embedded literals. As you may know, non-reusable SQL
statements place a heavy burden on the Oracle library cache. When CURSOR_SHARING=FORCE, Oracle8i rewrites the SQL, replacing the literal values
with a system-generated bind variable. This is a great silver bullet for systems where the literal SQL cannot be changed.
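As a quick illustration of the rewrite (a sketch; EMP is an assumed table):

```sql
alter session set cursor_sharing = FORCE;

-- these two statements now share a single cursor,
select ename from emp where empno = 7369;
select ename from emp where empno = 7900;

-- because both are rewritten internally to something like:
--   SELECT ENAME FROM EMP WHERE EMPNO = :"SYS_B_0"
```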
If you're running several N-tiered applications with multiple webservers, you may find it useful to monitor open cursors by username and
machine:
--total cursors open, by username & machine
select sum(a.value) total_cur, avg(a.value) avg_cur, max(a.value) max_cur, s.username, s.machine
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic# and s.sid=a.sid
and b.name = 'opened cursors current'
group by s.username, s.machine;
The best advice for tuning OPEN_CURSORS is not to tune it. Set it high enough that you won't have to worry about it. If your sessions are
running close to the limit you've set for OPEN_CURSORS, raise it. If you set OPEN_CURSORS to a high value, this doesn't mean that every
session will have that number of cursors open. Cursors are opened on an as-needed basis.
To see if you've set OPEN_CURSORS high enough, monitor v$sesstat for the maximum opened cursors current. If your sessions are
running close to the limit, up the value of OPEN_CURSORS.
select max(a.value) as highest_open_cur, p.value as max_open_cur
from v$sesstat a, v$statname b, v$parameter p
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and p.name= 'open_cursors'
group by p.value;
HIGHEST_OPEN_CUR MAX_OPEN_CUR
---------------- ------------
            1953         2500
You can also see directly what is in the session cursor cache by querying v$open_cursor. v$open_cursor lists session cached cursors by SID,
and includes the first few characters of the statement and the sql_id, so you can actually tell what the cursors are for.
select c.user_name, c.sid, sql.sql_text
from v$open_cursor c, v$sql sql
where c.sql_id=sql.sql_id
and c.sid=&sid;
Tuning SESSION_CACHED_CURSORS
If you choose to use SESSION_CACHED_CURSORS to help out an application that is continually closing and reopening cursors, you can
monitor its effectiveness via two more statistics in v$sesstat. The statistic "session cursor cache hits" reflects the number of times that a
statement the session sent for parsing was found in the session cursor cache, meaning it didn't have to be reparsed and your session didn't
have to search through the library cache for it. You can compare this to the statistic "parse count (total)"; subtract "session cursor cache
hits" from "parse count (total)" to see the number of parses that actually occurred.
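The same comparison can be made instance-wide from v$sysstat (per-session figures require joining v$sesstat to v$statname); a sketch:

```sql
select h.value           cache_hits,
       p.value           total_parses,
       p.value - h.value real_parses
from (select value from v$sysstat where name = 'session cursor cache hits') h,
     (select value from v$sysstat where name = 'parse count (total)') p;
```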
/*
This script queries the SQL area ordered by the average cost of the statement.
The "Avg Cost" column is basically the number of buffer gets per row processed.
Where no rows are processed, all buffer gets are reported for the statement.
The script also lists any potential candidates for conversion to stored procedures
by running a case insensitive query.
*/
set pagesize 66 linesize 132
set echo off
column executions     heading "Execs"         format 99999999
column rows_processed heading "Rows Procd"    format 99999999
column loads          heading "Loads"         format 999999.99
column buffer_gets    heading "Buffer Gets"   format 99999999
column disk_reads     heading "Disk Reads"    format 99999999
column elapsed_time   heading "Elapsed Time"  format 99999999
column cpu_time       heading "CPU Time"      format 99999999
column sql_text       heading "SQL Text"      format a120 wrap
column avg_cost       heading "Avg Cost"      format 99999999
column gets_per_exec  heading "Gets Per Exec" format 99999999
column reads_per_exec heading "Read Per Exec" format 99999999
column reads_per_exec format 99999999
column rows_per_exec  format 99999999
break on report
compute sum of rows_processed on report
compute sum of executions on report
compute avg of avg_cost on report
compute avg of gets_per_exec on report
compute avg of reads_per_exec on report
compute avg of rows_per_exec on report
PROMPT
PROMPT Top 10 most expensive SQL by Elapsed Time...
PROMPT
select rownum as rank, a.*
from ( select elapsed_Time, executions, buffer_gets, disk_reads, cpu_time, hash_value, sql_text
from v$sqlarea
where elapsed_time > 20000
order by elapsed_time desc) a
where rownum < 11;
PROMPT
PROMPT Top 10 most expensive SQL by CPU Time...
PROMPT
select rownum as rank, a.*
from ( select elapsed_Time, executions, buffer_gets, disk_reads, cpu_time, hash_value, sql_text
from v$sqlarea
where cpu_time > 20000
order by cpu_time desc) a
where rownum < 11;
PROMPT
PROMPT Top 10 most expensive SQL by Buffer Gets by Executions...
PROMPT
select rownum as rank, a.*
from (select buffer_gets, executions,
buffer_gets/ decode(executions,0,1, executions) gets_per_exec,
hash_value, sql_text
from v$sqlarea
where buffer_gets > 50000
order by buffer_gets desc) a
where rownum < 11;
PROMPT
PROMPT Top 10 most expensive SQL by Physical Reads by Executions...
PROMPT
select rownum as rank, a.*
from (select disk_reads, executions,
disk_reads / decode(executions,0,1, executions) reads_per_exec,
hash_value, sql_text
from v$sqlarea
where disk_reads > 10000
order by disk_reads desc) a
where rownum < 11;
PROMPT
PROMPT Top 10 most expensive SQL by Rows Processed by Executions...
PROMPT
select rownum as rank, a.*
from (select rows_processed, executions,
rows_processed / decode(executions,0,1, executions) rows_per_exec,
hash_value, sql_text
from v$sqlarea
where rows_processed > 10000
order by rows_processed desc) a
where rownum < 11;
PROMPT
PROMPT Top 10 most expensive SQL by Buffer Gets vs Rows Processed...
PROMPT
select rownum as rank, a.*
from ( select buffer_gets, lpad(rows_processed ||
decode(users_opening + users_executing, 0, ' ','*'),20) "rows_processed",
executions, loads,
(decode(rows_processed,0,1,1)) * buffer_gets/ decode(rows_processed,0,1,rows_processed) avg_cost,
sql_text
from v$sqlarea
where decode(rows_processed,0,1,1) * buffer_gets/ decode(rows_processed,0,1,rows_processed) > 10000
order by 5 desc) a
where rownum < 11;
If you want to see the full text of the sql statement, you can run the following query:
select v2.sql_text, v2.address
from v$sqlarea v1, v$sqltext v2
where v1.address=v2.address
and v1.sql_text like 'SELECT COUNT(*) FROM DEPT%'
order by v2.address, v2.piece;
The next query returns the SQL text from a hash value that must be determined from each v$sqlarea row in question.
select sql_text
from v$sqltext
where hash_value = &hash_value
order by piece;
If you get no rows, that means that all your indexes have been used.
Next, we'll determine the top 10 tables that have incurred the most physical I/O operations.
select table_name,total_phys_io
from (select owner||'.'||object_name as table_name, sum(value) as total_phys_io
from v$segment_statistics
where owner='FRAUDGUARD'
and object_type='TABLE'
and statistic_name in ('physical reads','physical reads direct','physical writes','physical writes direct')
group by owner||'.'||object_name
order by total_phys_io desc)
where rownum <=10;
TABLE_NAME                     TOTAL_PHYS_IO
------------------------------ -------------
FG83_DEV.FLOWDOCUMENT_ARCH           1011844
FG83_DEV.FLOWDOCUMENT                 697512
FG83_DEV.FLOWFIELD_ARCH                21423
FG83_DEV.USERACTIVITYLOG_ARCH          13987
FG83_DEV.FLOWDATA                      13607
FG83_DEV.USERACTIVITYLOG               12334
FG83_DEV.SIGNATURES                     8992
FG83_DEV.PROCESSLOG                     4764
FG83_DEV.EXCEPTIONITEM_ARCH              399
FG83_DEV.USERLEVELPERMISSION             276
The query above eliminated any data dictionary tables from the results. It should now be clear exactly which table experiences the
most physical I/O operations. Appropriate actions can now be taken to isolate this potential hotspot from other highly active database
segments.
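One such action, sketched below, is to move the hottest segment into its own tablespace, away from other highly active segments. The tablespace and index names here are assumptions:

```sql
-- HOT_TS / HOT_TS_IDX are assumed tablespace names
alter table fg83_dev.flowdocument_arch move tablespace hot_ts;

-- MOVE leaves the table's indexes UNUSABLE, so rebuild them, e.g.:
alter index fg83_dev.flowdocument_arch_pk rebuild tablespace hot_ts_idx;
```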
If you've ever dealt with wait events, you may have seen the 'buffer busy waits' event. This event occurs when one session is waiting on
another session to read a buffer into the cache, or when some other session is changing the buffer. This event can often be seen when querying
V$SYSTEM_EVENT.
If I query my database, I have approximately 13 million waits on this specific event.
select event,total_waits from v$system_event
where event='buffer busy waits';
EVENT                                    TOTAL_WAITS
---------------------------------------- -----------
buffer busy waits                           12976210
The big question is to determine which segments are contributing to this overall wait event. Querying V$SEGMENT_STATISTICS can help us
determine the answer.
select substr(segment_name,1,30) segment_name,
object_type,total_buff_busy_waits
from (select owner||'.'||object_name as segment_name,object_type, value as total_buff_busy_waits
from v$segment_statistics
where statistic_name in ('buffer busy waits')
order by total_buff_busy_waits desc)
where rownum <=10;
SEGMENT_NAME                   OBJECT_TYPE   TOTAL_BUFF_BUSY_WAITS
------------------------------ ------------- ---------------------
WEBMAP.SDE_BLK_1103            TABLE                      10522135
WEBMAP.SDE_BLK_804             TABLE                       1176185
SRTM.SDE_BLK_1101              TABLE                        651175
WEBMAP.SDE_BLK_804_UK          INDEX                        100242
SYS.DBMS_LOCK_ALLOCATED        TABLE                         64695
NED.SDE_BLK_1002               TABLE                         48582
WEBMAP.BTS_ROADS_MD            TABLE                         27068
WEBMAP.SDE_BLK_1103_UK         INDEX                         25707
ARCIMS.SDE_LOGFILE_DATA_IDX1   INDEX                         24618
NED.SDE_BLK_62                 TABLE                         14710
From the query above, we can see that one specific table contributed 10.5 million, or approximately 80%, of the total waits.
If you ever want to know why the access to a specific table (Example: EMP) is slow, one of the first actions would be to run:
select statistic_name, value
from v$segment_statistics
where owner='SCOTT' and object_name = 'EMP';
STATISTIC_NAME                          VALUE
---------------------------------- ----------
logical reads                           17653
buffer busy waits                        1744
db block changes                        16234
physical reads                           1110
physical writes                           516
physical reads direct                       0
physical writes direct                      0
global cache cr blocks served               0
global cache current blocks served          0
ITL waits                                   0
row lock waits                              6
From the above query we can see that EMP is forever being modified and rarely just being selected. And those modifications have problems
because of the high number of buffer busy waits (users trying to access the same block). Perhaps if the table had a higher PCTFREE the problem
would disappear. Or maybe this is a case for ASSM.
Please note that freelist contention can also be manifested as a buffer busy wait. This is because the block is already in the buffer but
cannot be accessed because another task has the segment header. The section below describes the process of identifying the block address associated
with a wait. As we discussed, Oracle does not keep an accumulator to track individual buffer busy waits. To see them, you must create a
script to detect them and then schedule the task to run frequently on your database server.
vi get_busy.ksh
#!/bin/ksh
# First, we must set the environment . . . .
export ORACLE_SID=proderp
export ORACLE_HOME=`cat /var/opt/oracle/oratab|grep \^$ORACLE_SID:|cut -f2 -d':'`
export PATH=$ORACLE_HOME/bin:$PATH
export SERVER_NAME=`uname -a|awk '{print $2}'`
typeset -u SERVER_NAME
# sample every 10 seconds
SAMPLE_TIME=10
while true
do
#*************************************************************
# Test to see if Oracle is accepting connections
#*************************************************************
$ORACLE_HOME/bin/sqlplus -s /<<! > /tmp/check_$ORACLE_SID.ora
select * from v\$database;
exit
!
#*************************************************************
# If not, exit immediately . . .
#*************************************************************
check_stat=`cat /tmp/check_$ORACLE_SID.ora|grep -i error|wc -l`;
oracle_num=`expr $check_stat`
if [ $oracle_num -gt 0 ]
then
exit 0
fi
rm -f /export/home/oracle/statspack/busy.lst
$ORACLE_HOME/bin/sqlplus -s perfstat/perfstat<<!> /tmp/busy.lst
set feedback off;
select sysdate, event, substr(tablespace_name,1,14), p2
from v\$session_wait a, dba_data_files b
where a.p1 = b.file_id;
!
var=`cat /tmp/busy.lst|wc -l`
echo $var
if [[ $var -gt 1 ]];
then
echo "**********************************************************************"
echo "There are waits"
cat /tmp/busy.lst|mailx -s "Prod block wait found" \
dpafumi@yahoo.com
echo "**********************************************************************"
exit
fi
sleep $SAMPLE_TIME
done
As we can see from this script, it probes the database for buffer busy waits every 10 seconds. When a buffer busy wait is found, it mails the
date, tablespace name, and block number to the DBA. Here is an example of a block alert e-mail:
SYSDATE   SUBSTR(TABLESP      BLOCK
--------- -------------- ----------
28-DEC-00 APPLSYSD            25654
Here we see that we have a block wait condition at block 25654 in the APPLSYSD tablespace. The procedure for locating this block is beyond
the scope of this tip, but complete directions are in Chapter 10 of Oracle High Performance Tuning with STATSPACK.
One of the most confounding problems with Oracle is the resolution of buffer busy wait events. Buffer busy waits are common in an I/O-bound Oracle system, as evidenced by any system with read (sequential/scattered) waits in the top-five waits in the Oracle STATSPACK
report, like this:
Top 5 Timed Events
                                                          % Total
Event                              Waits     Time (s)    Ela Time
--------------------------- ------------ ------------ -----------
db file sequential read            2,598        7,146       48.54
db file scattered read            25,519        3,246       22.04
library cache load lock              673        1,363        9.26
CPU time                           2,154          934        7.83
log file parallel write           19,157          837        5.68
The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with
fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the
buffer size won't help.
In order to look at system-wide wait events, we can query the v$system_event performance view. This view, shown below, provides the
name of the wait event, the total number of waits and timeouts, the total time waited, and the average wait time per event.
spool Wait_Events.txt
select substr(event,1,25) event, total_waits, total_timeouts, time_waited, average_wait
from v$system_event
where event like '%wait%'
order by 2 desc;
spool off
EVENT                       TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
--------------------------- ----------- -------------- ----------- ------------
buffer busy waits                636528           1557      549700   .863591232
write complete waits               1193              0       14799   12.4048617
free buffer waits                  1601              0         622   .388507183
If you want to see all the events, you can try with:
set pages 999
set lines 90
column c1 heading 'Event|Name'             format a30
column c2 heading 'Total|Waits'            format 999,999,999
column c3 heading 'Seconds|Waiting'        format 999,999
column c4 heading 'Total|Timeouts'         format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999
-- The select itself was lost from the source; this one is reconstructed
-- from the column headings above (TIME_WAITED and AVERAGE_WAIT are in
-- centiseconds, hence the division by 100):
select event            c1,
       total_waits      c2,
       time_waited/100  c3,
       total_timeouts   c4,
       average_wait/100 c5
from v$system_event
order by total_waits desc;
System-wide Wait Analysis for current wait events

Event                                 Total  Seconds        Total   Average
Name                                  Waits  Waiting     Timeouts      Wait
------------------------------ ------------ -------- ------------ ---------
db file sequential read                 812        7            0      .010
control file parallel write             645        3            0      .000
control file sequential read            378        4            0      .010
log file parallel write                 213        0          127      .000
db file scattered read                  111        2            0      .020
...
The type of buffer that causes the wait can be queried using the v$waitstat view. This view lists the waits per buffer type for buffer busy
waits, where COUNT is the sum of all waits for the class of block, and TIME is the sum of all wait times for that class:
select * from v$waitstat;
CLASS                   COUNT       TIME
------------------ ---------- ----------
data block            1961113    1870278
segment header          34535     159082
undo header            233632      86239
undo block               1886       1706
Buffer busy waits occur when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the
data block is locked. This buffer busy wait condition can happen for either of the following reasons:
The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.
Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.
Because buffer busy waits are due to contention between particular blocks, there's nothing you can do until you know which blocks are in
conflict and why the conflicts are occurring. Tuning therefore involves identifying and eliminating the cause of the block contention.
The v$session_wait performance view, shown below, can give some insight into what is being waited for and why the wait is occurring.
SQL> desc v$session_wait
Name                                      Null?    Type
----------------------------------------- -------- ------------
SID                                                NUMBER
SEQ#                                               NUMBER
EVENT                                              VARCHAR2(64)
P1TEXT                                             VARCHAR2(64)
P1                                                 NUMBER
P1RAW                                              RAW(4)
P2TEXT                                             VARCHAR2(64)
P2                                                 NUMBER
P2RAW                                              RAW(4)
P3TEXT                                             VARCHAR2(64)
P3                                                 NUMBER
P3RAW                                              RAW(4)
WAIT_TIME                                          NUMBER
SECONDS_IN_WAIT                                    NUMBER
STATE                                              VARCHAR2(19)
The columns of the v$session_wait view that are of particular interest for a buffer busy wait event are:
P1: the absolute file number of the data file involved in the wait.
P2: the block number within the data file referenced in P1 that is being waited upon.
P3: the reason code describing why the wait is occurring.
Here's an Oracle data dictionary query for these values:
select p1 "File #", p2 "Block #", p3 "Reason Code"
from v$session_wait
where event = 'buffer busy waits';
If the output from repeatedly running the above query shows that a block or range of blocks is experiencing waits, the following query
should show the name and type of the segment:
select owner, segment_name, segment_type
from dba_extents
where file_id = &P1
and &P2 between block_id and block_id + blocks -1;
Once the segment is identified, the v$segment_statistics performance view facilitates real-time monitoring of segment-level statistics. This
enables a DBA to identify performance problems associated with individual tables or indexes, as shown below.
select object_name, statistic_name, value
from V$SEGMENT_STATISTICS
where object_name = 'SOURCE$';
OBJECT_NAME STATISTIC_NAME                 VALUE
----------- ------------------------- ----------
SOURCE$     logical reads                  11216
SOURCE$     buffer busy waits                210
SOURCE$     db block changes                  32
SOURCE$     physical reads                 10365
SOURCE$     physical writes                    0
SOURCE$     physical reads direct              0
SOURCE$     physical writes direct             0
SOURCE$     ITL waits                          0
SOURCE$     row lock waits
We can also query the dba_data_files to determine the file_name for the file involved in the wait by using the P1 value from v$session_wait
for the file_id.
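For example, substituting the P1 value for the file_id:

```sql
select file_name, tablespace_name
from dba_data_files
where file_id = &P1;
```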
SQL> desc dba_data_files
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
FILE_NAME                                          VARCHAR2(513)
FILE_ID                                            NUMBER
TABLESPACE_NAME                                    VARCHAR2(30)
BYTES                                              NUMBER
BLOCKS                                             NUMBER
STATUS                                             VARCHAR2(9)
RELATIVE_FNO                                       NUMBER
AUTOEXTENSIBLE                                     VARCHAR2(3)
MAXBYTES                                           NUMBER
MAXBLOCKS                                          NUMBER
INCREMENT_BY                                       NUMBER
USER_BYTES                                         NUMBER
USER_BLOCKS                                        NUMBER
Interrogating the P3 (reason code) value from v$session_wait for a buffer busy wait event will tell us why the session is waiting. The reason
codes range from 0 to 300 and can be decoded as shown in Table A.
Table A

Code  Reason for wait
----  ------------------------------------------------------------------
100   We want to NEW the block, but the block is currently being read by
      another session (most likely for undo).
110   We want the CURRENT block either shared or exclusive, but the block
      is being read into cache by another session, so we have to wait
      until its read is completed.
120   We want to get the block in current mode, but someone else is
      currently reading it into the cache. Wait for the user to complete
      the read. This occurs during buffer lookup.
130   Block is being read by another session, and no other suitable block
      image was found, so we wait until the read is completed. This may
      also occur after a buffer cache assumed deadlock. The kernel can't
      get a buffer in a certain amount of time and assumes a deadlock.
      Therefore it will read the CR version of the block.
200   We want to NEW the block, but someone else is using the current
      copy, so we have to wait for that user to finish.
210   The session wants the block in SCUR or XCUR mode. If this is a
      buffer exchange or the session is in discrete TX mode, the session
      waits for the first time and the second time escalates the block as
      a deadlock, so it does not show up as waiting very long. In this
      case, the "exchange deadlocks" statistic is incremented, and we
      yield the CPU for the "buffer deadlock" wait event.
220   During buffer lookup for a CURRENT copy of a buffer, we have found
      the buffer, but someone holds it in an incompatible mode, so we
      have to wait.
230   Trying to get a buffer in CR/CRX mode, but a modification has
      started on the buffer that has not yet been completed.
231   CR/CRX scan found the CURRENT block, but a modification has started
      on the buffer that has not yet been completed.

Reason codes
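To pull the reason code for the sessions currently waiting, a query along these lines works; a minimal sketch, with column aliases of my own choosing:

```sql
-- Sessions currently in a buffer busy wait, with the decoded inputs:
-- P1 = file number, P2 = block number, P3 = reason code (see Table A).
select sid, p1 file_id, p2 block_id, p3 reason_code
  from v$session_wait
 where event = 'buffer busy waits';
```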
As I mentioned at the beginning of this article, buffer busy waits are prevalent in I/O-bound systems. I/O contention, resulting in waits for
data blocks, is often due to numerous sessions repeatedly reading the same blocks, as when many sessions scan the same index. In this
scenario, session one scans the blocks in the buffer cache quickly, but then a block has to be read from disk. While session one waits for the disk read to complete, other sessions scanning the same index soon catch up and want the same block currently being read from disk. This is where the buffer busy wait occurs: waiting for the buffer blocks that are being read from disk. The following rules of
thumb may be useful for resolving each of the noted contention situations:
Data block contention: Identify and eliminate HOT blocks from the application by changing PCTFREE and/or PCTUSED values to reduce the number of rows per data block. Check for repeatedly scanned indexes. Since each transaction updating a block requires a transaction entry, increase the INITRANS value.
Freelist block contention: Increase the FREELISTS value. Also, when using Parallel Server, be certain that each instance has its own FREELIST GROUPs.
Segment header contention: Again, increase the number of FREELISTs and use FREELIST GROUPs, which can make a difference even within a single instance.
Undo header contention: Increase the number of rollback segments.
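For reference, the remedies above map to DDL along these lines. The object names are placeholders, and keep in mind that PCTFREE/INITRANS changes only take effect for newly formatted blocks:

```sql
-- Data block contention: fewer rows per block, more ITL slots per block.
alter table app_owner.hot_table pctfree 30 initrans 5;

-- Freelist contention: more process freelists (manual segment space management).
alter table app_owner.hot_table storage (freelists 4);

-- Undo header contention: add rollback segments.
create rollback segment rbs05 tablespace rbs;
```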
The following STATSPACK script is very useful for detecting those times when the database has a high-level of buffer busy waits.
prompt ***********************************************************
prompt
prompt Buffer Busy Waits may signal a high update table with too
prompt few freelists. Find the offending table and add more freelists.
prompt
prompt ***********************************************************

column c1 heading "Object|Name"                                format a30
column c2 heading "Object|Type"                                format a12
column c3 heading "Number of|Blocks"                           format 999,999,999,999
column c4 heading "Percentage|of object|data blocks|in Buffer" format 999

create table t1 as
select o.object_name object_name, o.object_type object_type,
       count(1) num_blocks
from dba_objects o, v$bh bh
where o.object_id = bh.objd
and o.owner not in ('SYS','SYSTEM')
group by o.object_name, o.object_type
order by count(1) desc;
Wed Oct 23

                                                           Percentage
                                                           of object
Object                          Object      Number of      data blocks
Name                            Type        Blocks         in Buffer
------------------------------  ------  ---------------  -----------
MTL_DEMAND_INTERFACE            TABLE            38,745          100
FND_CONCURRENT_REQUESTS         TABLE            16,636           88
WIP_TRANSACTIONS                TABLE            14,777          100
WIP_TRANSACTION_ACCOUNTS        TABLE            13,390           33
CRP_RESOURCE_HOURS              TABLE             7,806          100
SO_LINES_ALL                    TABLE             7,576          100
ABC_EDI_LINES                   TABLE             7,041          100
BOM_INVENTORY_COMPONENTS        TABLE             6,882           46
MTL_SYSTEM_ITEMS                TABLE             4,747           63
WIP_TRANSACTION_ACCOUNTS_N1     INDEX             3,996           38
MTL_ITEM_CATEGORIES             TABLE             3,390          100
RA_CUSTOMER_TRX_LINES_ALL       TABLE             3,264          100
MRP_FORECAST_DATES              TABLE             3,082           99
RA_CUSTOMER_TRX_ALL             TABLE             2,739           97
WIP_OPERATIONS                  TABLE             2,311           34
SO_PICKING_LINES_ALL            TABLE             2,006          100
MTL_DEMAND_INTERFACE_N10        INDEX             1,482           76
BOM_OPERATION_RESOURCES         TABLE             1,456           45
ABC_EDI_ERRORS                  TABLE             1,427          100
ABC_EDI_HEADERS                 TABLE             1,188          100
sys.v_$statname n_reads,
sys.v_$sesstat s_cpu,
sys.v_$sesstat s_reads
where n_reads.name in ('db block gets', 'consistent gets')
and n_cpu.name = 'CPU used by this session'
and n_cpu.statistic# = s_cpu.statistic#
and n_reads.statistic# = s_reads.statistic#
and s_cpu.sid = se.sid
and s_reads.sid = se.sid
and se.audsid = userenv('SESSIONID')
group by s_cpu.value
/
column CPU clear
column READS clear
will display nothing but blank lines, but it collects the baseline values before your PL/SQL runs; immediately after your PL/SQL completes, run this:
-- after.sql
set echo off
set timing off
set recsep off
column CPU print format 999999
column READS print format 9999999999999
select s_cpu.value - &&before_cpu - 97 CPU,
sum(s_reads.value) - &&before_reads - 10 READS
from sys.v_$session se,
sys.v_$statname n_cpu,
sys.v_$statname n_reads,
sys.v_$sesstat s_cpu,
sys.v_$sesstat s_reads
where n_reads.name in ('db block gets', 'consistent gets')
and n_cpu.name = 'CPU used by this session'
and n_cpu.statistic# = s_cpu.statistic#
and n_reads.statistic# = s_reads.statistic#
and s_cpu.sid = se.sid
and s_reads.sid = se.sid
and se.audsid = userenv('SESSIONID')
group by s_cpu.value
/
column CPU clear
column READS clear
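The intended usage, as I read it, is a simple sandwich in the same SQL*Plus session; the procedure name here is just a placeholder for whatever PL/SQL you want to measure:

```sql
@before.sql
exec my_procedure
@after.sql
```

Both scripts must run in the same session, since the WHERE clause filters on userenv('SESSIONID').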
Check Sorts
spool sorts.txt
-- The ratio of sorts (disk) to sorts (memory) should be < 5%.
-- If the ratio exceeds 5%, increase the size of SORT_AREA_SIZE.
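A minimal sketch of the query behind that check, computed from V$SYSSTAT (the DECODE guards against division by zero on an idle instance):

```sql
select d.value disk_sorts,
       m.value memory_sorts,
       round(100 * d.value / decode(m.value, 0, 1, m.value), 2) pct_disk
  from v$sysstat d, v$sysstat m
 where d.name = 'sorts (disk)'
   and m.name = 'sorts (memory)';
```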
PDFmyURL.com
Optimizing Indexes
Move Indexes to a 32k Block Size
Create a 32k_block Cache in the SPFILE
db_32k_cache_size = 32M
Create a Tablespace using 32K Blocks
CREATE TABLESPACE "TS_32K_INDEXES" LOGGING
  DATAFILE '/oradata/SID/TS_32K_IND.dbf' SIZE 100M
  BLOCKSIZE 32768
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
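Once the tablespace exists, each index is moved into it with a rebuild; a sketch, with the owner and index name assumed:

```sql
-- Rebuild an existing index into the 32K-block tablespace.
alter index app_owner.my_big_index rebuild tablespace ts_32k_indexes;
```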