
Oracle 11g DBA - Step by step learning to become a full-fledged DBA


How to read AWR reports (May 6)

The output of the AWR report contains a wealth of information that you can use to tune your database. The output of the AWR report can be divided into the following sections:

Report Header

This section is self-explanatory: it provides the database name, ID, instance (if RAC), platform information and the snap interval (the database workload time duration under review).

This report is for instance number 2 of my RAC environment. So if you need to do the analysis on a RAC environment, you need to do it separately for all the instances in the RAC to see if all the instances are balanced the way they should be.

-------------------------------------------------------------

DB Name     DB Id       Instance  Inst num  Startup Time     Release     RAC
TestRAC11g  3626203793  TestRac2  2         17-Aug-11 19:08  11.1.0.6.0  YES

Host Name  Platform              CPUs  Cores  Sockets  Memory (GB)
TestRAC    Linux 64-bit for AMD  8     8      2        31.44

RAC 10g/11g (Feb 17)

1> What is the purpose of RAC?
2> RAC Advantages
3> RAC Architecture >> About Cluster >> 2 node/3 node? >> which shared storage >> About Interconnect >> public IP/private IP etc.
4> Cache Fusion / cache coherency concepts
5> RAC Installation steps in 10g
6> RAC Installation steps in 11g
7> Clusterware architecture in 10g >> OCR/Voting Disk
8> Clusterware enhancement to Grid Infrastructure in 11g
9> RAC Background Processes
10> Clusterware-specific daemons

RAC Videos:

Oracle Grid Infrastructure Installation
http://www.youtube.com/watch?v=JDWi8CepPJI

Oracle 11g RAC RDBMS Installation
http://www.youtube.com/watch?v=f8JmfHMmkMQ
Oracle 11g RAC Database Creation using DBCA
http://www.youtube.com/watch?v=zXD1dDOWYgs

Startup and Shutdown of RAC database


http://www.youtube.com/watch?v=0iNWl4r8hmo

Troubleshooting a startup issue in Oracle 11g R2 RAC


http://www.youtube.com/watch?v=zUKB2ZAfVQ0

Oracle 11g RAC Upgrade from 11.2.0.1 to 11.2.0.2


http://www.youtube.com/watch?NR=1&v=M4wJztQiO_M&feature=fvwp

Migrating Oracle Single Instance to RAC (PPT)


http://www.youtube.com/watch?v=TI2nADjokfc

Pythian Video: "Oracle RAC - Troubleshooting Connectivity Issues"


http://www.youtube.com/watch?v=LH-88ARrroo

Prepare OEL 5.4 for RAC 11gR2 install


http://www.youtube.com/watch?v=aY9N447Tb7c

Conversion of a single-instance database to RAC using the rconfig utility


http://www.youtube.com/watch?v=5kW17hsWD0U
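
The AWR post above makes the point that, on RAC, every instance needs its own report. Before pulling per-instance reports it is worth a quick check that all instances are actually up; a minimal sketch (gv$instance is the cluster-wide view, and awrrpt.sql / awrgrpt.sql under $ORACLE_HOME/rdbms/admin produce the per-instance and RAC-wide reports respectively):

-- List all RAC instances and confirm they are OPEN before generating
-- one AWR report per instance (awrrpt.sql) or a global report (awrgrpt.sql).
SELECT inst_id, instance_name, host_name, status, startup_time
FROM   gv$instance
ORDER  BY inst_id;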

Posted 17th February 2014 by Oracle

Sample AWR Report (Oct 13)

WARNING: Since the DB Time is less than one second, there was minimal foreground activity in the snapshot period. Some of the percentage values will be invalid.

WORKLOAD REPOSITORY report for

DB Name  DB Id      Instance  Inst num  Startup Time     Release     RAC
DB11G    263292755  DB11G     1         16-May-13 11:05  11.2.0.1.0  NO

Host Name Platform CPUs Cores Sockets Memory (GB)


swood.saahyog Linux IA (32-bit) 1 1.98

Snap Id Snap Time Sessions Cursors/Session


Begin Snap: 15 16-May-13 11:20:36 27 1.4
End Snap: 16 16-May-13 11:38:53 25 1.2
Elapsed: 18.27 (mins)
DB Time: 0.00 (mins)
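
For reference, a text report like this one can be generated between any two snapshots. A minimal SQL*Plus sketch, assuming DBA privileges and using the DBID, instance and snap IDs shown in this header (15 and 16):

-- Take an on-demand snapshot (AWR also snapshots automatically, hourly by default)
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Interactive report script (prompts for report type, DBID, instance and snap range):
-- @?/rdbms/admin/awrrpt.sql

-- Or fetch the same report text directly for snapshots 15-16 of this database:
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               l_dbid => 263292755, l_inst_num => 1, l_bid => 15, l_eid => 16));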

Report Summary

Cache Sizes
Begin End
Buffer Cache: 748M 748M Std Block Size: 8K
Shared Pool Size: 260M 260M Log Buffer: 4,528K

Load Profile
Per Second Per Transaction Per Exec Per Call
DB Time(s): 0.0 0.1 0.00 0.00
DB CPU(s): 0.0 0.0 0.00 0.00
Redo size: 1,112.8 1,220,076.0
Logical reads: 11.0 12,108.0
Block changes: 3.6 3,970.0
Physical reads: 0.4 408.0
Physical writes: 0.4 390.0
User calls: 0.1 54.0
Parses: 1.3 1,417.0
Hard parses: 0.1 146.0
W/A MB processed: 0.0 17.4
Logons: 0.0 4.0
Executes: 2.9 3,137.0
Rollbacks: 0.0 0.0

Transactions: 0.0

Instance Efficiency Percentages (Target 100%)


Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 96.63 In-memory Sort %: 100.00
Library Hit %: 86.92 Soft Parse %: 89.70
Execute to Parse %: 54.83 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 57.14 % Non-Parse CPU: -1,141.59

Shared Pool Statistics


Begin End
Memory Usage %: 36.58 39.43
% SQL with executions>1: 44.35 84.81
% Memory for SQL w/exec>1: 41.19 72.58

Top 5 Timed Foreground Events

Event Waits Time(s) Avg wait (ms) % DB time Wait Class


db file sequential read 17 1 30 555.25 User I/O
DB CPU 0 31.81
control file sequential read 42 0 0 3.35 System I/O
asynch descriptor resize 14 0 0 0.26 Other
SQL*Net break/reset to client 2 0 0 0.25 Application
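
db file sequential read accounts for almost all of the (tiny) DB time in this interval. To see which statements were behind a wait event like this, the ASH history kept in AWR can be queried; a minimal sketch, assuming the Diagnostics Pack is licensed and using the snapshot window shown above:

-- SQL_IDs sampled on 'db file sequential read' during the snapshot interval
SELECT sql_id, COUNT(*) AS samples
FROM   dba_hist_active_sess_history
WHERE  event = 'db file sequential read'
AND    sample_time BETWEEN TIMESTAMP '2013-05-16 11:20:36'
                       AND TIMESTAMP '2013-05-16 11:38:53'
GROUP  BY sql_id
ORDER  BY samples DESC;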
Host CPU (CPUs: 1 Cores: Sockets: )
Load Average Begin Load Average End %User %System %WIO %Idle
0.17 0.14 1.0 1.3 1.6 97.5
Instance CPU
%Total CPU %Busy CPU %DB time waiting for CPU (Resource Manager)
0.4 16.3 0.0
Memory Statistics
Begin End
Host Mem (MB): 2,026.4 2,026.4
SGA use (MB): 1,024.0 1,024.0
PGA use (MB): 109.4 103.1
% Host Mem used for SGA+PGA: 55.93 55.62

Main Report
Report Summary
Wait Events Statistics
SQL Statistics
Instance Activity Statistics
IO Stats
Buffer Pool Statistics
Advisory Statistics
Wait Statistics
Undo Statistics
Latch Statistics
Segment Statistics
Dictionary Cache Statistics
Library Cache Statistics

Memory Statistics
Streams Statistics
Resource Limit Statistics
Shared Server Statistics
init.ora Parameters

Back to Top

Wait Events Statistics


Time Model Statistics
Operating System Statistics
Operating System Statistics - Detail
Foreground Wait Class
Foreground Wait Events
Background Wait Events
Wait Event Histogram
Wait Event Histogram Detail (64 msec to 2 sec)
Wait Event Histogram Detail (4 sec to 2 min)
Wait Event Histogram Detail (4 min to 1 hr)
Service Statistics
Service Wait Class Stats
Back to Top

Time Model Statistics

Total time in database user-calls (DB Time): .1s


Statistics including the word "background" measure background process time, and so do not contribute to the DB
time statistic
Ordered by % or DB time desc, Statistic name

Statistic Name Time (s) % of DB Time


parse time elapsed 0.29 319.55
hard parse elapsed time 0.26 289.50
hard parse (sharing criteria) elapsed time 0.26 286.34
PL/SQL execution elapsed time 0.04 39.57
DB CPU 0.03 31.81
sql execute elapsed time 0.03 31.57
PL/SQL compilation elapsed time 0.00 4.18
repeated bind elapsed time 0.00 0.23
DB time 0.09
background elapsed time 17.77
background cpu time 4.33
Back to Wait Events Statistics
Back to Top
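
The time-model rows above are deltas between the begin and end snapshots; they can be recomputed straight from the workload repository. A minimal sketch against DBA_HIST_SYS_TIME_MODEL (values are stored in microseconds; snap IDs 15 and 16 as in this report):

SELECT e.stat_name,
       ROUND((e.value - b.value) / 1e6, 2) AS seconds
FROM   dba_hist_sys_time_model b
JOIN   dba_hist_sys_time_model e
       ON  e.stat_id         = b.stat_id
       AND e.dbid            = b.dbid
       AND e.instance_number = b.instance_number
WHERE  b.snap_id = 15
AND    e.snap_id = 16
ORDER  BY seconds DESC;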

Operating System Statistics

*TIME statistic values are diffed. All others display actual values. End Value is displayed if different
ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name

Statistic Value End Value


BUSY_TIME 2,677
IDLE_TIME 103,184
IOWAIT_TIME 1,736

SYS_TIME 1,413
USER_TIME 1,033
LOAD 0 0
PHYSICAL_MEMORY_BYTES 2,124,869,632
NUM_CPUS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
Back to Wait Events Statistics
Back to Top

Operating System Statistics - Detail

Snap Time Load %busy %user %sys %idle %iowait


16-May 11:20:36 0.17
16-May 11:38:53 0.14 2.53 0.98 1.33 97.47 1.64
Back to Wait Events Statistics
Back to Top

Foreground Wait Class

s - second, ms - millisecond - 1000th of a second


ordered by wait time desc, waits desc
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Captured Time accounts for 591.0% of Total DB time .09 (s)
Total FG Wait Time: .51 (s) DB CPU time: .03 (s)

Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) %DB time
User I/O 17 0 1 30 555.25
DB CPU 0 31.81
System I/O 42 0 0 0 3.35
Other 14 100 0 0 0.26
Application 2 0 0 0 0.25
Network 30 0 0 0 0.09
Commit 0 0 0.00
Concurrency 0 0 0.00
Back to Wait Events Statistics
Back to Top

Foreground Wait Events

s - second, ms - millisecond - 1000th of a second


Only events with Total Wait Time (s) >= .001 are shown
ordered by wait time desc, waits desc (idle events last)
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0

Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % DB time

db file sequential read 17 0 1 30 17.00 555.25
control file sequential read 42 0 0 0 42.00 3.35
SQL*Net message from client 31 0 1,457 46997 31.00
Back to Wait Events Statistics
Back to Top

Background Wait Events

ordered by wait time desc, waits desc (idle events last)


Only events with Total Wait Time (s) >= .001 are shown
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0

Event                                 Waits  %Time -outs  Total Wait Time (s)  Avg wait (ms)  Waits /txn  % bg time
db file async I/O submit 211 0 5 23 211.00 27.19
db file sequential read 485 0 3 6 485.00 16.22
control file parallel write 363 0 2 5 363.00 10.80
log file parallel write 74 0 2 23 74.00 9.57
os thread startup 4 0 1 157 4.00 3.53
control file sequential read 1,436 0 0 0 1,436.00 1.05
log file sync 1 0 0 151 1.00 0.85
direct path sync 1 0 0 46 1.00 0.26
LGWR wait for redo copy 2 0 0 9 2.00 0.10
latch free 40 0 0 0 40.00 0.02
direct path write 4 0 0 0 4.00 0.01
rdbms ipc message 4,493 99 15,036 3346 4,493.00
DIAG idle wait 2,183 100 2,187 1002 2,183.00
shared server idle wait 37 100 1,110 30004 37.00
Streams AQ: qmn slave idle wait 40 0 1,092 27306 40.00
Streams AQ: qmn coordinator idle wait 39 100 1,092 28006 39.00
pmon timer 382 95 1,092 2859 382.00
Space Manager: slave idle wait 220 99 1,090 4957 220.00
dispatcher timer 18 100 1,080 60005 18.00
smon timer 3 100 900 300004 3.00
Back to Wait Events Statistics
Back to Top

Wait Event Histogram

Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000


% of Waits: value of .0 indicates value was <.05%; value of null is truly 0
% of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
Ordered by Event (idle events last)

% of Waits
Event Total Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
Disk file operations I/O 5 100.0
LGWR wait for redo copy 2 50.0 50.0
SQL*Net break/reset to client 2 100.0
SQL*Net message to client 30 100.0

asynch descriptor resize 50 100.0
control file parallel write 363 21.8 68.6 2.5 3.0 1.9 2.2
control file sequential read 1478 99.7 .2 .1
db file async I/O submit 211 7.6 3.8 19.9 1.9 4.3 40.8 21.8
db file sequential read 345 26.4 30.1 7.8 6.4 17.1 9.3 2.9
direct path sync 1 100.0
direct path write 4 100.0
latch free 40 100.0
log file parallel write 74 50.0 2.7 6.8 4.1 2.7 12.2 21.6
log file sync 1 100.0
os thread startup 4 100.0
DIAG idle wait 2179 99.7 .3
SQL*Net message from client 31 80.6 6.5 3.2 3.2 6.5
Space Manager: slave idle wait 221 .9 99.1
Streams AQ: qmn coordinator idle wait 39 100.0
Streams AQ: qmn slave idle wait 40 2.5 97.5
class slave wait 4 100.0
dispatcher timer 18 100.0
pmon timer 382 4.5 .3 95.3
rdbms ipc message 4484 .1 .2 .0 24.7 74.9
shared server idle wait 37 100.0
smon timer 3 100.0
Back to Wait Events Statistics
Back to Top
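
The histogram buckets above are built from cumulative counters in DBA_HIST_EVENT_HISTOGRAM, diffed between the two snapshots. A minimal sketch for a single event (snap IDs 15 and 16 again; instance restarts between snapshots are ignored here):

SELECT e.wait_time_milli,
       e.wait_count - NVL(b.wait_count, 0) AS waits_in_interval
FROM   dba_hist_event_histogram e
LEFT JOIN dba_hist_event_histogram b
       ON  b.snap_id         = 15
       AND b.dbid            = e.dbid
       AND b.instance_number = e.instance_number
       AND b.event_id        = e.event_id
       AND b.wait_time_milli = e.wait_time_milli
WHERE  e.snap_id    = 16
AND    e.event_name = 'db file sequential read'
ORDER  BY e.wait_time_milli;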

Wait Event Histogram Detail (64 msec to 2 sec)

Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000


Units for % of Total Waits: ms is milliseconds s is 1024 milliseconds (approximately 1 second)
% of Total Waits: total waits for all wait classes, including Idle
% of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
Ordered by Event (only non-idle events are displayed)

% of Total Waits
Event Waits 64ms to 2s <32ms <64ms <1/8s <1/4s <1/2s <1s <2s >=2s
control file parallel write 8 97.8 .3 1.9
db file async I/O submit 46 78.2 16.1 5.2 .5
db file sequential read 10 97.1 1.7 1.2
direct path sync 1 100.0
log file parallel write 16 78.4 12.2 6.8 1.4 1.4
log file sync 1 100.0
os thread startup 4 75.0 25.0
Back to Wait Events Statistics
Back to Top

Wait Event Histogram Detail (4 sec to 2 min)

No data exists for this section of the report.


Back to Wait Events Statistics
Back to Top

Wait Event Histogram Detail (4 min to 1 hr)

No data exists for this section of the report.


Back to Wait Events Statistics
Back to Top

Service Statistics

ordered by DB Time

Service Name DB Time (s) DB CPU (s) Physical Reads (K) Logical Reads (K)
SYS$USERS 0 0 0 0
DB11G 0 0 0 0
ORCLXDB 0 0 0 0
SYS$BACKGROUND 0 0 1 15
Back to Wait Events Statistics
Back to Top

Service Wait Class Stats

Wait Class info for services in the Service Statistics section.


Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative,
Network
Time Waited (Wt Time) in seconds

Service Name    User I/O Total Wts  User I/O Wt Time  Concurcy Total Wts  Concurcy Wt Time  Admin Total Wts  Admin Wt Time  Network Total Wts  Network Wt Time
SYS$USERS       16                  0                 0                   0                 0                0              30                 0
SYS$BACKGROUND  521                 3                 5                   1                 0                0              0                  0
Back to Wait Events Statistics
Back to Top

SQL Statistics
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by User I/O Wait Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Physical Reads (UnOptimized)
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Complete List of SQL Text
Back to Top

SQL ordered by Elapsed Time

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
% Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
%Total - Elapsed Time as a percentage of Total DB time
%CPU - CPU Time as a percentage of Elapsed Time
%IO - User I/O Time as a percentage of Elapsed Time

Captured SQL account for 2.7E+03% of Total DB Time (s): 0
Captured PL/SQL account for 2.0E+03% of Total DB Time (s): 0

Elapsed Execu Elapsed Time %To %C %I


SQL Id SQL Module SQL Text
Time (s) tions per Exec (s) tal PU O
BEGIN
185 53. 43. 1uk5m5qb sqlplus@swood.saahyo
1.69 0 dbms_workload_repository
4.04 43 20 zj1vt g (TNS V1-V3)
...
832. 94. 3.7 bunssq95 insert into
0.76 1 0.76
69 20 7 0snhf wrh$_sga_target_ad...
205. 48. 52. 3c1kubcdj update sys.col_usage$ set
0.19 656 0.00
50 05 87 nppq equa...
183. 5.3 94. 0z103199 insert into
0.17 1 0.17
19 9 86 1bd7w wrh$_sysmetric_sum...
111. 37. 62. dpyzg5ds insert into
0.10 1 0.10
34 44 64 n3g60 wrh$_seg_stat_obj ...
103. 83. 0.0 5dfmd823 insert into
0.09 1 0.09
99 35 0 r8dsp wrh$_memory_resize...
begin
95.8 60. 46. 6ajkhukk7
0.09 1 0.09 prvt_hdm.auto_execute(
0 70 62 8nsr
:...
89.5 4.9 94. 7qjhf5dzm SELECT snap_id , OBJ#,
0.08 1 0.08
4 0 75 azsr DATAOBJ...
78.7 9.7 89. d87cd6s7 insert into wrh$_seg_stat
0.07 1 0.07
9 5 90 2f197 (sna...
59.3 20. 29. 71k5024z insert into
0.05 1 0.05
2 34 78 n7c9a wrh$_latch_misses_...
57.1 30. 68. 4dy1xm4n insert into
0.05 1 0.05
1 74 03 xc0gf wrh$_system_event ...
insert into
50.3 19. 83. 5ax5xu96
0.05 1 0.05 WRH$_EVENT_HISTOGR.
9 59 04 u2ztd
..
39.9 24. 77. gybqfa1y3 insert into
0.04 1 0.04
7 71 84 u5db wrh$_tempstatxs (s...
39.9 27. 71. 953bgyvrv insert into wrh$_filestatxs
0.04 1 0.04
2 48 16 ryq1 (s...
38.6 25. 75. fnk7155m insert into
0.04 1 0.04
3 56 71 k2jq6 wrh$_sysmetric_his...
38.4 28. 80. 6hwjmjgrp insert into
0.04 1 0.04
9 50 85 suaa wrh$_enqueue_stat ...
37.5 29. 72. 84k66tf2s insert into
0.03 1 0.03
1 24 48 7y1c wrh$_bg_event_summ...
33.0 33. 72. 7g732rx1 insert into
0.03 1 0.03
1 23 90 6j8jc WRH$_SERVICE_STAT ...
32.4 101 0.0 aykvshm7 select size_for_estimate,
0.03 36 0.00
0 .59 0 zsabd size...
31.8 13. 88. fz2htd7p7 insert into wrh$_waitstat
0.03 1 0.03
4 78 60 23p5 (sna...
INSERT /*+
31.8 24. 73. g3b2qwdu
0.03 1 0.03 LEADING(@"SEL$F5BB7..
2 14 54 398wt
.
30.8 46. 50. 6xpsr8v27 insert into
0.03 1 0.03
0 31 33 pmy2 WRH$_IOSTAT_FUNCTI...
26.6 32. 91. cvn54b7y select /*+ index(idl_ub1$

0.02 3 0.01
8 90 59 z0s8u i_id...
26.2 20. 75. dgzmkv9j6 insert into
0.02 1 0.02
7 89 50 rfhg WRH$_SERVICE_WAIT_...
24.4 44. 50. 84qubbrsr insert into wrh$_latch
0.02 1 0.02
8 82 98 0kfn (snap_i...
22.4 19. 90. 39m4sx9k select /*+ index(idl_ub2$
0.02 3 0.01
4 56 14 63ba2 i_id...
22.4 24. 80. dnwpm0g insert into
0.02 1 0.02
2 46 14 dccrph wrh$_process_memor...
21.4 30. 0.0 0ut2yw7m insert into
0.02 1 0.02
3 72 0 xf0aq WRM$_SNAPSHOT (sna...
19.6 100 0.0 6c06mfv0 update wrh$_seg_stat_obj
0.02 1 0.02
0 .77 0 1xt2h sso s...
19.1 62. 35. 350myuyx insert into
0.02 1 0.02
8 92 14 0t1d6 wrh$_tablespace_st...
18.4 106 0.0 f3223cb4 select next_run_date,
0.02 15 0.00
9 .82 0 ng6hq obj#, ru...
18.3 53. 46. 66gs90fyy insert into
0.02 1 0.02
3 87 27 nks7 wrh$_instance_reco...
16.9 32. 76. 5rygsj4db insert into sys.mon_mods$
0.02 12 0.00
0 46 61 w6jt (obj...
16.3 121 0.0 b2gnxm5z lock table sys.col_usage$
0.01 258 0.00
1 .10 0 6r51n in e...
15.4 99. 1.4 f0s0bk5k7 insert into wrh$_parameter
0.01 1 0.01
6 32 3 13yb (sn...
14.4 90. 0.0 1tn90bbp UPDATE wrh$_tempfile tfh
0.01 1 0.01
8 91 0 yjshq SET (...
select
14.3 106 0.0 772s25v1
0.01 36 0.00 shared_pool_size_for_es..
9 .77 0 y0x8k
.
14.3 68. 14. g00cj285j update sys.mon_mods$
0.01 79 0.00
3 92 12 mgsw set inser...
14.1 46. 74. dbycq2n4 SELECT snap_id , SQL_ID
0.01 2 0.01
8 44 78 3t7n5 FROM (...
14.1 38. 70. 1uym1vta insert into
0.01 1 0.01
5 78 32 995yb wrh$_rowcache_summ...
13.6 88. 8.0 1cq3qr77 insert into
0.01 1 0.01
2 62 5 4cu45 WRH$_IOSTAT_FILETY...
insert into
13.0 100 0.0 du7vxbdk
0.01 1 0.01 WRH$_RSRC_CONSUME
7 .71 0 4n9wj
R...
13.0 92. 19. 586b2udq insert into wrh$_sysstat
0.01 1 0.01
3 60 82 6dbng (snap...
12.2 98. 7.4 cp3gpd7z insert into wrh$_sgastat
0.01 1 0.01
3 65 2 878w8 (snap...
11.6 94. 7.2 7tdugauk insert into
0.01 1 0.01
7 04 5 22j8t WRH$_IOSTAT_DETAIL...
UPDATE
11.3 96. 0.0 9rfkm1bf9
0.01 1 0.01 WRH$_SEG_STAT_OBJ
7 49 0 15a0
SET s...
11.0 79. 22. 0v3dvmc2 insert into sys.col_usage$
0.01 12 0.00
5 39 28 2qnam (ob...

10.2 106 0.0 2xn8yx0uz SELECT S.SNAP_ID,
0.01 1 0.01
8 .70 0 75fm S.BEGIN_INTE...
106 0.0 b07vcvuxr UPDATE wrh$_datafile dfh
0.01 1 0.01 9.29
.25 0 yvg9 SET (...
84. 11. 9n8xc314 insert into
0.01 1 0.01 9.10
36 11 xdm0t wrh$_shared_server...
105 0.0 5y3g9x2y insert into
0.01 1 0.01 8.31
.64 0 cmw0s WRH$_RSRC_PLAN (sn...
90. 12. 71y370j64 insert into wrh$_thread
0.01 1 0.01 7.29
35 47 28cb (snap_...
106 0.0 32mk33ry INSERT INTO
0.01 1 0.01 7.21
.55 0 1g665 wrh$_datafile (sna...
78. 17. 79uvsz1g insert into
0.01 1 0.01 6.96
80 01 1c168 wrh$_buffer_pool_s...
84. 0.0 bwsx6utfb insert into
0.01 1 0.01 6.50
40 0 h15q wrh$_filemetric_hi...
126 0.0 350f5yrnn lock table sys.mon_mods$
0.00 79 0.00 5.20
.58 0 mshs in ex...
79. 25. 4rcrxkvaqt insert into
0.00 1 0.00 4.15
36 88 37f wrh$_shared_pool_a...
80. 25. 7v80b35m insert into
0.00 1 0.00 4.09
39 54 1n54u wrh$_pga_target_ad...
86. 0.0 0gkq5s6h insert into
0.00 1 0.00 3.81
41 0 qx8hy wrh$_sessmetric_hi...
94. 0.0 3ktacv9r5 select owner#, name,
0.00 2 0.00 3.48
45 0 6b51 namespace...
94. 0.0 f0wj261b update wrm$_wr_control
0.00 2 0.00 3.47
84 0 m8snd set sna...
109 1.5 6xvp6nxs4 select nvl(sum(space), 0)
0.00 4 0.00 3.01
.33 7 a9n4 from...
73. 0.0 fsbqktj5vw select next_run_date,
0.00 3 0.00 2.97
94 0 6n9 obj#, ru...
116 0.0 7g9w4mq insert into
0.00 1 0.00 2.83
.23 0 gaf1w9 wrh$_service_name ...
Back to SQL Statistics
Back to Top
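
The SQL Text column in these lists is truncated. The full statement for any captured SQL Id can be pulled from the repository; a minimal sketch (substitute any SQL Id from the list, e.g. 1uk5m5qbzj1vt):

SELECT sql_text
FROM   dba_hist_sqltext
WHERE  sql_id = '1uk5m5qbzj1vt';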

SQL ordered by CPU Time

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
%Total - CPU Time as a percentage of Total DB CPU
%CPU - CPU Time as a percentage of Elapsed Time
%IO - User I/O Time as a percentage of Elapsed Time
Captured SQL account for 5.0E+03% of Total CPU Time (s): 0
Captured PL/SQL account for 3.3E+03% of Total CPU Time (s): 0

CPU
Execu CPU per %To Elapsed %C %I
Time SQL Id SQL Module SQL Text
tions Exec (s) tal Time (s) PU O
(s)
BEGIN
311 53. 43. 1uk5m5q sqlplus@swood.saahy
0.90 0 1.69 dbms_workload_repositor
3.86 43 20 bzj1vt og (TNS V1-V3)
y...
246 94. 3.7 bunssq95 insert into
0.71 1 0.71 0.76
5.57 20 7 0snhf wrh$_sga_target_ad...

310. 48. 52. 3c1kubcdj update sys.col_usage$
0.09 656 0.00 0.19
36 05 87 nppq set equa...
272. 83. 0.0 5dfmd823 insert into
0.08 1 0.08 0.09
42 35 0 r8dsp wrh$_memory_resize...
begin
182. 60. 46. 6ajkhukk7
0.05 1 0.05 0.09 prvt_hdm.auto_execute(
77 70 62 8nsr
:...
131. 37. 62. dpyzg5ds insert into
0.04 1 0.04 0.10
04 44 64 n3g60 wrh$_seg_stat_obj ...
103. 101 0.0 aykvshm7 select size_for_estimate,
0.03 36 0.00 0.03
46 .59 0 zsabd size...
update
62.0 100 0.0 6c06mfv0
0.02 1 0.02 0.02 wrh$_seg_stat_obj sso
7 .77 0 1xt2h
s...
62.0 121 0.0 b2gnxm5z lock table sys.col_usage$
0.02 258 0.00 0.01
7 .10 0 6r51n in e...
62.0 106 0.0 f3223cb4 select next_run_date,
0.02 15 0.00 0.02
7 .82 0 ng6hq obj#, ru...
55.1 30. 68. 4dy1xm4n insert into
0.02 1 0.02 0.05
8 74 03 xc0gf wrh$_system_event ...
select
48.2 106 0.0 772s25v1
0.01 36 0.00 0.01 shared_pool_size_for_es.
8 .77 0 y0x8k
..
48.2 99. 1.4 f0s0bk5k insert into
0.01 1 0.01 0.01
7 32 3 713yb wrh$_parameter (sn...
insert into
44.8 46. 50. 6xpsr8v2
0.01 1 0.01 0.03 WRH$_IOSTAT_FUNCTI..
3 31 33 7pmy2
.
41.3 90. 0.0 1tn90bbp UPDATE wrh$_tempfile
0.01 1 0.01 0.01
8 91 0 yjshq tfh SET (...
insert into
41.3 100 0.0 du7vxbdk
0.01 1 0.01 0.01 WRH$_RSRC_CONSUME
8 .71 0 4n9wj
R...
37.9 92. 19. 586b2udq insert into wrh$_sysstat
0.01 1 0.01 0.01
3 60 82 6dbng (snap...
37.9 20. 29. 71k5024z insert into
0.01 1 0.01 0.05
3 34 78 n7c9a wrh$_latch_misses_...
37.9 98. 7.4 cp3gpd7z insert into wrh$_sgastat
0.01 1 0.01 0.01
3 65 2 878w8 (snap...
insert into
37.9 88. 8.0 1cq3qr77
0.01 1 0.01 0.01 WRH$_IOSTAT_FILETY..
3 62 5 4cu45
.
37.9 62. 35. 350myuyx insert into
0.01 1 0.01 0.02
3 92 14 0t1d6 wrh$_tablespace_st...
insert into
34.4 94. 7.2 7tdugauk
0.01 1 0.01 0.01 WRH$_IOSTAT_DETAIL..
9 04 5 22j8t
.
34.4 44. 50. 84qubbrs insert into wrh$_latch
0.01 1 0.01 0.02
9 82 98 r0kfn (snap_i...
34.4 27. 71. 953bgyvr insert into wrh$_filestatxs
0.01 1 0.01 0.04
9 48 16 vryq1 (s...
34.4 106 0.0 2xn8yx0u SELECT S.SNAP_ID,
0.01 1 0.01 0.01
8 .70 0 z75fm S.BEGIN_INTE...

34.4 28. 80. 6hwjmjgrp insert into
0.01 1 0.01 0.04
8 50 85 suaa wrh$_enqueue_stat ...
insert into
34.4 33. 72. 7g732rx1
0.01 1 0.01 0.03 WRH$_SERVICE_STAT
8 23 90 6j8jc
...
34.4 29. 72. 84k66tf2s insert into
0.01 1 0.01 0.03
8 24 48 7y1c wrh$_bg_event_summ...
UPDATE
34.4 96. 0.0 9rfkm1bf9
0.01 1 0.01 0.01 WRH$_SEG_STAT_OBJ
8 49 0 15a0
SET s...
31.0 106 0.0 b07vcvuxr UPDATE wrh$_datafile
0.01 1 0.01 0.01
4 .25 0 yvg9 dfh SET (...
31.0 25. 75. fnk7155m insert into
0.01 1 0.01 0.04
4 56 71 k2jq6 wrh$_sysmetric_his...
31.0 24. 77. gybqfa1y insert into
0.01 1 0.01 0.04
4 71 84 3u5db wrh$_tempstatxs (s...
31.0 5.3 94. 0z103199 insert into
0.01 1 0.01 0.17
3 9 86 1bd7w wrh$_sysmetric_sum...
insert into
31.0 19. 83. 5ax5xu96
0.01 1 0.01 0.05 WRH$_EVENT_HISTOGR
3 59 04 u2ztd
...
31.0 53. 46. 66gs90fy insert into
0.01 1 0.01 0.02
3 87 27 ynks7 wrh$_instance_reco...
31.0 68. 14. g00cj285j update sys.mon_mods$
0.01 79 0.00 0.01
3 92 12 mgsw set inser...
27.5 32. 91. cvn54b7y select /*+ index(idl_ub1$
0.01 3 0.00 0.02
9 90 59 z0s8u i_id...
insert into
27.5 105 0.0 5y3g9x2y
0.01 1 0.01 0.01 WRH$_RSRC_PLAN
8 .64 0 cmw0s
(sn...
27.5 79. 22. 0v3dvmc2 insert into sys.col_usage$
0.01 12 0.00 0.01
8 39 28 2qnam (ob...
24.1 84. 11. 9n8xc314 insert into
0.01 1 0.01 0.01
4 36 11 xdm0t wrh$_shared_server...
INSERT /*+
24.1 24. 73. g3b2qwd
0.01 1 0.01 0.03 LEADING(@"SEL$F5BB7.
4 14 54 u398wt
..
24.1 106 0.0 32mk33ry INSERT INTO
0.01 1 0.01 0.01
4 .55 0 1g665 wrh$_datafile (sna...
24.1 9.7 89. d87cd6s7 insert into wrh$_seg_stat
0.01 1 0.01 0.07
4 5 90 2f197 (sna...
insert into
20.6 30. 0.0 0ut2yw7m
0.01 1 0.01 0.02 WRM$_SNAPSHOT
9 72 0 xf0aq
(sna...
20.6 126 0.0 350f5yrnn lock table
0.01 79 0.00 0.00
9 .58 0 mshs sys.mon_mods$ in ex...
20.6 90. 12. 71y370j6 insert into wrh$_thread
0.01 1 0.01 0.01
9 35 47 428cb (snap_...
20.6 46. 74. dbycq2n4 SELECT snap_id ,
0.01 2 0.00 0.01
9 44 78 3t7n5 SQL_ID FROM (...
17.2 32. 76. 5rygsj4db insert into
0.01 12 0.00 0.02
4 46 61 w6jt sys.mon_mods$ (obj...
17.2 84. 0.0 bwsx6utfb insert into
0.01 1 0.01 0.01

4 40 0 h15q wrh$_filemetric_hi...
insert into
17.2 20. 75. dgzmkv9j
0.01 1 0.01 0.02 WRH$_SERVICE_WAIT_.
4 89 50 6rfhg
..
17.2 38. 70. 1uym1vta insert into
0.00 1 0.00 0.01
4 78 32 995yb wrh$_rowcache_summ...
17.2 78. 17. 79uvsz1g insert into
0.00 1 0.00 0.01
4 80 01 1c168 wrh$_buffer_pool_s...
17.2 24. 80. dnwpm0g insert into
0.00 1 0.00 0.02
4 46 14 dccrph wrh$_process_memor...
13.8 19. 90. 39m4sx9k select /*+ index(idl_ub2$
0.00 3 0.00 0.02
0 56 14 63ba2 i_id...
13.8 4.9 94. 7qjhf5dzm SELECT snap_id , OBJ#,
0.00 1 0.00 0.08
0 0 75 azsr DATAOBJ...
13.8 13. 88. fz2htd7p7 insert into wrh$_waitstat
0.00 1 0.00 0.03
0 78 60 23p5 (sna...
13.7 158 0.0 3nkd3g3j select obj#, type#, ctime,
0.00 6 0.00 0.00
9 .38 0 u5ph1 mti...
10.3 86. 0.0 0gkq5s6h insert into
0.00 1 0.00 0.00
5 41 0 qx8hy wrh$_sessmetric_hi...
10.3 251 0.0 0kkhhb2w update seg$ set
0.00 3 0.00 0.00
5 .68 0 93cx0 type#=:4, bloc...
10.3 116 0.0 7g9w4mq insert into
0.00 1 0.00 0.00
5 .23 0 gaf1w9 wrh$_service_name ...
10.3 80. 25. 7v80b35 insert into
0.00 1 0.00 0.00
5 39 54 m1n54u wrh$_pga_target_ad...
10.3 79. 25. 4rcrxkvaq insert into
0.00 1 0.00 0.00
4 36 88 t37f wrh$_shared_pool_a...
10.3 109 1.5 6xvp6nxs select nvl(sum(space), 0)
0.00 4 0.00 0.00
4 .33 7 4a9n4 from...
10.3 94. 0.0 3ktacv9r5 select owner#, name,
0.00 2 0.00 0.00
4 45 0 6b51 namespace...
Back to SQL Statistics
Back to Top

SQL ordered by User I/O Wait Time

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
%Total - User I/O Time as a percentage of Total User I/O Wait time
%CPU - CPU Time as a percentage of Elapsed Time
%IO - User I/O Time as a percentage of Elapsed Time
Captured SQL account for 27.0% of Total User I/O Wait Time (s): 3
Captured PL/SQL account for 22.1% of Total User I/O Wait Time (s): 3

User I/O Time (s)  Executions  UIO per Exec (s)  %Total  Elapsed Time (s)  %CPU   %IO    SQL Id         SQL Module                         SQL Text
0.73               0                             20.90   1.69              53.43  43.20  1uk5m5qbzj1vt  sqlplus@swood.saahyog (TNS V1-V3)  BEGIN dbms_workload_repository...
0.16               1           0.16              4.54    0.17              5.39   94.86  0z1031991bd7w                                     insert into wrh$_sysmetric_sum...
0.10               656         0.00              2.84    0.19              48.05  52.87  3c1kubcdjnppq                                     update sys.col_usage$ set equa...
0.08               1           0.08              2.21    0.08              4.90   94.75  7qjhf5dzmazsr                                     SELECT snap_id , OBJ#, DATAOBJ...
0.06               1           0.06              1.85    0.07              9.75   89.90  d87cd6s72f197                                     insert into wrh$_seg_stat (sna...
0.06               1           0.06              1.82    0.10              37.44  62.64  dpyzg5dsn3g60                                     insert into wrh$_seg_stat_obj ...
0.04               1           0.04              1.17    0.09              60.70  46.62  6ajkhukk78nsr                                     begin prvt_hdm.auto_execute( :...
0.04               1           0.04              1.09    0.05              19.59  83.04  5ax5xu96u2ztd                                     insert into WRH$_EVENT_HISTOGR...
0.04               1           0.04              1.01    0.05              30.74  68.03  4dy1xm4nxc0gf                                     insert into wrh$_system_event ...
0.03               1           0.03              0.82    0.76              94.20  3.77   bunssq950snhf                                     insert into wrh$_sga_target_ad...
Back to SQL Statistics
Back to Top

SQL ordered by Gets

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
%Total - Buffer Gets as a percentage of Total Buffer Gets
%CPU - CPU Time as a percentage of Elapsed Time
%IO - User I/O Time as a percentage of Elapsed Time
Total Buffer Gets: 12,108
Captured SQL account for 38.4% of Total

Buffer Gets  Executions  Gets per Exec  %Total  Elapsed Time (s)  %CPU    %IO    SQL Id         SQL Module                         SQL Text
2,285        656         3.48           18.87   0.19              48.05   52.87  3c1kubcdjnppq                                     update sys.col_usage$ set equa...
1,110        0                          9.17    1.69              53.43   43.20  1uk5m5qbzj1vt  sqlplus@swood.saahyog (TNS V1-V3)  BEGIN dbms_workload_repository...
274          1           274.00         2.26    0.09              60.70   46.62  6ajkhukk78nsr                                     begin prvt_hdm.auto_execute( :...
249          79          3.15           2.06    0.01              68.92   14.12  g00cj285jmgsw                                     update sys.mon_mods$ set inser...
174          2           87.00          1.44    0.01              46.44   74.78  dbycq2n43t7n5                                     SELECT snap_id , SQL_ID FROM (...
174          15          11.60          1.44    0.02              106.82  0.00   f3223cb4ng6hq                                     select next_run_date, obj#, ru...
146          1           146.00         1.21    0.01              99.32   1.43   f0s0bk5k713yb                                     insert into wrh$_parameter (sn...
106          1           106.00         0.88    0.05              19.59   83.04  5ax5xu96u2ztd                                     insert into WRH$_EVENT_HISTOGR...
105          1           105.00         0.87    0.17              5.39    94.86  0z1031991bd7w                                     insert into wrh$_sysmetric_sum...
102          2           51.00          0.84    0.00              0.00    0.00   8swypbbr0m372                                     select order#, columns, types ...
Back to SQL Statistics
Back to Top

SQL ordered by Reads

%Total - Physical Reads as a percentage of Total Disk Reads


%CPU - CPU Time as a percentage of Elapsed Time
%IO - User I/O Time as a percentage of Elapsed Time
Total Disk Reads: 408
Captured SQL account for 22.5% of Total

Physical Reads  Executions  Reads per Exec  %Total  Elapsed Time (s)  %CPU   %IO    SQL Id         SQL Module                         SQL Text
56              0                           13.73   1.69              53.43  43.20  1uk5m5qbzj1vt  sqlplus@swood.saahyog (TNS V1-V3)  BEGIN dbms_workload_repository...
18              656         0.03            4.41    0.19              48.05  52.87  3c1kubcdjnppq                                     update sys.col_usage$ set equa...
17              1           17.00           4.17    0.09              60.70  46.62  6ajkhukk78nsr                                     begin prvt_hdm.auto_execute( :...
16              3           5.33            3.92    0.02              32.90  91.59  cvn54b7yz0s8u                                     select /*+ index(idl_ub1$ i_id...
6               1           6.00            1.47    0.04              28.50  80.85  6hwjmjgrpsuaa                                     insert into wrh$_enqueue_stat ...
5               1           5.00            1.23    0.17              5.39   94.86  0z1031991bd7w                                     insert into wrh$_sysmetric_sum...
4               1           4.00            0.98    0.03              29.24  72.48  84k66tf2s7y1c                                     insert into wrh$_bg_event_summ...
3               1           3.00            0.74    0.01              92.60  19.82  586b2udq6dbng                                     insert into wrh$_sysstat (snap...
3               1           3.00            0.74    0.03              33.23  72.90  7g732rx16j8jc                                     insert into WRH$_SERVICE_STAT ...
2               1           2.00            0.49    0.02              44.82  50.98  84qubbrsr0kfn                                     insert into wrh$_latch (snap_i...
Back to SQL Statistics
Back to Top

SQL ordered by Physical Reads (UnOptimized)

UnOptimized Read Reqs = Physical Read Reqts - Optimized Read Reqs


%Opt - Optimized Reads as percentage of SQL Read Requests
%Total - UnOptimized Read Reqs as a percentage of Total UnOptimized Read Reqs
Total Physical Read Requests: 408
Captured SQL account for 22.3% of Total
Total UnOptimized Read Requests: 408
Captured SQL account for 22.3% of Total
Total Optimized Read Requests: 1
Captured SQL account for 0.0% of Total

UnOptimized Read Reqs  Physical Read Reqs  Executions  UnOptimized Reqs per Exec  %Opt  %Total  SQL Id         SQL Module                         SQL Text
117                    117                 0                                      0.00  28.68   1uk5m5qbzj1vt  sqlplus@swood.saahyog (TNS V1-V3)  BEGIN dbms_workload_repository...
29                     29                  1           29.00                      0.00  7.11    1cq3qr774cu45                                     insert into WRH$_IOSTAT_FILETY...
18                     18                  656         0.03                       0.00  4.41    3c1kubcdjnppq                                     update sys.col_usage$ set equa...
17                     17                  1           17.00                      0.00  4.17    6ajkhukk78nsr                                     begin prvt_hdm.auto_execute( :...
16                     16                  3           5.33                       0.00  3.92    cvn54b7yz0s8u                                     select /*+ index(idl_ub1$ i_id...
10                     10                  1           10.00                      0.00  2.45    71y370j6428cb                                     insert into wrh$_thread (snap_...
4                      4                   1           4.00                       0.00  0.98    1tn90bbpyjshq                                     UPDATE wrh$_tempfile tfh SET (...
4                      4                   1           4.00                       0.00  0.98    32mk33ry1g665                                     INSERT INTO wrh$_datafile (sna...
2                      2                   79          0.03                       0.00  0.49    g00cj285jmgsw                                     update sys.mon_mods$ set inser...
1                      1                   12          0.08                       0.00  0.25    0v3dvmc22qnam                                     insert into sys.col_usage$ (ob...
Back to SQL Statistics
Back to Top

SQL ordered by Executions

%CPU - CPU Time as a percentage of Elapsed Time


%IO - User I/O Time as a percentage of Elapsed Time
Total Executions: 3,137
Captured SQL account for 42.9% of Total

Executions  Rows Processed  Rows per Exec  Elapsed Time (s)  %CPU    %IO    SQL Id         SQL Text
656         644             0.98           0.19              48.05   52.87  3c1kubcdjnppq  update sys.col_usage$ set equa...
258         0               0.00           0.01              121.10  0.00   b2gnxm5z6r51n  lock table sys.col_usage$ in e...
79          0               0.00           0.00              126.58  0.00   350f5yrnnmshs  lock table sys.mon_mods$ in ex...
79          67              0.85           0.01              68.92   14.12  g00cj285jmgsw  update sys.mon_mods$ set inser...
36          612             17.00          0.01              106.77  0.00   772s25v1y0x8k  select shared_pool_size_for_es...
36          756             21.00          0.03              101.59  0.00   aykvshm7zsabd  select size_for_estimate, size...
21          21              1.00           0.00              80.06   0.00   87gaftwrm2h68  select o.owner#, o.name, o.nam...
18          17              0.94           0.00              94.79   0.00   96g93hntrzjtr  select /*+ rule */ bucket_cnt,...
15          0               0.00           0.02              106.82  0.00   f3223cb4ng6hq  select next_run_date, obj#, ru...
12          12              1.00           0.01              79.39   22.28  0v3dvmc22qnam  insert into sys.col_usage$ (ob...
Back to SQL Statistics
Back to Top

SQL ordered by Parse Calls

Total Parse Calls: 1,417


Captured SQL account for 75.8% of Total

Parse Calls  Executions  % Total Parses  SQL Id         SQL Module                         SQL Text
258          12          18.21           0v3dvmc22qnam                                     insert into sys.col_usage$ (ob...
258          656         18.21           3c1kubcdjnppq                                     update sys.col_usage$ set equa...
258          258         18.21           b2gnxm5z6r51n                                     lock table sys.col_usage$ in e...
79           79          5.58            350f5yrnnmshs                                     lock table sys.mon_mods$ in ex...
79           79          5.58            g00cj285jmgsw                                     update sys.mon_mods$ set inser...
15           15          1.06            f3223cb4ng6hq                                     select next_run_date, obj#, ru...
12           12          0.85            5rygsj4dbw6jt                                     insert into sys.mon_mods$ (obj...
10           10          0.71            9babjv8yq8ru3  sqlplus@swood.saahyog (TNS V1-V3)  BEGIN DBMS_OUTPUT.GET_LINES(:L...
7            7           0.49            bsa0wjtftg3uw                                     select file# from file$ where ...
4            4           0.28            6xvp6nxs4a9n4                                     select nvl(sum(space), 0) from...
Back to SQL Statistics
Back to Top
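
All of the "SQL ordered by ..." lists are derived from per-snapshot deltas in DBA_HIST_SQLSTAT, so they can be reproduced, or extended over a longer window, with a query along these lines (a minimal sketch; top 10 by parse calls for the 15-16 interval, since the delta columns at snap 16 hold the change since snap 15):

SELECT *
FROM  (SELECT sql_id,
              SUM(parse_calls_delta) AS parse_calls,
              SUM(executions_delta)  AS executions
       FROM   dba_hist_sqlstat
       WHERE  snap_id = 16
       GROUP  BY sql_id
       ORDER  BY parse_calls DESC)
WHERE  ROWNUM <= 10;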

SQL ordered by Sharable Memory

No data exists for this section of the report.


Back to SQL Statistics
Back to Top

SQL ordered by Version Count

No data exists for this section of the report.


Back to SQL Statistics
Back to Top

Complete List of SQL Text

SQL Id    SQL Text
0

g
k
q
5
s insert into wrh$_sessmetric_history (snap_id, dbid, instance_number, begin_time, end_time, sessid, serial#,
6 intsize, group_id, metric_id, value) select :snap_id, :dbid, :instance_number, begtime, endtime, eid, eidsq,
h intsize_csec, groupid, metricid, value from x$kewmdrmv where groupid = 4
q
x
8
h
y
0
k
k
h
h
update seg$ set type#=:4, blocks=:5, extents=:6, minexts=:7, maxexts=:8, extsize=:9, extpct=:10, user#=:11,
b
iniexts=:12, lists=decode(:13, 65535, NULL, :13), groups=decode(:14, 65535, NULL, :14), cachehint=:15,
2
hwmincr=:16, spare1=DECODE(:17, 0, NULL, :17), scanhint=:18, bitmapranges=:19 where ts#=:1 and file#=:2 and
w
block#=:3
9
3
c
x
0
0
u
t
2
y
insert into WRM$_SNAPSHOT (snap_id, dbid, instance_number, startup_time, begin_interval_time,
w
end_interval_time, snap_level, status, error_count, bl_moved, snap_flag) values (:snap_id, :dbid,
7
:instance_number, :startup_time, :begin_interval_time, :end_interval_time, :snap_level, :status, 0, 0, :bind1)
m
xf
0
a
q
0
v
3
d
v
insert into sys.col_usage$ (obj#, intcol#, equality_preds, equijoin_preds, nonequijoin_preds, range_preds,
m
like_preds, null_preds, timestamp) values ( :objn, :coln, decode(bitand(:flag, 1), 0, 0, 1), decode(bitand(:flag, 2), 0,
c
0, 1), decode(bitand(:flag, 4), 0, 0, 1), decode(bitand(:flag, 8), 0, 0, 1), decode(bitand(:flag, 16), 0, 0, 1),
2
decode(bitand(:flag, 32), 0, 0, 1), :time)
2
q
n
a
m
0
z
1
0
3
insert into wrh$_sysmetric_summary (snap_id, dbid, instance_number, begin_time, end_time, intsize, group_id,
1
metric_id, num_interval, maxval, minval, average, standard_deviation, sum_squares) select :snap_id, :dbid,
9
:instance_number, begtime, endtime, intsize_csec, groupid, metricid, numintv, max, min, avg, std, sumsq FROM
9
x$kewmsmdv WHERE groupid = 2
1

b
d
7
w
1
insert into WRH$_IOSTAT_FILETYPE (snap_id, dbid, instance_number, filetype_id, small_read_megabytes,
c
small_write_megabytes, large_read_megabytes, large_write_megabytes, small_read_reqs, small_write_reqs,
q
small_sync_read_reqs, large_read_reqs, large_write_reqs, small_read_servicetime, small_write_servicetime,
3
small_sync_read_latency, large_read_servicetime, large_write_servicetime, retries_on_error) (select :snap_id,
q
:dbid, :instance_number, filetype_id, sum(small_read_megabytes) small_read_megabytes,
r
sum(small_write_megabytes) small_write_megabytes, sum(large_read_megabytes) large_read_megabytes,
7
sum(large_write_megabytes) large_write_megabytes, sum(small_read_reqs) small_read_reqs,
7
sum(small_write_reqs) small_write_reqs, sum(small_sync_read_reqs) small_sync_read_reqs,
4
sum(large_read_reqs) large_read_reqs, sum(large_write_reqs) large_write_reqs, sum(small_read_servicetime)
c
small_read_servicetime, sum(small_write_servicetime) small_write_servicetime, sum(small_sync_read_latency)
u
small_sync_read_latency, sum(large_read_servicetime) large_read_servicetime, sum(large_write_servicetime)
4
large_write_servicetime, sum(retries_on_error) retries_on_error from v$iostat_file group by filetype_id)
5
1
t
n
9
0 UPDATE wrh$_tempfile tfh SET (snap_id, filename, tsname) = (SELECT :lah_snap_id, tf.name name, ts.name
b tsname FROM v$tempfile tf, ts$ ts WHERE tf.ts# = ts.ts# AND tfh.file# = tf.file# AND tfh.creation_change# =
b tf.creation_change#) WHERE (file#, creation_change#) IN (SELECT tf.tfnum, to_number(tf.tfcrc_scn)
p creation_change# FROM x$kcctf tf WHERE tf.tfdup != 0) AND dbid = :dbid AND snap_id < :snap_id
yj
s
h
q
1
u
k
5
m
5 BEGIN dbms_workload_repository.create_snapshot; END;
q
b
zj
1
vt
1
u
y
m
insert into wrh$_rowcache_summary (snap_id, dbid, instance_number, parameter, total_usage, usage, gets,
1
getmisses, scans, scanmisses, scancompletes, modifications, flushes, dlm_requests, dlm_conflicts, dlm_releases)
vt
select :snap_id, :dbid, :instance_number, parameter, sum("COUNT"), sum(usage), sum(gets), sum(getmisses),
a
sum(scans), sum(scanmisses), sum(scancompletes), sum(modifications), sum(flushes), sum(dlm_requests),
9
sum(dlm_conflicts), sum(dlm_releases) from v$rowcache group by parameter order by parameter
9
5
y
b
2
x
n
8
y SELECT S.SNAP_ID, S.BEGIN_INTERVAL, S.END_INTERVAL FROM (SELECT SS.SNAP_ID,
x CAST(SS.BEGIN_INTERVAL_TIME AS DATE) AS BEGIN_INTERVAL, CAST(SS.END_INTERVAL_TIME AS DATE)
0 AS END_INTERVAL FROM WRM$_SNAPSHOT SS WHERE SS.DBID = :B3 AND SS.INSTANCE_NUMBER = :B2

u AND SS.SNAP_ID <= :B1 ORDER BY SS.DBID, SS.INSTANCE_NUMBER, SS.SNAP_ID DESC) S WHERE ROWNUM
z <= 2
7
5
f
m
3
2
m
k
INSERT INTO wrh$_datafile (snap_id, dbid, file#, creation_change#, filename, ts#, tsname, block_size) SELECT /*+
3
ordered index(f) index(ts) */ :lah_snap_id lah, :dbid dbid, f.file# file#, f.crscnbas + (f.crscnwrp * power(2, 32))
3
creation_change#, v.name filename, ts.ts# ts#, ts.name tsname, ts.blocksize block_size FROM v$dbfile v, file$ f,
r
ts$ ts WHERE f.file# = v.file# and f.status$ = 2 and f.ts# = ts.ts# and not exists (SELECT 1 from wrh$_datafile dfh
y
WHERE dfh.file# = f.file# AND dfh.creation_change# = (f.crscnbas + (f.crscnwrp * power(2, 32))) AND dfh.dbid =
1
:dbid2)
g
6
6
5
3
5
0
f
5
y
r lock table sys.mon_mods$ in exclusive mode nowait
n
n
m
s
h
s
3
5
0 insert into wrh$_tablespace_stat (snap_id, dbid, instance_number, ts#, tsname, contents, status,
m segment_space_management, extent_management, is_backup) select :snap_id, :dbid, :instance_number, ts.ts#,
y ts.name as tsname, decode(ts.contents$, 0, (decode(bitand(ts.flags, 16), 16, 'UNDO', 'PERMANENT')), 1,
u 'TEMPORARY') as contents, decode(ts.online$, 1, 'ONLINE', 2, 'OFFLINE', 4, 'READ ONLY', 'UNDEFINED') as
y status, decode(bitand(ts.flags, 32), 32, 'AUTO', 'MANUAL') as segspace_mgmt, decode(ts.bitmapped, 0,
x 'DICTIONARY', 'LOCAL') as extent_management, (case when b.active_count > 0 then 'TRUE' else 'FALSE' end) as
0 is_backup from sys.ts$ ts, (select dfile.ts#, sum( case when bkup.status = 'ACTIVE' then 1 else 0 end ) as
t active_count from v$backup bkup, file$ dfile where bkup.file# = dfile.file# and dfile.status$ = 2 group by dfile.ts#) b
1 where ts.online$ != 3 and bitand(ts.flags, 2048) != 2048 and ts.ts# = b.ts#
d
6
3
9
m
4
s
x
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#, length, piece from idl_ub2$ where obj#=:1 and part=:2 and
9
version=:3 order by piece#
k
6
3
b
a
2
3

c
1
k
update sys.col_usage$ set equality_preds = equality_preds + decode(bitand(:flag, 1), 0, 0, 1), equijoin_preds =
u
equijoin_preds + decode(bitand(:flag, 2), 0, 0, 1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,
b
4), 0, 0, 1), range_preds = range_preds + decode(bitand(:flag, 8), 0, 0, 1), like_preds = like_preds +
c
decode(bitand(:flag, 16), 0, 0, 1), null_preds = null_preds + decode(bitand(:flag, 32), 0, 0, 1), timestamp = :time
dj
where obj# = :objn and intcol# = :coln
n
p
p
q
3
kt
a
c
v
9 select owner#, name, namespace, remoteowner, linkname, p_timestamp, p_obj#, nvl(property, 0), subname, type#,
r d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#(+) order by order#
5
6
b
5
1
3
n
k
d
3
g select obj#, type#, ctime, mtime, stime, status, dataobj#, flags, oid$, spare1, spare2 from obj$ where owner#=:1
3j and name=:2 and namespace=:3 and remoteowner is null and linkname is null and subname is null
u
5
p
h
1
4
d
y
1
x
insert into wrh$_system_event (snap_id, dbid, instance_number, event_id, total_waits, total_timeouts,
m
time_waited_micro, total_waits_fg, total_timeouts_fg, time_waited_micro_fg) select :snap_id, :dbid,
4
:instance_number, event_id, total_waits, total_timeouts, time_waited_micro, total_waits_fg, total_timeouts_fg,
n
time_waited_micro_fg from v$system_event order by event_id
x
c
0
g
f
4
r
c
rx insert into wrh$_shared_pool_advice (snap_id, dbid, instance_number, shared_pool_size_for_estimate,
k shared_pool_size_factor, estd_lc_size, estd_lc_memory_objects, estd_lc_time_saved, estd_lc_time_saved_factor,
v estd_lc_load_time, estd_lc_load_time_factor, estd_lc_memory_object_hits) select :snap_id, :dbid,
a :instance_number, shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_size,
q estd_lc_memory_objects, estd_lc_time_saved, estd_lc_time_saved_factor, estd_lc_load_time,
t estd_lc_load_time_factor, estd_lc_memory_object_hits from v$shared_pool_advice
3
7

f
5
8
6
b
2
u
insert into wrh$_sysstat (snap_id, dbid, instance_number, stat_id, value) select :snap_id, :dbid, :instance_number,
d
stat_id, value from v$sysstat order by stat_id
q
6
d
b
n
g
5
a
x
5
x
insert into WRH$_EVENT_HISTOGRAM (snap_id, dbid, instance_number, event_id, wait_time_milli, wait_count)
u
select :snap_id, :dbid, :instance_number, d.ksledhash, s.kslsesmaxdur, s.kslsesval from x$kslseshist s, x$ksled d
9
where s.kslsesenum = d.indx and s.kslsesval > 0 order by d.ksledhash, s.kslsesmaxdur
6
u
2
zt
d
5
d
f
m insert into wrh$_memory_resize_ops (snap_id, dbid, instance_number, component, oper_type, start_time,
d end_time, target_size, oper_mode, parameter, initial_size, final_size, status) select snap_id, dbid, instance_num,
8 component, oper_type, start_time, max(end_time), target_size, max(oper_mode), max(parameter),
2 max(initial_size), max(final_size), max(status) from (select :snap_id snap_id, :dbid dbid, :instance_number
3 instance_num, component, oper_type, start_time, end_time, target_size, oper_mode, parameter, initial_size,
r final_size, status from v$memory_resize_ops where :begin_interval_time <= end_time and end_time <
8 :end_interval_time) group by snap_id, dbid, instance_num, component, oper_type, start_time, target_size
d
s
p
5
r
y
g
sj
insert into sys.mon_mods$ (obj#, inserts, updates, deletes, timestamp, flags, drop_segments) values (:1, :2, :3, :4,
4
:5, :6, :7)
d
b
w
6j
t
5
y
3
g
9 insert into WRH$_RSRC_PLAN (snap_id, dbid, instance_number, sequence#, start_time, end_time, plan_id,
x plan_name, cpu_managed) (select :snap_id, :dbid, :instance_number, sequence#, start_time, end_time, id, name,
2 cpu_managed from v$rsrc_plan_history where sequence# not in (select v.sequence# from wrh$_rsrc_plan w,
y v$rsrc_plan_history v where w.instance_number = :instance_number and w.sequence# = v.sequence# and
c w.start_time = v.start_time) and end_time is not null and id is not null)

m
w
0
s
6
6
insert into wrh$_instance_recovery (snap_id, dbid, instance_number, recovery_estimated_ios, actual_redo_blks,
g
target_redo_blks, log_file_size_redo_blks, log_chkpt_timeout_redo_blks, log_chkpt_interval_redo_blks,
s
fast_start_io_target_redo_blks, target_mttr, estimated_mttr, ckpt_block_writes, optimal_logfile_size,
9
estd_cluster_available_time, writes_mttr, writes_logfile_size, writes_log_checkpoint_settings, writes_other_settings,
0
writes_autotune, writes_full_thread_ckpt) select :snap_id, :dbid, :instance_number, recovery_estimated_ios,
fy
actual_redo_blks, target_redo_blks, log_file_size_redo_blks, log_chkpt_timeout_redo_blks,
y
log_chkpt_interval_redo_blks, fast_start_io_target_redo_blks, target_mttr, estimated_mttr, ckpt_block_writes,
n
optimal_logfile_size, estd_cluster_available_time, writes_mttr, writes_logfile_size, writes_log_checkpoint_settings,
k
writes_other_settings, writes_autotune, writes_full_thread_ckpt from v$instance_recovery
s
7
6
aj
k
h
u
k
begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;
k
7
8
n
s
r
6
c
update wrh$_seg_stat_obj sso set (index_type, base_obj#, base_object_name, base_object_owner) = (select
0
decode(ind.type#, 1, 'NORMAL'|| decode(bitand(ind.property, 4), 0, '', 4, '/REV'), 2, 'BITMAP', 3, 'CLUSTER', 4, 'IOT
6
- TOP', 5, 'IOT - NESTED', 6, 'SECONDARY', 7, 'ANSI', 8, 'LOB', 9, 'DOMAIN') as index_type, base_obj.obj# as
m
base_obj#, base_obj.name as base_object_name, base_owner.name as base_object_owner from sys.ind$ ind,
fv
sys.user$ base_owner, sys.obj$ base_obj where ind.obj# = sso.obj# and ind.dataobj# = sso.dataobj# and ind.bo#
0
= base_obj.obj# and base_obj.owner# = base_owner.user#) where sso.dbid = :dbid and (obj#, dataobj#) in (select
1
objn_kewrseg, objd_kewrseg from x$kewrtsegstat ss1 where objtype_kewrseg = 1) and sso.snap_id = :lah_snap_id
xt
and sso.object_type = 'INDEX'
2
h
6
h
w
j
m insert into wrh$_enqueue_stat (snap_id, dbid, instance_number, eq_type, req_reason, total_req#, total_wait#,
jg succ_req#, failed_req#, cum_wait_time, event#) select :snap_id, :dbid, :instance_number, eq_type, req_reason,
r total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time, event# from v$enqueue_statistics where total_req#
p != 0 order by eq_type, req_reason
s
u
a
a
6
x
p
insert into WRH$_IOSTAT_FUNCTION (snap_id, dbid, instance_number, function_id, small_read_megabytes,
s
small_write_megabytes, large_read_megabytes, large_write_megabytes, small_read_reqs, small_write_reqs,
r
large_read_reqs, large_write_reqs, number_of_waits, wait_time) (select :snap_id, :dbid, :instance_number,
8
function_id, sum(small_read_megabytes) small_read_megabytes, sum(small_write_megabytes)
v
small_write_megabytes, sum(large_read_megabytes) large_read_megabytes, sum(large_write_megabytes)
2
large_write_megabytes, sum(small_read_reqs) small_read_reqs, sum(small_write_reqs) small_write_reqs,
7
sum(large_read_reqs) large_read_reqs, sum(large_write_reqs) large_write_reqs, sum(number_of_waits)
p
number_of_waits, sum(wait_time) wait_time from v$iostat_function group by function_id)
m
y
2
6
x
v
p
6
n
x select nvl(sum(space), 0) from recyclebin$ where ts# = :1
s
4
a
9
n
4
7
1
k
5
0
insert into wrh$_latch_misses_summary (snap_id, dbid, instance_number, parent_name, where_in_code,
2
nwfail_count, sleep_count, wtr_slp_count) select :snap_id, :dbid, :instance_number, parent_name, "WHERE",
4
sum(nwfail_count), sum(sleep_count), sum(wtr_slp_count) from v$latch_misses where sleep_count > 0 group by
z
parent_name, "WHERE" order by parent_name, "WHERE"
n
7
c
9
a
7
1
y
3
7
insert into wrh$_thread (snap_id, dbid, instance_number, thread#, thread_instance_number, status, open_time,
0j
current_group#, sequence#) select :snap_id, :dbid, :instance_number, t.thread#, i.instance_number, t.status,
6
t.open_time, t.current_group#, t.sequence# from v$thread t, v$instance i where i.thread#(+) = t.thread#
4
2
8
c
b
7
7
2
s
2
5
select shared_pool_size_for_estimate s, shared_pool_size_factor * 100 f, estd_lc_load_time l, 0 from
v
v$shared_pool_advice
1
y
0
x
8
k
7
9
u

v insert into wrh$_buffer_pool_statistics (snap_id, dbid, instance_number, id, name, block_size, set_msize,
s cnum_repl, cnum_write, cnum_set, buf_got, sum_write, sum_scan, free_buffer_wait, write_complete_wait,
z buffer_busy_wait, free_buffer_inspected, dirty_buffers_inspected, db_block_change, db_block_gets,
1 consistent_gets, physical_reads, physical_writes) select :snap_id, :dbid, :instance_number, id, name, block_size,
g set_msize, cnum_repl, cnum_write, cnum_set, buf_got, sum_write, sum_scan, free_buffer_wait,
1 write_complete_wait, buffer_busy_wait, free_buffer_inspected, dirty_buffers_inspected, db_block_change,
c db_block_gets, consistent_gets, physical_reads, physical_writes from v$buffer_pool_statistics
1
6
8
7
g
7
3
insert into WRH$_SERVICE_STAT (snap_id, dbid, instance_number, service_name_hash, stat_id, value) select
2
:snap_id, :dbid, :instance_number, stat.service_name_hash, stat.stat_id, stat.value from v$active_services asvc,
rx
v$service_stats stat where asvc.name_hash = stat.service_name_hash
1
6j
8j
c
7
g
9
w
4
m insert into wrh$_service_name (snap_id, dbid, service_name_hash, service_name) select :lah_snap_id, :dbid,
q t2.name_hash, t2.name from x$kewrattrnew t1, v$active_services t2 where t1.num1_kewrattr = t2.name_hash and
g t2.name is not null
a
f
1
w
9
7
qj
h
f
5 SELECT snap_id , OBJ#, DATAOBJ# FROM (SELECT /*+ ordered use_nl(t2) index(t2) */ t2.snap_id ,
d t1.OBJN_KEWRSEG OBJ#, t1.OBJD_KEWRSEG DATAOBJ# FROM X$KEWRTSEGSTAT t1,
z WRH$_SEG_STAT_OBJ t2 WHERE t2.dbid(+) = :dbid AND t2.OBJ#(+) = t1.OBJN_KEWRSEG AND
m t2.DATAOBJ#(+) = t1.OBJD_KEWRSEG) WHERE nvl(snap_id, 0) < :snap_id
a
z
s
r
7 insert into WRH$_IOSTAT_DETAIL (snap_id, dbid, instance_number, function_id, filetype_id,
t small_read_megabytes, small_write_megabytes, large_read_megabytes, large_write_megabytes,
d small_read_reqs, small_write_reqs, large_read_reqs, large_write_reqs, number_of_waits, wait_time) (select
u :snap_id, :dbid, :instance_number, function_id, filetype_id, nvl(sum(small_read_megabytes), 0)
g small_read_megabytes, nvl(sum(small_write_megabytes), 0) small_write_megabytes,
a nvl(sum(large_read_megabytes), 0) large_read_megabytes, nvl(sum(large_write_megabytes), 0)
u large_write_megabytes, nvl(sum(small_read_reqs), 0) small_read_reqs, nvl(sum(small_write_reqs), 0)
k small_write_reqs, nvl(sum(large_read_reqs), 0) large_read_reqs, nvl(sum(large_write_reqs), 0) large_write_reqs,
2 nvl(sum(number_of_waits), 0) number_of_waits, nvl(sum(wait_time), 0) wait_time from v$iostat_function_detail
2j group by function_id, filetype_id having sum(small_read_megabytes + small_write_megabytes +
8 large_read_megabytes + large_write_megabytes + small_read_reqs + small_write_reqs + large_read_reqs +
t large_write_reqs) > 0 )
7
v

8
0
b insert into wrh$_pga_target_advice (snap_id, dbid, instance_number, pga_target_for_estimate, pga_target_factor,
3 advice_status, bytes_processed, estd_extra_bytes_rw, estd_pga_cache_hit_percentage, estd_overalloc_count,
5 estd_time) select :snap_id, :dbid, :instance_number, pga_target_for_estimate, pga_target_factor, advice_status,
m bytes_processed, estd_extra_bytes_rw, estd_pga_cache_hit_percentage, estd_overalloc_count, estd_time from
1 v$pga_target_advice where advice_status = 'ON'
n
5
4
u
8
4
k
6
6 insert into wrh$_bg_event_summary (snap_id, dbid, instance_number, event_id, total_waits, total_timeouts,
tf time_waited_micro) select :snap_id, :dbid, :instance_number, event_id, total_waits - total_waits_fg, total_timeouts -
2 total_timeouts_fg, time_waited_micro - time_waited_micro_fg from v$system_event where (total_waits -
s total_waits_fg) > 0 order by event_id
7
y
1
c
8
4
q
u
b insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, gets, misses, sleeps, immediate_gets,
b immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number,
r hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4,
s wait_time from v$latch order by hash
r
0
kf
n
8
7
g
a
ft
w
select o.owner#, o.name, o.namespace, o.remoteowner, o.linkname, o.subname from obj$ o where o.obj#=:1
r
m
2
h
6
8
8
s
w
y
p
b
b select order#, columns, types from access$ where d_obj#=:1
r
0
m
3
7

2
9
5
3
b
insert into wrh$_filestatxs (snap_id, dbid, instance_number, file#, creation_change#, phyrds, phywrts, singleblkrds,
g
readtim, writetim, singleblkrdtim, phyblkrd, phyblkwrt, wait_count, time) select :snap_id, :dbid, :instance_number,
y
df.file#, (df.crscnbas + (df.crscnwrp * power(2, 32))) creation_change#, fs.kcfiopyr, fs.kcfiopyw, fs.kcfiosbr,
v
floor(fs.kcfioprt / 10000), floor(fs.kcfiopwt / 10000), floor(fs.kcfiosbt / 10000), fs.kcfiopbr, fs.kcfiopbw, fw.count,
r
fw.time from x$kcfio fs, file$ df, x$kcbfwait fw where fw.indx+1 = fs.kcfiofno and df.file# = fs.kcfiofno and df.status$ =
v
2
r
y
q
1
9
6
g
9
3 select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt,
h lowval, hival, density, col#, spare1, spare2, avgcln from hist_head$ where obj#=:1 and intcol#=:2
n
tr
zj
tr
9
b
a
bj
v
8
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
y
q
8
r
u
3
9
insert into wrh$_shared_server_summary (snap_id, dbid, instance_number, num_samples, sample_time,
n
sampled_total_conn, sampled_active_conn, sampled_total_srv, sampled_active_srv, sampled_total_disp,
8
sampled_active_disp, srv_busy, srv_idle, srv_in_net, srv_out_net, srv_messages, srv_bytes, cq_wait, cq_totalq,
x
dq_totalq) select :snap_id, :dbid, :instance_number, a.kmmsasnum, a.kmmsastime, a.kmmsastvc, a.kmmsasavc,
c
a.kmmsastsrv, a.kmmsasasrv, a.kmmsastdisp, a.kmmsasadisp, s.busy, s.idle, s.in_net, s.out_net, s.messages,
3
s.bytes, cq.wait, cq.totalq, dq.totalq from x$kmmsas a, (select sum(wait) as wait, sum(totalq) as totalq from v$queue
1
where type = 'COMMON') cq, (select sum(busy) as busy, sum(idle) as idle, sum(in_net) as in_net, sum(out_net) as
4
out_net, sum(messages) as messages, sum(bytes) as bytes from (select sum(busy) as busy, sum(idle) as idle,
x
sum(in_net) as in_net, sum(out_net) as out_net, sum(messages) as messages, sum(bytes) as bytes from
d
v$shared_server union select kmmhstsbsy, kmmhstsidl, kmmhstsneti, kmmhstsneto, kmmhstsnmg, kmmhstsnmb
m
from x$kmmhst)) s, (select sum(totalq) as totalq from (select sum(totalq) as totalq from v$q ueue where type =
0
'DISPATCHER' union select kmmhstdqtnc from x$kmmhst)) dq
t
9
rf
k
m
1
b UPDATE WRH$_SEG_STAT_OBJ SET snap_id = :lah_snap_id WHERE dbid = :dbid AND (OBJ#, DATAOBJ#) IN
f (SELECT NUM1_KEWRATTR, NUM2_KEWRATTR FROM X$KEWRATTRSTALE)
9
1
5

a
0
a
y
k
v
s
h
select size_for_estimate, size_factor * 100 f, estd_physical_read_time, estd_physical_reads from
m
v$db_cache_advice where id = '3'
7
z
s
a
b
d
b
0
7
v
UPDATE wrh$_datafile dfh SET (snap_id, filename, tsname) = (SELECT /*+ ordered use_nl(f) index(f) index(ts) */
c
:lah_snap_id, v.name name, ts.name tsname FROM v$dbfile v, file$ f, ts$ ts WHERE f.file# = v.file# AND f.status$ =
v
2 AND f.ts# = ts.ts# AND f.file# = dfh.file# AND (f.crscnbas + (f.crscnwrp * power(2, 32))) = dfh.creation_change#)
u
WHERE (file#, creation_change#) IN (SELECT f.file#, f.crscnbas + (f.crscnwrp * power(2, 32)) creation_change#
xr
FROM file$ f WHERE f.status$ = 2) AND dbid = :dbid AND snap_id < :snap_id
y
v
g
9
b
2
g
n
x
m
5 lock table sys.col_usage$ in exclusive mode nowait
z
6
r
5
1
n
b
s
a
0
w
jtf select file# from file$ where ts#=:1
t
g
3
u
w
b
u
n
s
s
q insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZE, SGA_SIZE_FACTOR,
9 ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbid, :instance_number, SGA_SIZE,
5 SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS from v$sga_target_advice

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
0
s
n
h
f
b
w
s
x
6 insert into wrh$_filemetric_history (snap_id, dbid, instance_number, fileid, creationtime, begin_time, end_time,
u intsize, group_id, avgreadtime, avgwritetime, physicalread, physicalwrite, phyblkread, phyblkwrite) select :snap_id,
tf :dbid, :instance_number, fileid, creationtime, begtime, endtime, intsize_csec, groupid, avrdtime, avwrtime, phyread,
b phywrite, phybkrd, phybkwr from x$kewmflmv
h
1
5
q
c
p
3
g
p
d insert into wrh$_sgastat (snap_id, dbid, instance_number, pool, name, bytes) select :snap_id, :dbid,
7 :instance_number, pool, name, bytes from (select pool, name, bytes, 100*(bytes) / (sum(bytes) over (partition by
z pool)) part_pct from v$sgastat) where part_pct >= 1 or pool is null or name = 'free memory' order by name, pool
8
7
8
w
8
c
v
n
5
4
b
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#, length, piece from idl_ub1$ where obj#=:1 and part=:2 and
7
version=:3 order by piece#
y
z
0
s
8
u
insert into wrh$_seg_stat (snap_id, dbid, instance_number, ts#, obj#, dataobj#, logical_reads_total,
logical_reads_delta, buffer_busy_waits_total, buffer_busy_waits_delta, db_block_changes_total,
db_block_changes_delta, physical_reads_total, physical_reads_delta, physical_writes_total, physical_writes_delta,
physical_reads_direct_total, physical_reads_direct_delta, physical_writes_direct_total,
d physical_writes_direct_delta, itl_waits_total, itl_waits_delta, row_lock_waits_total, row_lock_waits_delta,
8 gc_buffer_busy_total, gc_buffer_busy_delta, gc_cr_blocks_received_total, gc_cr_blocks_received_delta,
7 gc_cu_blocks_received_total, gc_cu_blocks_received_delta, space_used_total, space_used_delta,
c space_allocated_total, space_allocated_delta, table_scans_total, table_scans_delta, chain_row_excess_total,
d chain_row_excess_delta, physical_read_requests_total, physical_read_requests_delta,
6 physical_write_requests_total, physical_write_requests_delta, optimized_physical_reads_total,
s optimized_physical_reads_delta) select :snap_id, :dbid, :instance_number, tsn_kewrseg, objn_kewrseg,
7 objd_kewrseg, log_rds_kewrseg, log_rds_dl_kewrseg, buf_busy_wts_kewrseg, buf_busy_wts_dl_kewrseg,
2 db_blk_chgs_kewrseg, db_blk_chgs_dl_kewrseg, phy_rds_kewrseg, phy_rds_dl_kewrseg, phy_wrts_kewrseg,
f phy_wrts_dl_kewrseg, phy_rds_drt_kewrseg, phy_rds_drt_dl_kewrseg, phy_wrts _drt_kewrseg,
1 phy_wrts_drt_dl_kewrseg, itl_wts_kewrseg, itl_wts_dl_kewrseg, row_lck_wts_kewrseg, row_lck_wts_dl_kewrseg,
9 gc_buf_busy_kewrseg, gc_buf_busy_dl_kewrseg, gc_cr_blks_rcv_kewrseg, gc_cr_blks_rcv_dl_kewrseg,

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
7 gc_cu_blks_rcv_kewrseg, gc_cu_blks_rcv_dl_kewrseg, space_used_kewrseg, space_used_dl_kewrseg,
space_alloc_kewrseg, space_alloc_dl_kewrseg, tbl_scns_kewrseg, tbl_scns_dl_kewrseg, chn_exc_kewrseg,
chn_exc_dl_kewrseg, phy_rd_reqs_kewrseg, phy_rd_reqs_dl_kewrseg, phy_wrt_reqs_kewrseg,
phy_wrt_reqs_dl_kewrseg, opt_phy_rds_kewrseg, opt_phy_rds_dl_kewrseg from X$KEWRTSEGSTAT order by
objn_kewrseg, objd_kewrseg
d
b
y
c
q
2 SELECT snap_id , SQL_ID FROM (SELECT /*+ ordered use_nl(t2) index(t2) */ t2.snap_id , t1.SQLID_KEWRSIE
n SQL_ID FROM X$KEWRSQLIDTAB t1, WRH$_SQLTEXT t2 WHERE t2.dbid(+) = :dbid AND t2.SQL_ID(+) =
4 t1.SQLID_KEWRSIE) WHERE nvl(snap_id, 0) < :snap_id
3
t
7
n
5
d
g
z
m
insert into WRH$_SERVICE_WAIT_CLASS (snap_id, dbid, instance_number, service_name_hash, wait_class_id,
k
wait_class, total_waits, time_waited) select :snap_id, :dbid, :instance_number, stat.service_name_hash,
v
stat.wait_class_id, stat.wait_class, stat.total_waits, stat.time_waited from v$active_services asvc,
9j
v$service_wait_class stat where asvc.name_hash = stat.service_name_hash
6
rf
h
g
d
n
w
p
insert into wrh$_process_memory_summary (snap_id, dbid, instance_number, category, num_processes,
m
non_zero_allocs, used_total, allocated_total, allocated_stddev, allocated_max, max_allocated_max) select
0
:snap_id, :dbid, :instance_number, category, COUNT(*) num_processes, SUM(DECODE(allocated, 0, 0, 1))
g
non_zero_allocs, SUM(used) used_total, SUM(allocated) allocated_total, STDDEV(allocated) allocated_stddev,
d
MAX(allocated) allocated_max, MAX(max_allocated) max_allocated_max from v$process_memory group by
c
category
c
r
p
h
insert into wrh$_seg_stat_obj ( snap_id , dbid , ts# , obj# , dataobj# , owner , object_name , subobject_name ,
partition_type , object_type , tablespace_name) select :lah_snap_id , :dbid , ss1.tsn_kewrseg , ss1.objn_kewrseg ,
ss1.objd_kewrseg , ss1.ownername_kewrseg , ss1.objname_kewrseg , ss1.subobjname_kewrseg ,
d
decode(po.parttype, 1, 'RANGE', 2, 'HASH', 3, 'SYSTEM', 4, 'LIST', NULL, 'NONE', 'UNKNOWN') ,
p
decode(ss1.objtype_kewrseg, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER', 4, 'VIEW', 5, 'SYNONYM',
y
6, 'SEQUENCE', 7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE', 11, 'PACKAGE BODY', 12, 'TRIGGER', 13,
z
'TYPE', 14, 'TYPE BODY', 19, 'TABLE PARTITION', 20, 'INDEX PARTITION', 21, 'LOB', 22, 'LIBRARY', 23,
g
'DIRECTORY', 24, 'QUEUE', 28, 'JAVA SOURCE', 29, 'JAVA CLASS', 30, 'JAVA RESOURCE', 32, 'INDEXTYPE', 33,
5
'OPERATOR', 34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION', 40, 'LOB PARTITION', 41, 'LOB
d
SUBPARTITION', 42, 'MATERIALIZED VIEW', 43, 'DIMENSION', 44, 'CONTEXT', 47, 'RESOURCE PLAN', 48,
s
'CONSUMER GROUP', 51, 'SUBSCRIPTION', 52, 'LOCATION', 55, 'XML SCHEMA', 56, 'JAVA DATA', 57,
n
'SECURITY PROFILE', 'UNDEFINED') , ss1.tsname_kewrseg from x$kewrattrnew at, x$kewrtsegstat ss1, (select
3
tp.obj#, pob.parttype from sys.tabpart$ tp, sys.partobj$ pob where tp.bo# = pob.obj# union all select ip.obj#,
g
pob.parttype from sys.indpart$ ip, sys.partobj$ pob where ip.bo# = pob.obj#) po where at.num1_kewrattr =
6
ss1.objn_kewrseg and at.num2_kewrattr = ss1.objd_kewrseg and at.num1_kewrattr = po.obj#(+) and
0
(ss1.objtype_kewrseg not in (1 /* INDEX - handled below */, 10 /* NON-EXISTENT */) or (ss1.objtype_kewrseg = 1
and 1 = (select 1 from ind$ i where i.obj# = ss1.objn_kewrseg and i.type# in (1, 2, 3, 4, 5, 6, 7, 8, 9)))) and s

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
s1.objname_kewrseg != '_NEXT_OBJECT' and ss1.objname_kewrseg != '_default_auditing_options_'
insert into WRH$_RSRC_CONSUMER_GROUP (snap_id, dbid, instance_number, sequence#,
consumer_group_id, consumer_group_name, requests, cpu_wait_time, cpu_waits, consumed_cpu_time, yields,
active_sess_limit_hit, undo_limit_hit, switches_in_cpu_time, switches_out_cpu_time, switches_in_io_megabytes,
d
switches_out_io_megabytes, switches_in_io_requests, switches_out_io_requests, sql_canceled,
u
active_sess_killed, idle_sess_killed, idle_blkr_sess_killed, queued_time, queue_time_outs, io_service_time,
7
io_service_waits, small_read_megabytes, small_write_megabytes, large_read_megabytes, large_write_megabytes,
v
small_read_requests, small_write_requests, large_read_requests, large_write_requests) (select :snap_id, :dbid,
x
:instance_number, cg.sequence#, cg.id, cg.name, cg.requests, cg.cpu_wait_time, cg.cpu_waits,
b
cg.consumed_cpu_time, cg.yields, cg.active_sess_limit_hit, cg.undo_limit_hit, cg.switches_in_cpu_time,
d
cg.switches_out_cpu_time, cg.switches_in_io_megabytes, cg.switches_out_io_megabytes,
k
cg.switches_in_io_requests, cg.switches_out_io_requests, cg.sql_canceled, cg.active_sess_killed,
4
cg.idle_sess_killed, cg.idle_blkr_sess_killed, cg.queued_time, cg.queue_time_outs, cg.io_service_time,
n
cg.io_service_waits, cg.small_read_megabytes, cg.small_write_megabytes, cg.large_read_megabytes,
9
cg.large_write_megabytes, cg.small_read_requests, cg.small_write_requests, cg.large_read_requests,
w
cg.large_write_requests from v$rsrc_cons_group_history cg, v$rsrc_plan_history pl where cg.sequence# =
j
pl.sequence# and pl.sequence# not in (select v.sequence# from wrh$_rsrc_plan w, v$rsrc_plan_history v where
w.instance_number = :instance_number and w.sequence# = v.sequence# and w.start_time = v.start_time) and
pl.end_time is not null and pl.id is not null)
f
0
s
0
b insert into wrh$_parameter (snap_id, dbid, instance_number, parameter_hash, value, isdefault, ismodified) select
k :snap_id, :dbid, :instance_number, i.ksppihash hash, substr(sv.ksppstvl, 1, 512), sv.ksppstdf,
5 decode(bitand(sv.ksppstvf, 7), 1, 'MODIFIED', 'FALSE') from x$ksppi i, x$ksppsv sv where i.indx = sv.indx and
k (((i.ksppinm not like '#_%' escape '#') or (sv.ksppstdf = 'FALSE') or (bitand(sv.ksppstvf, 5) > 0)) or (i.ksppinm like
7 '#_#_%' escape '#')) order by hash
1
3
y
b
f
0
w
j2
update wrm$_wr_control set snap_interval = :bind1, snapint_num = :bind2, retention = :bind3, retention_num =
6
:bind4, most_recent_snap_id = :bind5, most_recent_snap_time = :bind6, mrct_snap_time_num = :bind7,
1
status_flag = :bind8, most_recent_purge_time = :bind9, mrct_purge_time_num = :bind10, most_recent_split_id =
b
:bind11, most_recent_split_time = :bind12, swrf_version = :bind13, registration_status = :bind14, mrct_baseline_id
m
= :bind15, topnsql = :bind16, mrct_bltmpl_id = :bind17 where dbid = :dbid
8
s
n
d
select next_run_date, obj#, run_job, sch_job from (select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#, decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job
f
sch_job from (select p.obj# obj#, p.flags flags, p.next_run_date next_run_date, p.job_status job_status,
3
p.class_oid class_oid, p.last_enabled_time last_enabled_time, p.instance_id instance_id, 1 sch_job from
2
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and ((bitand(p.flags, 134217728 + 268435456) = 0) or
2
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and ((p.instance_id is not null and
3
(to_char(p.instance_id) = :1)) or (p.instance_id is null and p.class_oid is not null and p.class_oid in (select b.obj#
c
from sys.scheduler$_class b where bitand(b.flags, :2) <> 0 and lower(b.affinity) = lower(:3)))) UNION ALL select
b
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid, q.last_enabled_time, q.instance_id, 1 from
4
sys.scheduler$_lightweight_job q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 + 268435456)
n
= 0) or (bitand(q.job_status, 1024) <> 0)) and bitand(q.flags, 4096) = 0 and ((q.instance_id is not null and
g
(to_char(q.instance_id) = :4)) or (q.instance_id is null and q.class_oid is not null and q.class_oid in (select c.obj#
6
from sys.scheduler$_c lass c where bitand(c.flags, :5) <> 0 and lower(c.affinity) = lower(:6)))) UNION ALL select
h
j.job, 0, from_tz(cast(j.next_date as timestamp), to_char(systimestamp, 'TZH:TZM')), 1, NULL,
q
from_tz(cast(j.next_date as timestamp), to_char(systimestamp, 'TZH:TZM')), j.field1, 0 from sys.job$ j where j.field1

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
is not null and j.field1 > 0 and j.field1 = :7 and j.this_date is null) a order by 1) where rownum = 1
f
n
k
7
1
insert into wrh$_sysmetric_history (snap_id, dbid, instance_number, begin_time, end_time, intsize, group_id,
5
metric_id, value) select :snap_id, :dbid, :instance_number, begtime, endtime, intsize_csec, groupid, metricid, value
5
from x$kewmdrmv order by groupid, metricid, begtime
m
k
2j
q
6
select next_run_date, obj#, run_job, sch_job from (select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#, decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job
fs sch_job from (select p.obj# obj#, p.flags flags, p.next_run_date next_run_date, p.job_status job_status,
b p.class_oid class_oid, p.last_enabled_time last_enabled_time, p.instance_id instance_id, 1 sch_job from
q sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and ((bitand(p.flags, 134217728 + 268435456) = 0) or
kt (bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and p.instance_id is NULL and (p.class_oid is null
j5 or (p.class_oid is not null and p.class_oid in (select b.obj# from sys.scheduler$_class b where b.affinity is null)))
v UNION ALL select q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid, q.last_enabled_time, q.instance_id, 1
w from sys.scheduler$_lightweight_job q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
6 268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and bitand(q.flags, 4096) = 0 and q.instance_id is NULL and
n (q.class_oid is null or (q.class_oid is not null and q.class_oid in (select c.obj# from sys.scheduler$_class c where
9 c.affinity is null))) UNION ALL select j.job, 0, from_tz(ca st(j.next_date as timestamp), to_char(systimestamp,
'TZH:TZM')), 1, NULL, from_tz(cast(j.next_date as timestamp), to_char(systimestamp, 'TZH:TZM')), NULL, 0 from
sys.job$ j where (j.field1 is null or j.field1 = 0) and j.this_date is null) a order by 1) where rownum = 1
fz
2
h
t
d
7 insert into wrh$_waitstat (snap_id, dbid, instance_number, class, wait_count, time) select :snap_id, :dbid,
p :instance_number, class, "COUNT", time from v$waitstat order by class
7
2
3
p
5
g
0
0
cj
2 update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, deletes = deletes + :del, flags =
8 (decode(bitand(flags, :flag), :flag, flags, flags + :flag)), drop_segments = drop_segments + :dropseg, timestamp =
5j :time where obj# = :objn
m
g
s
w
INSERT /*+ LEADING(@"SEL$F5BB74E1" "H"@"SEL$2" "A"@"SEL$1") USE_NL(@"SEL$F5BB74E1"
"A"@"SEL$1") */ INTO WRH$_ACTIVE_SESSION_HISTORY ( snap_id, dbid, instance_number, sample_id,
sample_time , session_id, session_serial#, session_type , flags , user_id , sql_id, sql_child_number, sql_opcode,
force_matching_signature , top_level_sql_id, top_level_sql_opcode , sql_plan_hash_value, sql_plan_line_id ,
sql_plan_operation#, sql_plan_options# , sql_exec_id, sql_exec_start , plsql_entry_object_id,
plsql_entry_subprogram_id , plsql_object_id, plsql_subprogram_id , qc_instance_id, qc_session_id,
qc_session_serial# , event_id, seq#, p1, p2, p3 , wait_time, time_waited , blocking_session,
blocking_session_serial#, blocking_inst_id , current_obj#, current_file#, current_block#, current_row# ,
top_level_call#, consumer_group_id, xid, remote_instance#, time_model , service_hash, program, module, action,

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
client_id, machine, port, ecid , tm_delta_time, tm_delta_cpu_time, tm_delta_db_time, delta_time,
delta_read_io_requests, delta_write_io_requests, delta_read_io_bytes, delta_write_io_bytes,
g
delta_interconnect_io_bytes, pga_allocated, temp_space_allocated ) (SELECT :snap_id, :dbid, :instance_number,
3
a.sample_id, a.sample_time , a.session_id, a.session_serial#, a.session_type , decode(a.flags, 0,
b
to_number(NULL), a.flags) , a.user_id , a.sql_id, a.sql_child_number, a.sql_opcode, a.force_matching_signature ,
2
a.top_level_sql_id, a.top_level_sql_opcode , a.sql_plan_hash_value, a.sql_plan_line_id , a.sql_plan_operation#,
q
a.sql_plan_options# , a.sql_exec_id, a.sql_exec_start , a.plsql_entry_object_id, a.plsql_entry_subprogram_id ,
w
a.plsql_object_id, a.plsql_subprogram_id , a.qc_instance_id, a.qc_session_id, a.qc_session_serial# , a.event_id,
d
a.seq#, a.p1, a.p2, a.p3 , a.wait_time, a.time_waited , a.blocking_session, a.blocking_session_serial#,
u
a.blocking_inst_id , a.current_obj#, a.current_file#, a.current_block#, a.current_row# , a.top_level_call#,
3
a.consumer_group_id, a.xid, a.remote_instance#, a.time_model , a.service_hash , substrb(a.program, 1, 64) ,
9
a.module, a.action, a.client_id, a.machine, a.port, a.ecid , decode(a.tm_delta_time, 0, to_number(null),
8
a.tm_delta_time), decode(a.tm_delta_time, 0, to_number(null), a.tm_delta_cpu_time), decode(a.tm_delta_time, 0,
w
to_number(null), a.tm_delta_db_time), decode(a.delta_time, 0, to_number(null), a.delta_time),
t
decode(a.delta_time, 0, to_number(null), decode(a.delta_read_io_requests, 0, to_number(null),
a.delta_read_io_requests)), decode(a.delta_time, 0, to_number(null), decode(a.delta_write_io_requests, 0,
to_number(null), a.delta_write_io_requests)), decode(a.delta_time, 0, to_number(null), dec
ode(a.delta_read_io_bytes, 0, to_number(null), a.delta_read_io_bytes)), decode(a.delta_time, 0, to_number(null),
decode(a.delta_write_io_bytes, 0, to_number(null), a.delta_write_io_bytes)), decode(a.delta_time, 0,
to_number(null), decode(a.delta_interconnect_io_bytes, 0, to_number(null), a.delta_interconnect_io_bytes)),
decode(a.pga_allocated, 0, to_number(null), a.pga_allocated), decode(a.pga_allocated, 0, to_number(null),
decode(a.temp_space_allocated, 0, to_number(null), a.temp_space_allocated)) FROM x$ash a, (SELECT
h.sample_addr, h.sample_id FROM x$kewash h WHERE ( (h.sample_id >= :begin_flushing) and (h.sample_id <
:latest_sample_id) ) and (h.is_awr_sample = 'Y') ) shdr WHERE shdr.sample_addr = a.sample_addr and
shdr.sample_id = a.sample_id and a.need_awr_sample = 'Y')
g
y
b
q
insert into wrh$_tempstatxs (snap_id, dbid, instance_number, file#, creation_change#, phyrds, phywrts,
f
singleblkrds, readtim, writetim, singleblkrdtim, phyblkrd, phyblkwrt, wait_count, time) select :snap_id, :dbid,
a
:instance_number, tf.tfnum, to_number(tf.tfcrc_scn) creation_change#, ts.kcftiopyr, ts.kcftiopyw, ts.kcftiosbr,
1
floor(ts.kcftioprt / 10000), floor(ts.kcftiopwt / 10000), floor(ts.kcftiosbt / 10000), ts.kcftiopbr, ts.kcftiopbw, fw.count,
y
fw.time from x$kcftio ts, x$kcctf tf, x$kcbfwait fw, x$kccfn fn where fn.fnfno = ts.kcftiofno and fn.fntyp = 7 and tf.tfdup
3
!= 0 and tf.tfnum = ts.kcftiofno and fw.indx+1 = (ts.kcftiofno + :db_files)
u
5
d
b

Back to SQL Statistics


Back to Top
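The SQL sections of an AWR report identify each captured statement by its SQL Id. If you want the full text of one of those statements later (for example after the report has been trimmed for readability), it can be pulled straight out of the AWR repository. A minimal sketch, assuming the statement is still retained in AWR; &sql_id is just a SQL*Plus substitution placeholder:

set long 100000
select sql_id, sql_text
from   dba_hist_sqltext
where  sql_id = '&sql_id';   -- supply the SQL Id shown in the report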

Instance Activity Statistics


Instance Activity Stats
Instance Activity Stats - Absolute Values
Instance Activity Stats - Thread Activity
Back to Top

Instance Activity Stats

Ordered by statistic name

Statistic Total per Second per Trans


Batched IO (bound) vector count 0 0.00 0.00
Batched IO (full) vector count 0 0.00 0.00
Batched IO block miss count 0 0.00 0.00
Batched IO buffer defrag count 0 0.00 0.00

Batched IO double miss count 0 0.00 0.00
Batched IO same unit count 0 0.00 0.00
Batched IO vector block count 0 0.00 0.00
Batched IO vector read count 0 0.00 0.00
Block Cleanout Optim referenced 5 0.00 5.00
CCursor + sql area evicted 0 0.00 0.00
CPU used by this session 293 0.27 293.00
CPU used when call started 340 0.31 340.00
CR blocks created 0 0.00 0.00
Commit SCN cached 5 0.00 5.00
DB time 4 0.00 4.00
DBWR checkpoint buffers written 385 0.35 385.00
DBWR checkpoints 0 0.00 0.00
DBWR transaction table writes 11 0.01 11.00
DBWR undo block writes 75 0.07 75.00
HSC Heap Segment Block Changes 986 0.90 986.00
Heap Segment Array Inserts 83 0.08 83.00
Heap Segment Array Updates 5 0.00 5.00
IMU pool not allocated 55 0.05 55.00
IMU- failed to get a private strand 55 0.05 55.00
SQL*Net roundtrips to/from client 30 0.03 30.00
active txn count during cleanout 34 0.03 34.00
background timeouts 4,447 4.06 4,447.00
buffer is not pinned count 7,146 6.52 7,146.00
buffer is pinned count 729 0.66 729.00
bytes received via SQL*Net from client 10,043 9.16 10,043.00
bytes sent via SQL*Net to client 7,787 7.10 7,787.00
calls to get snapshot scn: kcmgss 3,224 2.94 3,224.00
calls to kcmgas 396 0.36 396.00
calls to kcmgcs 444 0.40 444.00
cell physical IO interconnect bytes 45,182,976 41,209.96 45,182,976.00
change write time 14 0.01 14.00
cleanout - number of ktugct calls 34 0.03 34.00
cleanouts only - consistent read gets 0 0.00 0.00
cluster key scan block gets 535 0.49 535.00
cluster key scans 286 0.26 286.00
commit cleanout failures: callback failure 2 0.00 2.00
commit cleanouts 602 0.55 602.00
commit cleanouts successfully completed 600 0.55 600.00
commit txn count during cleanout 14 0.01 14.00
concurrency wait time 63 0.06 63.00
consistent changes 0 0.00 0.00
consistent gets 8,788 8.02 8,788.00
consistent gets - examination 4,252 3.88 4,252.00
consistent gets from cache 8,788 8.02 8,788.00
consistent gets from cache (fastpath) 4,242 3.87 4,242.00

cursor authentications 26 0.02 26.00
data blocks consistent reads - undo records applied 0 0.00 0.00
db block changes 3,970 3.62 3,970.00
db block gets 3,320 3.03 3,320.00
db block gets direct 5 0.00 5.00
db block gets from cache 3,315 3.02 3,315.00
db block gets from cache (fastpath) 582 0.53 582.00
deferred (CURRENT) block cleanout applications 306 0.28 306.00
enqueue conversions 53 0.05 53.00
enqueue releases 3,552 3.24 3,552.00
enqueue requests 3,551 3.24 3,551.00
enqueue timeouts 0 0.00 0.00
enqueue waits 0 0.00 0.00
execute count 3,137 2.86 3,137.00
file io service time 63,196 57.64 63,196.00
file io wait time 7,589,842 6,922.46 7,589,842.00
free buffer requested 546 0.50 546.00
heap block compress 3 0.00 3.00
immediate (CR) block cleanout applications 0 0.00 0.00
immediate (CURRENT) block cleanout applications 32 0.03 32.00
index crx upgrade (positioned) 0 0.00 0.00
index fast full scans (full) 1 0.00 1.00
index fetch by key 1,548 1.41 1,548.00
index scans kdiixs1 1,624 1.48 1,624.00
leaf node 90-10 splits 13 0.01 13.00
leaf node splits 16 0.01 16.00
lob reads 26 0.02 26.00
lob writes 5 0.00 5.00
lob writes unaligned 5 0.00 5.00
logons cumulative 4 0.00 4.00
max cf enq hold time 0 0.00 0.00
messages received 245 0.22 245.00
messages sent 245 0.22 245.00
min active SCN optimization applied on CR 26 0.02 26.00
no work - consistent read gets 3,920 3.58 3,920.00
non-idle wait count 2,676 2.44 2,676.00
non-idle wait time 1,231 1.12 1,231.00
opened cursors cumulative 2,804 2.56 2,804.00
parse count (failures) 0 0.00 0.00
parse count (hard) 146 0.13 146.00
parse count (total) 1,417 1.29 1,417.00
parse time cpu 36 0.03 36.00
parse time elapsed 63 0.06 63.00
physical read IO requests 408 0.37 408.00
physical read bytes 3,342,336 3,048.44 3,342,336.00
physical read total IO requests 1,890 1.72 1,890.00

physical read total bytes 27,590,656 25,164.57 27,590,656.00
physical read total multi block requests 0 0.00 0.00
physical reads 408 0.37 408.00
physical reads cache 408 0.37 408.00
physical reads cache prefetch 0 0.00 0.00
physical write IO requests 215 0.20 215.00
physical write bytes 3,194,880 2,913.95 3,194,880.00
physical write total IO requests 1,015 0.93 1,015.00
physical write total bytes 17,592,320 16,045.40 17,592,320.00
physical write total multi block requests 10 0.01 10.00
physical writes 390 0.36 390.00
physical writes direct 5 0.00 5.00
physical writes direct (lob) 5 0.00 5.00
physical writes direct temporary tablespace 0 0.00 0.00
physical writes from cache 385 0.35 385.00
physical writes non checkpoint 358 0.33 358.00
pinned cursors current 0 0.00 0.00
recovery blocks read 0 0.00 0.00
recursive calls 36,816 33.58 36,816.00
recursive cpu usage 373 0.34 373.00
redo blocks checksummed by FG (exclusive) 1,197 1.09 1,197.00
redo blocks read for recovery 0 0.00 0.00
redo blocks written 2,444 2.23 2,444.00
redo entries 2,287 2.09 2,287.00
redo k-bytes read for recovery 0 0.00 0.00
redo k-bytes read total 0 0.00 0.00
redo size 1,220,076 1,112.79 1,220,076.00
redo size for direct writes 41,140 37.52 41,140.00
redo synch time 15 0.01 15.00
redo synch writes 2 0.00 2.00
redo wastage 9,196 8.39 9,196.00
redo write time 171 0.16 171.00
redo writes 37 0.03 37.00
rollback changes - undo records applied 0 0.00 0.00
rollbacks only - consistent read gets 0 0.00 0.00
rows fetched via callback 398 0.36 398.00
session cursor cache hits 2,257 2.06 2,257.00
session logical reads 12,108 11.04 12,108.00
shared hash latch upgrades - no wait 337 0.31 337.00
sorts (memory) 532 0.49 532.00
sorts (rows) 9,515 8.68 9,515.00
sql area evicted 0 0.00 0.00
sql area purged 0 0.00 0.00
switch current to new buffer 6 0.01 6.00
table fetch by rowid 1,833 1.67 1,833.00
table fetch continued row 13 0.01 13.00

table scan blocks gotten 106 0.10 106.00
table scan rows gotten 551 0.50 551.00
table scans (short tables) 66 0.06 66.00
total cf enq hold time 10 0.01 10.00
total number of cf enq holders 47 0.04 47.00
total number of times SMON posted 0 0.00 0.00
undo change vector size 380,920 347.43 380,920.00
user I/O wait time 287 0.26 287.00
user calls 54 0.05 54.00
user commits 0 0.00 0.00
user rollbacks 0 0.00 0.00
workarea executions - optimal 346 0.32 346.00
Back to Instance Activity Statistics
Back to Top
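Each Total above is just the end-snapshot value minus the begin-snapshot value of a cumulative V$SYSSTAT counter. A rough sketch of the same calculation against the AWR base views, assuming the snap ids 15 and 16 from this report and using 'redo size' as the example statistic:

select e.stat_name,
       e.value - b.value as total_over_interval
from   dba_hist_sysstat b,
       dba_hist_sysstat e
where  b.snap_id = 15
and    e.snap_id = 16
and    b.dbid = e.dbid
and    b.instance_number = e.instance_number
and    b.stat_id = e.stat_id
and    e.stat_name = 'redo size';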

Instance Activity Stats - Absolute Values

Statistics with absolute values (should not be diffed)

Statistic Begin Value End Value


session uga memory max 27,766,192 34,957,216
session pga memory 117,954,772 116,086,640
session pga memory max 145,807,572 138,172,272
session cursor cache count 924 847
session uga memory 30,071,685,600 30,071,077,024
opened cursors current 38 31
logons current 27 25
Back to Instance Activity Statistics
Back to Top

Instance Activity Stats - Thread Activity

Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic Total per Hour


log switches (derived) 0 0.00
Back to Instance Activity Statistics
Back to Top

IO Stats
IOStat by Function summary
IOStat by Filetype summary
IOStat by Function/Filetype summary
Tablespace IO Stats
File IO Stats
Back to Top

IOStat by Function summary

'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples
of 1000
ordered by (Data Read + Write) desc

Function Name  Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Waits: Count  Avg Tm(ms)
Others 23M 1.35 .020977 11M 0.66 .010032 2208 1.58
Buffer Cache Reads 3M 0.29 .002736 0M 0.00 0M 320 6.92
DBWR 0M 0.00 0M 3M 0.19 .002736 211 22.34
LGWR 0M 0.00 0M 2M 0.07 .001824 37 45.51
Direct Writes 0M 0.00 0M 0M 0.00 0M 4 0.00
TOTAL: 26M 1.64 .023713 16M 0.93 .014593 2780 4.35
Back to IO Stats
Back to Top
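The same breakdown is available live from V$IOSTAT_FUNCTION in 11g; the sketch below sums small and large I/O per database function, so the figures are cumulative since instance startup rather than per snapshot interval:

select function_name,
       small_read_megabytes + large_read_megabytes   as read_mb,
       small_write_megabytes + large_write_megabytes as write_mb,
       number_of_waits,
       wait_time
from   v$iostat_function
order  by read_mb + write_mb desc;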

IOStat by Filetype summary

'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples
of 1000
Small Read and Large Read are average service times, in milliseconds
Ordered by (Data Read + Write) desc

Filetype Name  Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Small Read  Large Read
Control File 23M 1.35 .020977 11M 0.66 .010032 0.02
Data File 3M 0.29 .002736 4M 0.20 .003648 6.92
Log File 0M 0.00 0M 2M 0.07 .001824
TOTAL: 26M 1.64 .023713 17M 0.93 .015505 1.24
Back to IO Stats
Back to Top

IOStat by Function/Filetype summary

'Data' columns suffixed with M,G,T,P are in multiples of 1024 other columns suffixed with K,M,G,T,P are in multiples
of 1000
Ordered by (Data Read + Write) desc for each function

Function/File Name  Reads: Data  Reqs per sec  Data per sec  Writes: Data  Reqs per sec  Data per sec  Waits: Count  Avg Tm(ms)
Others 23M 1.35 .020977 11M 0.66 .010032 1482 0.02
Others (Control File) 23M 1.35 .020977 11M 0.66 .010032 1478 0.02
Others (Data File) 0M 0.00 0M 0M 0.00 0M 4 0.00
Buffer Cache Reads 3M 0.27 .002736 0M 0.00 0M 295 7.33
Buffer Cache Reads (Data File) 3M 0.27 .002736 0M 0.00 0M 295 7.33
DBWR 0M 0.00 0M 3M 0.19 .002736 0
DBWR (Data File) 0M 0.00 0M 3M 0.19 .002736 0
LGWR 0M 0.00 0M 2M 0.07 .001824 0
LGWR (Log File) 0M 0.00 0M 2M 0.07 .001824 0
Direct Writes 0M 0.00 0M 0M 0.00 0M 0
Direct Writes (Data File) 0M 0.00 0M 0M 0.00 0M 0
TOTAL: 26M 1.62 .023713 16M 0.93 .014593 1777 1.23
Back to IO Stats
Back to Top

Tablespace IO Stats

ordered by IOs (Reads + Writes) desc

Tablespace Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms)
SYSAUX 453 0 6.60 1.00 174 0 0 0.00
SYSTEM 90 0 7.89 1.00 10 0 0 0.00
UNDOTBS1 1 0 10.00 1.00 31 0 0 0.00
Back to IO Stats
Back to Top

File IO Stats

ordered by Tablespace, File

Tablespace  Filename  Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
SYSAUX /u01/oradata/db11g/sysaux01.dbf 453 0 6.60 1.00 174 0 0 0.00
SYSTEM /u01/oradata/db11g/system01.dbf 90 0 7.89 1.00 10 0 0 0.00
UNDOTBS1 /u01/oradata/db11g/undotbs01.dbf 1 0 10.00 1.00 31 0 0 0.00
Back to IO Stats
Back to Top
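To see a similar per-file picture outside of AWR, V$FILESTAT joined to V$DATAFILE gives cumulative reads, writes and read time per datafile (values since instance startup, not per interval). A sketch; READTIM is in centiseconds, so it is multiplied by 10 to get milliseconds:

select d.name   as file_name,
       f.phyrds as physical_reads,
       f.phywrts as physical_writes,
       round(f.readtim * 10 / nullif(f.phyrds, 0), 2) as avg_read_ms
from   v$filestat f,
       v$datafile d
where  f.file# = d.file#
order  by f.phyrds desc;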

Buffer Pool Statistics


Buffer Pool Statistics
Checkpoint Activity
Back to Top

Buffer Pool Statistics

Standard block size Pools D: default, K: keep, R: recycle


Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

P  Number of Buffers  Pool Hit%  Buffer Gets  Physical Reads  Physical Writes  Free Buff Wait  Writ Comp Wait  Buffer Busy Waits
D 92,752 97 13,590 461 385 0 0 0
Back to Buffer Pool Statistics
Back to Top

Checkpoint Activity

Total Physical Writes: 390

MTTR Writes  Log Size Writes  Log Ckpt Writes  Other Settings Writes  Autotune Ckpt Writes  Thread Ckpt Writes
0 0 0 0 385 0
Back to Buffer Pool Statistics
Back to Top

Advisory Statistics

Instance Recovery Stats
MTTR Advisory
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Stats
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
Back to Top

Instance Recovery Stats

B: Begin Snapshot, E: End Snapshot

Targt MTTR (s)  Estd MTTR (s)  Recovery Estd IOs  Actual RedoBlks  Target RedoBlks  Log Sz RedoBlks  Log Ckpt Timeout RedoBlks  Log Ckpt Interval RedoBlks  Opt Log Sz(M)  Estd RAC Avail Time
B 0 24 156 300 6561 6561
E 0 24 167 0 6561 6561
Back to Advisory Statistics
Back to Top
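The B/E rows above are simply V$INSTANCE_RECOVERY sampled at each snapshot; a quick sketch of the current values for the same columns:

select recovery_estimated_ios,
       actual_redo_blks,
       target_redo_blks,
       target_mttr,
       estimated_mttr,
       optimal_logfile_size
from   v$instance_recovery;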

MTTR Advisory

No data exists for this section of the report.


Back to Advisory Statistics
Back to Top

Buffer Pool Advisory

Only rows with estimated physical reads >0 are displayed


ordered by Block Size, Buffers For Estimate

P  Size for Est (M)  Size Factor  Buffers (thousands)  Est Phys Read Factor  Estimated Phys Reads (thousands)  Est Phys Read Time  Est %DBtime for Rds
D 72 0.10 9 1.00 5 1 63.00
D 144 0.19 18 1.00 5 1 63.00
D 216 0.29 27 1.00 5 1 63.00
D 288 0.39 36 1.00 5 1 63.00
D 360 0.48 45 1.00 5 1 63.00
D 432 0.58 54 1.00 5 1 63.00
D 504 0.67 62 1.00 5 1 63.00
D 576 0.77 71 1.00 5 1 63.00
D 648 0.87 80 1.00 5 1 63.00
D 720 0.96 89 1.00 5 1 63.00
D 748 1.00 93 1.00 5 1 63.00
D 792 1.06 98 1.00 5 1 63.00
D 864 1.16 107 1.00 5 1 63.00
D 936 1.25 116 1.00 5 1 63.00
D 1,008 1.35 125 1.00 5 1 63.00
D 1,080 1.44 134 1.00 5 1 63.00

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
D 1,152 1.54 143 1.00 5 1 63.00
D 1,224 1.64 152 1.00 5 1 63.00
D 1,296 1.73 161 1.00 5 1 63.00
D 1,368 1.83 170 1.00 5 1 63.00
D 1,440 1.93 179 1.00 5 1 63.00
Back to Advisory Statistics
Back to Top
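This advisory is built from V$DB_CACHE_ADVICE (the AWR flush SQL captured earlier in this report queries exactly that view). A minimal sketch for the DEFAULT pool with the standard 8K block size:

select size_for_estimate,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
and    block_size = 8192
order  by size_for_estimate;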

PGA Aggr Summary

PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written


100.00 17 0
Back to Advisory Statistics
Back to Top

PGA Aggr Target Stats

B: Begin Snap E: End Snap (rows identified with B or E contain data which is absolute, i.e. not diffed over the interval)
Auto PGA Target - actual workarea memory target
W/A PGA Used - amount of memory used for all Workareas (manual + auto)
%PGA W/A Mem - percentage of PGA memory allocated to workareas
%Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
%Man W/A Mem - percentage of workarea memory under manual control

PGA Aggr Auto PGA PGA Mem W/A PGA %PGA W/A %Auto W/A %Man W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
B 205 104 109.36 0.00 0.00 0.00 0.00 41,943
E 205 105 103.15 0.00 0.00 0.00 0.00 41,943
Back to Advisory Statistics
Back to Top

PGA Aggr Target Histogram

Optimal Executions are purely in-memory operations

Low Optimal High Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
2K 4K 346 346 0 0
64K 128K 2 2 0 0
512K 1024K 12 12 0 0
2M 4M 2 2 0 0
Back to Advisory Statistics
Back to Top

PGA Memory Advisory

When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc
Count is 0

PGA Target Size W/A MB Estd Extra W/A MB Read/ Estd PGA Estd PGA Estd
Est (MB) Factr Processed Written to Disk Cache Hit % Overalloc Count Time
26 0.13 36.17 2.99 92.00 2 58
51 0.25 36.17 2.99 92.00 2 58

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
102 0.50 36.17 0.00 100.00 0 54
154 0.75 36.17 0.00 100.00 0 54
205 1.00 36.17 0.00 100.00 0 54
246 1.20 36.17 0.00 100.00 0 54
287 1.40 36.17 0.00 100.00 0 54
328 1.60 36.17 0.00 100.00 0 54
369 1.80 36.17 0.00 100.00 0 54
410 2.00 36.17 0.00 100.00 0 54
614 3.00 36.17 0.00 100.00 0 54
819 4.00 36.17 0.00 100.00 0 54
1,229 6.00 36.17 0.00 100.00 0 54
1,638 8.00 36.17 0.00 100.00 0 54
Back to Advisory Statistics
Back to Top
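The same figures can be pulled from V$PGA_TARGET_ADVICE at any time; as the note above says, aim for a target where the estimated over-allocation count is 0. A sketch:

select round(pga_target_for_estimate / 1024 / 1024) as pga_target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order  by pga_target_for_estimate;
-- adjust if needed with: alter system set pga_aggregate_target = <new value>;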

Shared Pool Advisory

SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical
number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.

Shared Pool Size(M)  SP Size Factr  Est LC Size (M)  Est LC Mem Obj  Est LC Time Saved (s)  Est LC Time Saved Factr  Est LC Load Time (s)  Est LC Load Time Factr  Est LC Mem Obj Hits (K)
92 0.35 12 1,016 292 1.00 40 1.00 4
120 0.46 28 2,325 292 1.00 40 1.00 10
148 0.57 28 2,325 292 1.00 40 1.00 10
176 0.68 28 2,325 292 1.00 40 1.00 10
204 0.78 28 2,325 292 1.00 40 1.00 10
232 0.89 28 2,325 292 1.00 40 1.00 10
260 1.00 28 2,325 292 1.00 40 1.00 10
288 1.11 28 2,325 292 1.00 40 1.00 10
316 1.22 28 2,325 292 1.00 40 1.00 10
344 1.32 28 2,325 292 1.00 40 1.00 10
372 1.43 28 2,325 292 1.00 40 1.00 10
400 1.54 28 2,325 292 1.00 40 1.00 10
428 1.65 28 2,325 292 1.00 40 1.00 10
456 1.75 28 2,325 292 1.00 40 1.00 10
484 1.86 28 2,325 292 1.00 40 1.00 10
512 1.97 28 2,325 292 1.00 40 1.00 10
540 2.08 28 2,325 292 1.00 40 1.00 10
Back to Advisory Statistics
Back to Top

SGA Target Advisory

SGA Target Size (M) SGA Size Factor Est DB Time (s) Est Physical Reads
256 0.25 104 4,707

512 0.50 104 4,707
768 0.75 104 4,707
1,024 1.00 104 4,707
1,280 1.25 104 4,707
1,536 1.50 104 4,707
1,792 1.75 106 4,707
2,048 2.00 106 4,707
Back to Advisory Statistics
Back to Top
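This table is a straight copy of V$SGA_TARGET_ADVICE at snapshot time (the wrh$_sga_target_advice insert shown earlier in this report does exactly that). To look at it live:

select sga_size,
       sga_size_factor,
       estd_db_time,
       estd_physical_reads
from   v$sga_target_advice
order  by sga_size;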

Streams Pool Advisory

No data exists for this section of the report.


Back to Advisory Statistics
Back to Top

Java Pool Advisory

No data exists for this section of the report.


Back to Advisory Statistics
Back to Top

Wait Statistics
Buffer Wait Statistics
Enqueue Activity
Back to Top

Buffer Wait Statistics

No data exists for this section of the report.


Back to Wait Statistics
Back to Top

Enqueue Activity

No data exists for this section of the report.


Back to Wait Statistics
Back to Top

Undo Statistics
Undo Segment Summary
Undo Segment Stats
Back to Top

Undo Segment Summary

Min/Max TR (mins) - Min and Max Tuned Retention (minutes)


STO - Snapshot Too Old count, OOS - Out of Space count
Undo segment block stats:
uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
eS - expired Stolen, eR - expired Released, eU - expired reUsed

Undo TS#  Num Undo Blocks (K)  Number of Transactions  Max Qry Len (s)  Max Tx Concurcy  Min/Max TR (mins)  STO/OOS  uS/uR/uU/eS/eR/eU
2 0.06 372 923 3 496.7/496.7 0/0 0/0/0/0/0/0
Back to Undo Statistics
Back to Top

Undo Segment Stats

Most recent 35 Undostat rows, ordered by Time desc

End Time  Num Undo Blocks  Number of Transactions  Max Qry Len (s)  Max Tx Concy  Tun Ret (mins)  STO/OOS  uS/uR/uU/eS/eR/eU
16-May 11:29 57 372 923 3 497 0/0 0/0/0/0/0/0
Back to Undo Statistics
Back to Top
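These undo rows are 10-minute buckets taken from V$UNDOSTAT; a sketch to eyeball the most recent buckets yourself:

select begin_time,
       end_time,
       undoblks,
       txncount,
       maxquerylen,
       maxconcurrency,
       tuned_undoretention,
       ssolderrcnt,
       nospaceerrcnt
from   v$undostat
order  by begin_time desc;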

Latch Statistics
Latch Activity
Latch Sleep Breakdown
Latch Miss Sources
Mutex Sleep Summary
Parent Latch Statistics
Child Latch Statistics
Back to Top

Latch Activity

"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for willing-to-wait latch get requests
"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
"Pct Misses" for both should be very close to 0.0

Latch Name  Get Requests  Pct Get Miss  Avg Slps/Miss  Wait Time (s)  NoWait Requests  Pct NoWait Miss
AQ deq hash table latch 1 0.00 0 0
ASM db client latch 726 0.00 0 0
ASM map operation hash table 1 0.00 0 0
ASM network state latch 18 0.00 0 0
AWR Alerted Metric Element list 7,544 0.00 0 0
Change Notification Hash table latch 364 0.00 0 0
Consistent RBA 37 0.00 0 0
DML lock allocation 884 0.00 0 0
Event Group Locks 5 0.00 0 0
FAL Queue 28 0.00 0 0
FOB s.o list latch 20 0.00 0 0
File State Object Pool Parent Latch 1 0.00 0 0
IPC stats buffer allocation latch 1 0.00 0 0
In memory undo latch 1 0.00 0 56 0.00
JS Sh mem access 1 0.00 0 0
JS queue access latch 1 0.00 0 0
JS queue state obj latch 1,944 0.00 0 0

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
JS slv state obj latch 1 0.00 0 0
KFC FX Hash Latch 1 0.00 0 0
KFC Hash Latch 1 0.00 0 0
KFCL LE Freelist 1 0.00 0 0
KGNFS-NFS:SHM structure 1 0.00 0 0
KGNFS-NFS:SVR LIST 1 0.00 0 0
KJC message pool free list 1 0.00 0 0
KJCT flow control latch 1 0.00 0 0
KMG MMAN ready and startup request latch 364 0.00 0 0
KTF sga latch 4 0.00 0 352 0.00
Locator state objects pool parent latch 1 0.00 0 0
Lsod array latch 1 0.00 0 0
MQL Tracking Latch 0 0 21 0.00
Memory Management Latch 1 0.00 0 364 0.00
Memory Queue 1 0.00 0 0
Memory Queue Message Subscriber #1 1 0.00 0 0
Memory Queue Message Subscriber #2 1 0.00 0 0
Memory Queue Message Subscriber #3 1 0.00 0 0
Memory Queue Message Subscriber #4 1 0.00 0 0
Memory Queue Subscriber 1 0.00 0 0
MinActiveScn Latch 6 0.00 0 0
Mutex 1 0.00 0 0
Mutex Stats 1 0.00 0 0
OS process 43 0.00 0 0
OS process allocation 390 0.00 0 0
OS process: request allocation 10 0.00 0 0
PL/SQL warning settings 22 0.00 0 0
PX hash array latch 1 0.00 0 0
QMT 1 0.00 0 0
SGA blob parent 1 0.00 0 0
SGA bucket locks 1 0.00 0 0
SGA heap locks 1 0.00 0 0
SGA pool locks 1 0.00 0 0
SQL memory manager latch 1 0.00 0 363 0.00
SQL memory manager workarea list latch 24,437 0.00 0 0
Shared B-Tree 40 0.00 0 0
Streams Generic 1 0.00 0 0
Testing 1 0.00 0 0
Token Manager 1 0.00 0 0
WCR: sync 1 0.00 0 0
Write State Object Pool Parent Latch 1 0.00 0 0
X$KSFQP 1 0.00 0 0
XDB NFS Security Latch 1 0.00 0 0
XDB unused session pool 1 0.00 0 0
XDB used session pool 1 0.00 0 0
active checkpoint queue latch 575 0.00 0 0
active service list 983 0.00 0 958 0.00
archive control 19 0.00 0 0
archive process latch 84 0.00 0 0
begin backup scn array 4 0.00 0 0
buffer pool 1 0.00 0 0
business card 1 0.00 0 0
cache buffer handles 68 0.00 0 0
cache buffers chains 32,091 0.00 0 637 0.00
cache buffers lru chain 1,116 0.00 0 385 0.00
call allocation 508 0.00 0 0
cas latch 1 0.00 0 0
change notification client cache latch 1 0.00 0 0
channel handle pool latch 12 0.00 0 0
channel operations parent latch 5,129 0.00 0 0
checkpoint queue latch 7,014 0.00 0 363 0.00
client/application info 5 0.00 0 0
compile environment latch 4 0.00 0 0
cp cmon/server latch 1 0.00 0 0
cp pool latch 1 0.00 0 0
cp server hash latch 1 0.00 0 0
cp sga latch 18 0.00 0 0
cvmap freelist lock 1 0.00 0 0
deferred cleanup latch 18 0.00 0 0
dml lock allocation 18 0.00 0 0
done queue latch 1 0.00 0 0
dummy allocation 11 0.00 0 0
enqueue hash chains 7,292 0.00 0 0
enqueues 5,671 0.00 0 0
fifth spare latch 1 0.00 0 0
file cache latch 26 0.00 0 0
flashback copy 1 0.00 0 0
gc element 1 0.00 0 0
gcs commit scn state 1 0.00 0 0
gcs partitioned table hash 1 0.00 0 0
gcs pcm hashed value bucket hash 1 0.00 0 0
gcs resource freelist 1 0.00 0 0
gcs resource hash 1 0.00 0 0
gcs resource scan list 1 0.00 0 0
gcs shadows freelist 1 0.00 0 0

ges domain table 1 0.00 0 0
ges enqueue table freelist 1 0.00 0 0
ges group table 1 0.00 0 0
ges process hash list 1 0.00 0 0
ges process parent latch 1 0.00 0 0
ges resource hash list 1 0.00 0 0
ges resource scan list 1 0.00 0 0
ges resource table freelist 1 0.00 0 0
ges value block free list 1 0.00 0 0
global tx hash mapping 1 0.00 0 0
granule operation 1 0.00 0 0
hash table column usage latch 327 0.00 0 2,612 0.00
hash table modification latch 42 0.00 0 0
heartbeat check 1 0.00 0 0
intra txn parallel recovery 1 0.00 0 0
io pool granule metadata list 1 0.00 0 0
job workq parent latch 1 0.00 0 0
job_queue_processes parameter latch 59 0.00 0 0
k2q lock allocation 1 0.00 0 0
kdlx hb parent latch 1 0.00 0 0
kgb parent 1 0.00 0 0
kks stats 272 0.00 0 0
ksfv messages 1 0.00 0 0
kss move lock 9 0.00 0 0
ksuosstats global area 76 0.00 0 0
ksv allocation latch 35 0.00 0 0
ksv class latch 19 0.00 0 0
ksv msg queue latch 1 0.00 0 0
ksz_so allocation latch 10 0.00 0 0
ktm global data 5 0.00 0 0
kwqbsn:qsga 39 0.00 0 0
lgwr LWN SCN 381 0.00 0 0
list of block allocation 16 0.00 0 0
loader state object freelist 12 0.00 0 0
lob segment dispenser latch 1 0.00 0 0
lob segment hash table latch 5 0.00 0 0
lob segment query latch 1 0.00 0 0
lock DBA buffer during media recovery 1 0.00 0 0
logical standby cache 1 0.00 0 0
logminer context allocation 1 0.00 0 0
logminer work area 1 0.00 0 0
longop free list parent 1 0.00 0 0
managed standby latch 28 0.00 0 0
mapped buffers lru chain 1 0.00 0 0

message pool operations parent latch 1 0.00 0 0
messages 9,458 0.00 0 0
mostly latch-free SCN 381 0.00 0 0
msg queue latch 1 0.00 0 0
name-service namespace bucket 1 0.00 0 0
ncodef allocation latch 18 0.00 0 0
object queue header heap 661 0.00 0 0
object queue header operation 2,472 0.00 0 0
object stats modification 504 0.00 0 0
parallel query alloc buffer 145 0.00 0 0
parallel query stats 1 0.00 0 0
parameter list 40 0.00 0 0
parameter table management 10 0.00 0 0
peshm 1 0.00 0 0
pesom_free_list 1 0.00 0 0
pesom_hash_node 1 0.00 0 0
post/wait queue 7 0.00 0 3 0.00
process allocation 14 0.00 0 4 0.00
process group creation 10 0.00 0 0
process queue 1 0.00 0 0
process queue reference 1 0.00 0 0
qmn task queue latch 160 25.00 1.00 0 0
query server freelists 1 0.00 0 0
queued dump request 3 0.00 0 0
queuing load statistics 1 0.00 0 0
recovery domain hash list 1 0.00 0 0
redo allocation 419 0.00 0 2,309 0.00
redo copy 1 0.00 0 2,309 0.22
redo writing 1,450 0.00 0 0
resmgr:active threads 11 0.00 0 0
resmgr:actses change group 5 0.00 0 0
resmgr:actses change state 1 0.00 0 0
resmgr:free threads list 10 0.00 0 0
resmgr:plan CPU method 1 0.00 0 0
resmgr:resource group CPU method 1 0.00 0 0
resmgr:schema config 1 0.00 0 0
resmgr:session queuing 1 0.00 0 0
rm cas latch 1 0.00 0 0
row cache objects 20,417 0.00 0 0
second spare latch 1 0.00 0 0
sequence cache 5 0.00 0 0
session allocation 63 0.00 0 0
session idle bit 121 0.00 0 0
session queue latch 1 0.00 0 0
session state list latch 11 0.00 0 0

open in browser PRO version Are you a developer? Try out the HTML to PDF API pdfcrowd.com
session switching 49 0.00 0 0
session timer 382 0.00 0 0
shared pool 15,627 0.01 1.00 0 0
shared pool sim alloc 10 0.00 0 0
shared pool simulator 865 0.00 0 0
sim partition latch 1 0.00 0 0
simulator hash latch 637 0.00 0 0
simulator lru latch 207 0.00 0 356 0.00
sort extent pool 24 0.00 0 0
space background state object latch 4 0.00 0 0
space background task latch 811 0.00 0 730 0.00
state object free list 2 0.00 0 0
statistics aggregation 560 0.00 0 0
tablespace key chain 1 0.00 0 0
test excl. parent l0 1 0.00 0 0
test excl. parent2 l0 1 0.00 0 0
third spare latch 1 0.00 0 0
threshold alerts latch 27 0.00 0 0
transaction allocation 14 0.00 0 0
undo global data 1,318 0.00 0 0
virtual circuit buffers 1 0.00 0 0
virtual circuit holder 1 0.00 0 0
virtual circuit queues 1 0.00 0 0
Back to Latch Statistics
Back to Top
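Outside AWR, the same Pct Get Miss figure can be derived from V$LATCH (cumulative since startup). A sketch that lists only latches that have actually missed:

select name,
       gets,
       misses,
       round(misses * 100 / nullif(gets, 0), 2) as pct_get_miss,
       sleeps
from   v$latch
where  misses > 0
order  by misses desc;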

Latch Sleep Breakdown

ordered by misses desc

Latch Name Get Requests Misses Sleeps Spin Gets


qmn task queue latch 160 40 40 0
shared pool 15,627 1 1 0
Back to Latch Statistics
Back to Top

Latch Miss Sources

only latches with sleeps are shown


ordered by name, sleeps desc

Latch Name Where NoWait Misses Sleeps Waiter Sleeps


qmn task queue latch kwqmnmvtsks: delay to ready list 0 39 0
qmn task queue latch kwqmnaddtsk: add task 0 1 0
shared pool kghalo 0 1 0
Back to Latch Statistics
Back to Top

Mutex Sleep Summary

No data exists for this section of the report.
Back to Latch Statistics
Back to Top

Parent Latch Statistics

No data exists for this section of the report.


Back to Latch Statistics
Back to Top

Child Latch Statistics

No data exists for this section of the report.


Back to Latch Statistics
Back to Top

Segment Statistics
Segments by Logical Reads
Segments by Physical Reads
Segments by Physical Read Requests
Segments by UnOptimized Reads
Segments by Optimized Reads
Segments by Direct Physical Reads
Segments by Physical Writes
Segments by Physical Write Requests
Segments by Direct Physical Writes
Segments by Table Scans
Segments by DB Blocks Changes
Segments by Row Lock Waits
Segments by ITL Waits
Segments by Buffer Busy Waits
Back to Top

Segments by Logical Reads

Total Logical Reads: 12,108


Captured Segments account for 117.3% of Total

Owner Tablespace Name Object Name Subobject Name Obj. Type Logical Reads %Total
SYS SYSTEM I_HH_OBJ#_INTCOL# INDEX 2,896 23.92
SYS SYSTEM I_CCOL2 INDEX 1,520 12.55
SYS SYSTEM I_COL_USAGE$ INDEX 1,344 11.10
SYS SYSTEM CCOL$ TABLE 1,200 9.91
SYS SYSTEM HIST_HEAD$ TABLE 752 6.21
Back to Segment Statistics
Back to Top
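The segment-level sections are deltas of V$SEGMENT_STATISTICS between the two snapshots; the raw cumulative values can be checked directly, for example the top segments by logical reads:

select *
from  (select owner, object_name, subobject_name, object_type, value as logical_reads
       from   v$segment_statistics
       where  statistic_name = 'logical reads'
       order  by value desc)
where  rownum <= 10;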

Segments by Physical Reads

Total Physical Reads: 408


Captured Segments account for 67.2% of Total

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  Physical Reads  %Total
SYS SYSAUX WRH$_SQL_PLAN  TABLE 51 12.50
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 42 10.29
SYS SYSAUX WRH$_SQLTEXT  TABLE 21 5.15
SYS SYSTEM COL_USAGE$  TABLE 10 2.45
SYS SYSAUX WRH$_SQLSTAT_INDEX WRH$_SQLSTA_263292755_0 INDEX PARTITION 9 2.21
Back to Segment Statistics
Back to Top

Segments by Physical Read Requests

Total Physical Read Requests: 408


Captured Segments account for 67.2% of Total

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  Phys Read Requests  %Total
SYS SYSAUX WRH$_SQL_PLAN  TABLE 51 12.50
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 42 10.29
SYS SYSAUX WRH$_SQLTEXT  TABLE 21 5.15
SYS SYSTEM COL_USAGE$  TABLE 10 2.45
SYS SYSAUX WRH$_SQLSTAT_INDEX WRH$_SQLSTA_263292755_0 INDEX PARTITION 9 2.21
Back to Segment Statistics
Back to Top

Segments by UnOptimized Reads

Total UnOptimized Read Requests: 408


Captured Segments account for 67.2% of Total

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  UnOptimized Reads  %Total
SYS SYSAUX WRH$_SQL_PLAN  TABLE 51 12.50
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 42 10.29
SYS SYSAUX WRH$_SQLTEXT  TABLE 21 5.15
SYS SYSTEM COL_USAGE$  TABLE 10 2.45
SYS SYSAUX WRH$_SQLSTAT_INDEX WRH$_SQLSTA_263292755_0 INDEX PARTITION 9 2.21
Back to Segment Statistics
Back to Top

Segments by Optimized Reads

No data exists for this section of the report.


Back to Segment Statistics
Back to Top

Segments by Direct Physical Reads

No data exists for this section of the report.


Back to Segment Statistics
Back to Top

Segments by Physical Writes

Total Physical Writes: 390


Captured Segments account for 52.8% of Total

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  Physical Writes  %Total
SYS SYSAUX WRH$_SQL_PLAN  TABLE 18 4.62
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 17 4.36
SYS SYSAUX WRH$_SQLSTAT WRH$_SQLSTA_263292755_0 TABLE PARTITION 16 4.10
SYS SYSAUX WRH$_LATCH WRH$_LATCH_263292755_0 TABLE PARTITION 15 3.85
SYS SYSAUX WRH$_SYSSTAT_PK WRH$_SYSSTA_263292755_0 INDEX PARTITION 15 3.85
Back to Segment Statistics
Back to Top

Segments by Physical Write Requests

Total Physical Write Requestss: 215


Captured Segments account for 42.8% of Total

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  Phys Write Requests  %Total
SYS SYSAUX SMON_SCN_TIME  TABLE 7 3.26
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 6 2.79
SYS SYSAUX SYS_LOB0000006198C00038$$  LOB 5 2.33
SYS SYSAUX WRH$_LATCH WRH$_LATCH_263292755_0 TABLE PARTITION 4 1.86
SYS SYSAUX WRH$_PARAMETER WRH$_PARAME_263292755_0 TABLE PARTITION 4 1.86
Back to Segment Statistics
Back to Top

Segments by Direct Physical Writes

Total Direct Physical Writes: 5


Captured Segments account for 100.0% of Total

Owner Tablespace Name Object Name Subobject Name Obj. Type Direct Writes %Total
SYS SYSAUX SYS_LOB0000006198C00038$$ LOB 5 100.00
Back to Segment Statistics
Back to Top

Segments by Table Scans

Total Table Scans: 1


Captured Segments account for 100.0% of Total

Owner Tablespace Name Object Name Subobject Name Obj. Type Table Scans %Total
SYS SYSTEM I_OBJ2 INDEX 1 100.00

Back to Segment Statistics
Back to Top

Segments by DB Blocks Changes

% of Capture shows % of DB Block Changes for each top segment compared


with total DB Block Changes for all segments captured by the Snapshot

Owner  Tablespace Name  Object Name  Subobject Name  Obj. Type  DB Block Changes  % of Capture
SYS SYSTEM COL_USAGE$  TABLE 656 42.27
SYS SYSAUX WRH$_SQL_PLAN  TABLE 192 12.37
SYS SYSAUX WRH$_SQL_PLAN_PK  INDEX 192 12.37
SYS SYSTEM MON_MODS$  TABLE 80 5.15
SYS SYSAUX WRH$_SQLSTAT_INDEX WRH$_SQLSTA_263292755_0 INDEX PARTITION 64 4.12
Back to Segment Statistics
Back to Top

Segments by Row Lock Waits

No data exists for this section of the report.


Back to Segment Statistics
Back to Top

Segments by ITL Waits

No data exists for this section of the report.


Back to Segment Statistics
Back to Top

Segments by Buffer Busy Waits

No data exists for this section of the report.


Back to Segment Statistics
Back to Top

Dictionary Cache Stats

"Pct Misses" should be very low (< 2% in most cases)


"Final Usage" is the number of cache entries being used

Cache Get Requests Pct Miss Scan Reqs Pct Miss Mod Reqs Final Usage
dc_awr_control 21 0.00 0 2 1
dc_free_extents 3 0.00 0 0 1
dc_global_oids 53 0.00 0 0 22
dc_histogram_data 310 11.29 0 0 326
dc_histogram_defs 2,450 38.61 0 0 3,098
dc_objects 1,255 8.69 0 0 1,061
dc_rollback_segments 117 0.00 0 0 12
dc_segments 486 19.96 0 5 378
dc_sequences 1 100.00 0 1 6
dc_tablespaces 655 0.00 0 0 6

dc_users 504 0.00 0 0 38
global database name 731 0.00 0 0 1
outstanding_alerts 2 0.00 0 0 3

Back to Top
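The dictionary cache figures map to V$ROWCACHE; a sketch to compute the miss percentage per cache yourself (cumulative since startup):

select parameter,
       gets,
       getmisses,
       round(getmisses * 100 / nullif(gets, 0), 2) as pct_miss
from   v$rowcache
where  gets > 0
order  by pct_miss desc;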

Library Cache Activity

"Pct Misses" should be very low

Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invali- dations
BODY 2 50.00 13 7.69 0 0
CLUSTER 14 0.00 8 0.00 0 0
EDITION 2 0.00 2 0.00 0 0
INDEX 20 55.00 20 55.00 0 0
SCHEMA 208 0.00 0 0 0
SQL AREA 986 44.22 4,897 8.54 0 0
TABLE/PROCEDURE 744 15.86 1,297 29.76 1 0

Back to Top
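Likewise the library cache numbers come from V$LIBRARYCACHE; to watch the hit ratios, reloads and invalidations live:

select namespace,
       gets,
       round(gethitratio * 100, 2) as get_hit_pct,
       pins,
       round(pinhitratio * 100, 2) as pin_hit_pct,
       reloads,
       invalidations
from   v$librarycache;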

Memory Statistics
Memory Dynamic Components
Memory Resize Operations Summary
Memory Resize Ops
Process Memory Summary
SGA Memory Summary
SGA breakdown difference
Back to Top

Memory Dynamic Components

Min/Max sizes since instance startup


Oper Types/Modes: INItializing,GROw,SHRink,STAtic/IMMediate,DEFerred
ordered by Component

Component  Begin Snap Size (Mb)  Current Size (Mb)  Min Size (Mb)  Max Size (Mb)  Oper Count  Last Op Typ/Mod
ASM Buffer Cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT 16K buffer cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT 2K buffer cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT 32K buffer cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT 4K buffer cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT 8K buffer cache 0.00 0.00 0.00 0.00 0 STA/
DEFAULT buffer cache 748.00 748.00 748.00 748.00 0 INI/IMM
KEEP buffer cache 0.00 0.00 0.00 0.00 0 STA/
PGA Target 208.00 208.00 208.00 208.00 0 STA/
RECYCLE buffer cache 0.00 0.00 0.00 0.00 0 STA/
SGA Target 1,024.00 1,024.00 1,024.00 1,024.00 0 STA/
Shared IO Pool 0.00 0.00 0.00 0.00 0 STA/
java pool 4.00 4.00 4.00 4.00 0 STA/
large pool 4.00 4.00 0.00 4.00 0 GRO/IMM
shared pool 260.00 260.00 260.00 260.00 0 STA/
streams pool 0.00 0.00 0.00 0.00 0 STA/
Back to Memory Statistics
Back to Top

Memory Resize Operations Summary

No data exists for this section of the report.


Back to Memory Statistics
Back to Top

Memory Resize Ops

No data exists for this section of the report.


Back to Memory Statistics
Back to Top

Process Memory Summary

B: Begin Snap E: End Snap


All rows below contain absolute values (i.e. not diffed over the interval)
Max Alloc is Maximum PGA Allocation size at snapshot time
Hist Max Alloc is the Historical Max Allocation for still-connected processes
ordered by Begin/End snapshot, Alloc (MB) desc

Category  Alloc (MB)  Used (MB)  Avg Alloc (MB)  Std Dev Alloc (MB)  Max Alloc (MB)  Hist Max Alloc (MB)  Num Proc  Num Alloc
B Other 98.87 3.41 6.04 28 28 29 29
Freeable 9.63 0.00 1.38 1.16 4 7 7
SQL 0.59 0.25 0.05 0.05 0 2 12 10
PL/SQL 0.28 0.22 0.01 0.03 0 0 27 27
E Other 96.53 3.58 6.23 28 28 27 27
Freeable 5.75 0.00 1.15 0.80 2 5 5
SQL 0.59 0.53 0.06 0.16 1 2 10 7
PL/SQL 0.27 0.22 0.01 0.03 0 0 25 25
Back to Memory Statistics
Back to Top

SGA Memory Summary

SGA regions Begin Size (Bytes) End Size (Bytes) (if different)
Database Buffers 784,334,848
Fixed Size 1,341,312
Redo Buffers 4,636,672

Variable Size 281,020,544
Back to Memory Statistics
Back to Top

SGA breakdown difference

ordered by Pool, Name


N/A value for Begin MB or End MB indicates the size of that Pool/Name was insignificant, or zero in that snapshot

Pool Name Begin MB End MB % Diff


java free memory 4.00 4.00 0.00
large PX msg pool 0.47 0.47 0.00
large free memory 3.53 3.53 0.00
shared CCUR 2.72 3.53 29.83
shared FileOpenBlock 3.79 3.79 0.00
shared KCB Table Scan Buffer 3.80 3.80 0.00
shared KGLH0 2.72 3.32 21.97
shared KGLS 4.80 6.56 36.73
shared KGLSG 3.52 3.52 0.00
shared KSFD SGA I/O b 3.79 3.79 0.00
shared PCUR 2.82
shared SQLA 8.04 10.63 32.29
shared db_block_hash_buckets 2.97 2.97 0.00
shared dbwriter coalesce buffer 3.79 3.79 0.00
shared event statistics per sess 2.64 2.64 0.00
shared free memory 164.89 157.49 -4.49
shared row cache 3.62 3.62 0.00
shared write state object 3.38 3.38 0.00
buffer_cache 748.00 748.00 0.00
fixed_sga 1.28 1.28 0.00
log_buffer 4.42 4.42 0.00
Back to Memory Statistics
Back to Top

Streams Statistics
Streams CPU/IO Usage
Streams Capture
Streams Capture Rate
Streams Apply
Streams Apply Rate
Buffered Queues
Buffered Queue Subscribers
Rule Set
Persistent Queues
Persistent Queues Rate
Persistent Queue Subscribers
Back to Top

Streams CPU/IO Usage

No data exists for this section of the report.

Back to Streams Statistics
Back to Top

Streams Capture

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Streams Capture Rate

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Streams Apply

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Streams Apply Rate

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Buffered Queues

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Buffered Queue Subscribers

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Rule Set

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Persistent Queues

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Persistent Queues Rate

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Persistent Queue Subscribers

No data exists for this section of the report.


Back to Streams Statistics
Back to Top

Resource Limit Stats

No data exists for this section of the report.

Back to Top

Shared Server Statistics


Shared Servers Activity
Shared Servers Rates
Shared Servers Utilization
Shared Servers Common Queue
Shared Servers Dispatchers
Back to Top

Shared Servers Activity

Values represent averages for all samples

Avg Total Avg Active Avg Total Avg Active Avg Total Avg Active
Connections Connections Shared Srvrs Shared Srvrs Dispatchers Dispatchers
0 0 1 0 1 0
Back to Shared Server Statistics
Back to Top

Shared Servers Rates

Common Queue   Disp Queue   Server     Server   Common Queue   Disp Queue   Server Total   Server
Per Sec        Per Sec      Msgs/Sec   KB/Sec   Total          Total        Msgs           Total(KB)
0              0            0          0.00     0              0            0              0
Back to Shared Server Statistics
Back to Top

Shared Servers Utilization

Statistics are combined for all servers


Incoming and Outgoing Net % are included in %Busy

Total Server Time (s) %Busy %Idle Incoming Net % Outgoing Net %
1,091 0.00 100.00 0.00 0.00
Back to Shared Server Statistics
Back to Top

Shared Servers Common Queue

No data exists for this section of the report.


Back to Shared Server Statistics
Back to Top

Shared Servers Dispatchers

Ordered by %Busy, descending


Total Queued, Total Queue Wait and Avg Queue Wait are for dispatcher queue
Name suffixes: "(N)" - dispatcher started between begin and end snapshots "(R)" - dispatcher re-started between
begin and end snapshots

Name   Avg Conns   Total Disp Time (s)   %Busy    %Idle   Total Queued   Total Queue Wait (s)   Avg Queue Wait (ms)
D000        0.00                 1,091    0.00   100.00              0                      0
Back to Shared Server Statistics
Back to Top

init.ora Parameters

if IP/Public/Source at End snap is different a '*' is displayed

Parameter Name Begin value End value (if different)


compatible 11.2.0
control_files /u01/oradata/db11g/cont1.ctl, /u01/oradata/db11g/cont2.ctl
db_block_size 8192
db_domain
db_name DB11G
diagnostic_dest /u01/app/oracle
dispatchers (PROTOCOL=TCP) (SERVICE=ORCLXDB)
open_cursors 300
processes 150
remote_login_passwordfile EXCLUSIVE
sga_target 1073741824
undo_management AUTO
undo_retention 900
undo_tablespace UNDOTBS1

Back to Top

Dynamic Remastering Stats

No data exists for this section of the report.

Back to Top
End of Report

Posted 13th October 2013 by Oracle

Performance Tuning

1> See the 11g features
2> Follow my website >> oraclept.blogspot.com

Concentrate on the highlighted topics below from the above website:

General:

Oracle Instance Tuning


Tkprof & SQLTrace
SQL Hints
AWR report Interpretation
Full Table scan & Index scan
What's New in Oracle Performance?

Part 1: Performance Planning


Steps in The Oracle Performance Improvement Method
A Sample Decision Process for Performance Conceptual Modeling
Top Ten Mistakes Found in Oracle Systems

Part 2: Optimizing Instance Performance

Configuring a Database for Performance


Important Initialization Parameters With Performance Impact
Configuring Undo Space
Sizing Redo Log Files
Creating Subsequent Tablespaces
Creating Permanent Tablespaces - Automatic Segment-Space Management
Creating Temporary Tablespaces
Creating and Maintaining Tables for Good Performance
Table Compression
Reclaiming Unused Space
Indexing Data
Specifying Memory for Sorting Data
Performance Considerations for Shared Servers
Identifying Contention Using the Dispatcher-Specific Views
Reducing Contention for Dispatcher Processes
Identifying Contention for Shared Servers
Automatic Performance Statistics
AWR
ASH Reports (active session history)
Automatic Performance Diagnostics
ADDM
Memory Configuration and Use
ASMM,
Buffer pool Hit ratios,
keep pool,
recycle pool,
cursor_space_for_time,
cursor_sharing,
sizing the shared pool,
using large pool
configuring automatic pga memory,
sizing the log buffer,
sizing the buffer cache
I/O Configuration and Design
OMF
choosing data block size
Understanding Operating System Resources
Finding System CPU Utilization
1 Checking Memory Management
2 Checking I/O Management
3 Checking Network Management

4 Checking Process Management
Instance Tuning Using Performance Views
Instance Tuning Steps
Interpreting Oracle Statistics
Wait Events Statistics --- buffer busy waits ,db file sequential read ,library cache lock
Idle Wait Events
Part 3: Optimizing SQL Statements
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/part4.htm#g996847

Chapter 11, "SQL Tuning Overview"


Chapter 12, "Automatic SQL Tuning"
Chapter 13, "The Query Optimizer"
Chapter 14, "Managing Optimizer Statistics"
Chapter 15, "Using Indexes and Clusters"
Chapter 16, "Using Optimizer Hints"
Chapter 17, "SQL Access Advisor"
Chapter 18, "Using Plan Stability"
Chapter 19, "Using EXPLAIN PLAN"
Chapter 20, "Using Application Tracing Tools"

11
Identifying High-Load SQL
Automatic SQL Tuning Features
Developing Efficient SQL Statements

12
SQL Tuning Advisor
SQL Tuning Sets
SQL Profiles

13 The Query Optimizer

Optimizer Operations
Choosing an Optimizer Goal
Enabling and Controlling Query Optimizer Features
Understanding the Query Optimizer
Understanding Access Paths for the Query Optimizer
Understanding Joins

14 Managing Optimizer Statistics

Understanding Statistics
Automatic Statistics Gathering - GATHER_STATS_JOB
Manual Statistics Gathering - DBMS_STATS Procedures
System Statistics
Managing Statistics
Viewing Statistics

15 Using Indexes and Clusters


15.2 Using Function-based Indexes for Performance
15.3 Using Partitioned Indexes for Performance
15.4 Using Index-Organized Tables for Performance
15.5 Using Bitmap Indexes for Performance
15.6 Using Bitmap Join Indexes for Performance
15.7 Using Domain Indexes for Performance
15.8 Using Clusters for Performance
15.9 Using Hash Clusters for Performance

16 Using Optimizer Hints


16 Using Optimizer Hints

Understanding Optimizer Hints


Specifying Hints
Using Hints with Views

17 SQL Access Advisor

Overview of the SQL Access Advisor in the DBMS_ADVISOR Package


Steps for Using the SQL Access Advisor
Tuning Materialized Views for Fast Refresh and Query Rewrite -------DBMS_ADVISOR.TUNE_MVIEW Procedure

18 Using Plan Stability

----------Storing Outlines
-------Using Plan Stability with Query Optimizer Upgrades ----Moving from RBO to the Query Optimizer

19 Using EXPLAIN PLAN

-------EXPLAIN PLAN Restrictions


------The PLAN_TABLE Output Table

20 Using Application Tracing Tools

--Using the trcsess Utility


--Using the SQL Trace Facility and TKPROF
Posted 12th June 2013 by Oracle

How to read AWR reports

The output of the AWR report contains a wealth of information that you can use to tune your database. The output of
the AWR report can be divided into the following sections:

Report Header

This section is self-explanatory: it provides the database name, ID, instance (if RAC), platform information and the snap interval (the database
workload time duration under review).
This report is for instance number 2 of my RAC environment. If you need to analyze a RAC environment, you need to run the report separately
for each instance in the RAC to see whether all the instances are balanced the way they should be.

DB Name DB Id Instance Inst num Startup Time Release RAC


TestRAC 3626203793 TestRac2 2 17-Aug-11 19:08 11.1.0.6.0 YES

Host Name Platform CPUs Cores Sockets Memory (GB)

TestRAC Linux 64-bit for AMD 8 8 2 31.44

Snap Id Snap Time Sessions Cursors/Session

Begin Snap: 28566 27-Sep-11 01:00:21 130 4.8


End Snap: 28567 27-Sep-11 02:00:43 135 4.5
Elapsed: 60.35 (mins)
DB Time: 15.07 (mins)

Begin End

Buffer Cache: 5,888M 5,888M Std Block Size: 8K


Shared Pool Size: 8,704M 8,704M Log Buffer: 138,328K

Load Profile
This section provides a snapshot of the database workload that occurred during the snapshot interval.

Per Second Per Transaction Per Exec Per Call


DB Time(s): 0.3 0.1 0.00 0.00
DB CPU(s): 0.3 0.1 0.00 0.00
Redo size: 48,933.6 19,916.2
Logical reads: 1,124.4 457.7
Block changes: 195.9 79.7
Physical reads: 80.5 32.8
Physical writes: 4.3 1.8
User calls: 141.4 57.6
Parses: 123.2 50.2
Hard parses: 2.2 0.9
W/A MB processed: 1,940,807.0 789,918.9
Logons: 4.3 1.7
Executes: 127.6 51.9
Rollbacks: 0.0 0.0
Transactions: 2.5

DB Time(s):
The amount of time Oracle has spent performing database user calls. Note that it does not include background processes.
DB CPU(s):
The amount of CPU time spent on user calls. As with DB Time, it does not include background processes. The underlying value is recorded in microseconds.
Redo size:
The amount of redo generated. For example, the table above shows that an average transaction generates about 19,000 bytes of redo, with around 48,000 bytes of redo generated per second.
Logical reads:
Consistent Gets + DB Block Gets = Logical Reads
Block Changes:
The number of blocks modified during the sample interval.
Physical reads:
The number of block requests that caused a physical I/O operation.
Physical writes:
The number of physical writes performed.
User calls:
The number of user calls generated.
Parses:
The total of all parses, both hard and soft.
Hard Parses:
The parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.
Soft Parses:
Soft parses are not listed but are derived by subtracting the hard parses from the total parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.
Sorts:
The number of sorts performed.
Logons:
The number of logons during the interval.
Executes:
The number of SQL executions.
Transactions:
The number of transactions per second.
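
The counters behind the Load Profile come from the cumulative system statistics that AWR snapshots capture. As a rough live check between snapshots, you can sample the same counters yourself; a minimal sketch against V$SYSSTAT (values are totals since instance startup, not per-interval deltas):

-- Sample the cumulative statistics that feed the AWR Load Profile
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'user calls', 'user commits', 'user rollbacks',
                'parse count (total)', 'parse count (hard)',
                'execute count', 'session logical reads', 'physical reads');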

---------------------------------------------------------------------------

Load Profile
The load profile provides an at-a-glance look at some specific operational statistics. You can compare these statistics
with a baseline snapshot report to determine if database activity is different. Values for these statistics are presented in
two formats. The first is the value per second (for example, how much redo was generated per second) and the second
is the value per transaction (for example, 1,024 bytes of redo were generated per transaction).
Statistics presented in the load profile include such things as:
Redo size - An indication of the amount of DML activity the database is experiencing.

Logical and physical reads - A measure of how many I/Os (physical and logical) the database is performing.

User calls - Indicates how many user calls have occurred during the snapshot period. This value can give you
some indication if usage has increased.

Parses and hard parses - Provides an indication of the efficiency of SQL re-usage.

Sorts - This number gives you an indication of how much sorting is occurring in the database.

Logons - Indicates how many logons occurred during the snapshot period.

Executes - Indicates how many SQL statements were executed during the snapshot period.

Transactions - Indicates how many transactions occurred during the snapshot period.
Additionally, the load profile section provides the percentage of blocks that were changed per read, the percentage of
recursive calls that occurred, the percentage of transactions that were rolled back and the number of rows sorted per
sort operation.

Instance Efficiency Percentages (Target 100%)

These statistics include several buffer related ratios including the buffer hit percentage and the library hit percentage.
Also, shared pool memory usage statistics are included in this section.

Instance efficiency should be close to 100 %

Buffer Nowait %: 99.99 Redo NoWait %: 100.00


Buffer Hit %: 93.06 In-memory Sort %: 100.00
Library Hit %: 98.67 Soft Parse %: 98.20
Execute to Parse %: 3.40 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 0.01 % Non-Parse CPU: 96.21

Execute to Parse % and Parse CPU to Parse Elapsd %:

If the values are low, as in the above case of 3.40 and 0.01, it means there could be a parsing problem. You may need to look
at bind variable issues or a shared pool sizing issue.

Redo NoWait%:

Usually this statistic is 99% or greater.

In-memory Sort %:
This tells you how efficient your sort_area_size, hash_area_size or pga_aggregate_target settings are. If the sort, hash and
PGA parameters are not adequately sized, your in-memory sort percentage will go down.

Soft parse %:
A soft parse rate of 98.20% means that about 1.80% (100 - soft parse %) of parses are hard parses. You might want to look
at your bind variable usage.

Latch Hit %:
Should be close to 100%.

% Non-Parse CPU:
Most of our statements were already parsed, so we were not doing a lot of re-parsing. Re-parsing is CPU intensive and should be
avoided.
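
If you want to track this ratio outside of an AWR report, the same figure can be derived from the cumulative parse counters; a minimal sketch, assuming access to V$SYSSTAT:

-- Soft parse % from cumulative counters (AWR computes the same ratio per snapshot interval)
SELECT ROUND(100 * (1 - hard.value / NULLIF(total.value, 0)), 2) AS soft_parse_pct
  FROM (SELECT value FROM v$sysstat WHERE name = 'parse count (hard)') hard,
       (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') total;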
-------------------------------------------------------------------------------------------------------------------------------

Shared Pool Statistics

Begin End
Memory Usage %: 73.86 75.42
% SQL with executions>1: 92.61 93.44
% Memory for SQL w/exec>1: 94.33 94.98

Memory Usage % is the shared pool usage. Here we have used 73.86 per cent of our shared pool, and of that almost 94
per cent is being re-used. If Memory Usage % is too high (around 90%, for example) it could mean that your shared pool is too small; if the
percentage is around 50%, it could mean that your shared pool is too large.

---------------------------------------------------------------------------------------------------------------------------------------
Top 5 Timed Foreground Events

This section provides insight into what events the Oracle database is spending most of its time on (see wait events).
Each wait event is listed, along with the number of waits, the time waited (in seconds), the average wait per event (in
milliseconds) and the associated wait class.

Event Waits Time(s) Avg wait (ms) % DB time Wait Class

DB CPU 1,019 112.73


log file sync 25,642 43 2 4.73 Commit
db file scattered read 3,064 40 13 4.43 User I/O
library cache pin 136,267 27 0 2.98 Concurrency
db file sequential read 7,608 24 3 2.71 User I/O

It is critical to look into this section. If you turn off the timed statistics parameter, the Time(s) column will not appear. Wait analysis should be
done with respect to Time(s): there could be millions of waits, but if they only amount to a second or so, who cares? Therefore,
time is a very important component.

There are several different types of waits, and you may see different waits on your AWR report. Let's discuss the most
common ones.

db file type waits:

db file sequential read:


This wait comes from the physical side of the database. It is related to memory starvation and non-selective index
use. A sequential read is an index read followed by a table block read, because the index lookup tells Oracle exactly which block to go
to.
db file scattered read:
Caused by full table scans, possibly because of insufficient indexes or the unavailability of up-to-date statistics.
direct path writes:
You will not see these unless you are doing append operations or data loads.
direct path reads:
Can happen if you are doing a lot of parallel query activity.
db file parallel writes / reads:
If you are doing a lot of partition activity, expect to see this wait event. It could be a table or an index partition.

db file single write:
If you see this event, you probably have a lot of data files in your database.
direct path read temp or direct path write temp:
This wait event shows temp file activity (sorts, hashes, temporary tables, bitmaps).
Check the PGA, sort area or hash area parameters; you might want to increase them.

buffer type waits

These show what is going on in your memory.

latch: cache buffers chains:
Check for hot objects.
free buffer waits:
Insufficient buffers, processes holding buffers too long, or the I/O subsystem is overloaded. Also check whether your DBWR writes are
getting clogged up.
buffer busy waits:
See what is causing them further along in the report; most of the time it is data block related.
gc buffer busy:
Seen in RAC environments; may be caused by not enough memory on your nodes or an overloaded interconnect.
Also look at the RAC-specific sections of the report.
latch: cache buffers lru chain:
Freelist issues, hot blocks.
latch: cache buffer handles:
Freelist issues, hot blocks.
no free buffers:
Insufficient buffers, DBWR contention.

Log Type Waits

log file parallel write - Look for log file contention
log buffer space - Look at increasing the log buffer size
log file switch (checkpoint incomplete) - May indicate excessive db files or a slow I/O subsystem
log file switch (archiving needed) - Indicates archive files are written too slowly
log file switch completion - May need more log files
log file sync - Could indicate excessive commits

GC Events

gc cr multi block request - Full table or index scans
gc current multi block request - Full table or index scans
gc cr block 2-way - Blocks are busy in another instance; check for block level contention or hot blocks
gc cr block 3-way - Blocks are busy in another instance; check for block level contention or hot blocks
gc cr block busy - Blocks are busy in another instance; check for block level contention or hot blocks
gc cr block congested - CR block congestion; check for hot blocks or a busy interconnect
gc cr block lost - Indicates interconnect issues and contention
gc current block 2-way - Blocks are busy in another instance; check for block level contention or hot blocks
gc current block 3-way - Blocks are busy in another instance; check for block level contention or hot blocks
gc current block busy - Block is already involved in a GC operation; shows hot blocks or congestion
gc current block congested - Current block congestion; check for hot blocks or a busy interconnect
gc current block lost - Indicates interconnect issues and contention

Undo Events

undo segment extension - If excessive, tune undo
latch: In memory undo latch - If excessive it could be a bug; check for your version, you may have to turn off in-memory undo
wait for a undo record - Usually only seen during recovery of large transactions; look at turning off parallel undo recovery.

What Next?

Determine the wait events of concern

Drill down to specific sections of the report for deeper analysis
Use custom scripts, ADDM and ASH to investigate issues (see the sketch below)
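
As a starting point for that drill-down, the ASH data kept in the workload repository can be queried directly; a minimal sketch against DBA_HIST_ACTIVE_SESS_HISTORY (adjust the time window to your snapshot period):

-- Top wait events from ASH samples over roughly the last hour
SELECT event, COUNT(*) AS samples
  FROM dba_hist_active_sess_history
 WHERE sample_time > SYSDATE - 1/24
   AND session_state = 'WAITING'
 GROUP BY event
 ORDER BY samples DESC;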

======================================================================
======================================================================

RAC Statistics
If you are running on a RAC cluster, then the AWRRPT.SQL report will provide various RAC statistics including
statistics on the number of RAC instances, as well as global cache and enqueue related performance statistics. Here is
an example of the RAC statistics part of the report:

RAC Statistics DB/Inst: A109/a1092 Snaps: 2009-2010

Begin End
----- -----
Number of Instances: 2 2

Global Cache Load Profile


~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Global Cache blocks received: 0.11 0.52
Global Cache blocks served: 0.14 0.68
GCS/GES messages received: 0.88 4.23
GCS/GES messages sent: 0.85 4.12
DBWR Fusion writes: 0.01 0.04
Estd Interconnect traffic (KB) 2.31

Global Cache Efficiency Percentages (Target local+remote 100%)


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 99.47
Buffer access - remote cache %: 0.53
Buffer access - disk %: 0.00

Global Cache and Enqueue Services - Workload Characteristics


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.0

Avg global cache cr block receive time (ms): 0.2


Avg global cache current block receive time (ms): 0.3

Avg global cache cr block build time (ms): 0.0


Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 1.8
Avg global cache cr block flush time (ms): 4.0

Avg global cache current block pin time (ms): 0.0


Avg global cache current block send time (ms): 0.1
Global cache log flushes for current blocks served %: 0.4
Avg global cache current block flush time (ms): 0.0

Global Cache and Enqueue Services - Messaging Statistics


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): ########
Avg message sent queue time on ksxp (ms): 0.1
Avg message received queue time (ms): 4.6
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0

% of direct sent messages: 45.26


% of indirect sent messages: 31.59
% of flow controlled messages: 23.15
-------------------------------------------------------------

Time Model Statistics


Oracle Database 10g time model related statistics are presented next. The time model allows you to see a summary of
where the database is spending its time. The report presents the various time related statistics (such as DB CPU) and
how much total time was spent in the mode of operation represented by each statistic. Here is an example of the time
model statistics report, where we see that we spent 3.5 seconds on DB CPU time, which was 64.4% of the total
DB time. Note that this is a two node RAC system, so the total percentage of overall time available is 200%, not 100%.

Time Model Statistics DB/Inst: A109/a1092 Snaps: 2009-2010


-> Total time in database user-calls (DB Time): 5.5s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name Time (s) % of DB Time


------------------------------------------ ------------------ ------------
sql execute elapsed time 4.5 82.8
DB CPU 3.5 64.4
connection management call elapsed time 0.1 1.6
parse time elapsed 0.1 1.3
PL/SQL execution elapsed time 0.0 .9
hard parse elapsed time 0.0 .3
sequence load elapsed time 0.0 .1
repeated bind elapsed time 0.0 .0
DB time 5.5 N/A
background elapsed time 33.0 N/A
background cpu time 9.7 N/A
-------------------------------------------------------------
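
Between snapshots, the same time model data can be read live; a minimal sketch against V$SYS_TIME_MODEL (values are cumulative since instance startup, stored in microseconds):

-- Where the instance has spent its time since startup
SELECT stat_name, ROUND(value / 1000000, 1) AS seconds
  FROM v$sys_time_model
 ORDER BY value DESC;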

Wait class and Wait Event Statistics


Closely associated with the time model section of the report are the wait class and wait event statistics sections. Within
Oracle, the duration of a large number of operations (e.g. Writing to disk or to the control file) is metered. These are
known as wait events, because each of these operations requires the system to wait for the event to complete. Thus,
the execution of some database operation (e.g. a SQL query) will have a number of wait events associated with it. We
can try to determine which wait events are causing us problems by looking at the wait classes and the wait event
reports generated from AWR.
Wait classes define "buckets" that allow for summation of various wait times. Each wait event is assigned to one of
these buckets (for example System I/O or User I/O). These buckets allow one to quickly determine which subsystem is
likely suspect in performance problems (e.g. the network, or the cluster). Here is an example of the wait class report
section:

Wait Class DB/Inst: A109/a1092 Snaps: 2009-2010
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc

                                         %Time    Total Wait   Avg wait    Waits
Wait Class                      Waits    -outs      Time (s)       (ms)     /txn
-------------------- ---------------- ------ ------------- ---------- --------
System I/O                      8,142       .0           25          3     10.9
Other                         439,596     99.6            3          0    589.3
User I/O                          112       .0            0          3      0.2
Cluster                           443       .0            0          0      0.6
Concurrency                       216       .0            0          0      0.3
Commit                             16       .0            0          2      0.0
Network                         3,526       .0            0          0      4.7
Application                        13       .0            0          0      0.0
-------------------------------------------------------------

In this report the System I/O wait class has the largest total wait time (25 seconds), with an average wait of 3
milliseconds.
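
The same breakdown is also available outside AWR; a minimal sketch against V$SYSTEM_WAIT_CLASS (cumulative since startup, time_waited in centiseconds):

-- Cumulative waits per wait class
SELECT wait_class, total_waits, time_waited
  FROM v$system_wait_class
 ORDER BY time_waited DESC;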
Wait events are normal occurrences, but if a particular sub-system is having a problem performing (e.g. the disk sub-
system) this fact will appear in the form of one or more wait events with an excessive duration. The wait event report
then provides some insight into the detailed wait events. Here is an example of the wait event report (we have
eliminated some of the bulk of this report, because it can get quite long). Note that this section is sorted by wait time
(listed in seconds).

                                                      %Time   Total Wait   Avg wait    Waits
Event                                   Waits         -outs     Time (s)       (ms)     /txn
---------------------------- ---------------- ------ ----------- ---------- --------
control file parallel write             1,220      .0          18         15      1.6
control file sequential read            6,508      .0           6          1      8.7
CGS wait for IPC msg                  422,253   100.0           1          0    566.0
change tracking file synchro               60      .0           1         13      0.1
db file parallel write                    291      .0           0          1      0.4
db file sequential read                    90      .0           0          4      0.1
reliable message                          136      .0           0          1      0.2
log file parallel write                   106      .0           0          2      0.1
lms flush message acks                      1      .0           0         60      0.0
gc current block 2-way                    200      .0           0          0      0.3
change tracking file synchro               59      .0           0          1      0.1

In this example our control file parallel write waits (which occur during writes to the control file) are taking up 18
seconds in total, with an average wait of 15 milliseconds per wait. Additionally, we can see that we have 1.6 waits per
transaction (or 15 ms * 1.6 per transaction = 24 ms).

Operating System Statistics


This part of the report provides some basic insight into OS performance, and OS configuration too. This report may
vary depending on the OS platform that your database is running on. Here is an example from a Linux system:

Statistic Total
-------------------------------- --------------------
BUSY_TIME 128,749
IDLE_TIME 1,314,287
IOWAIT_TIME 18,394
NICE_TIME 54
SYS_TIME 31,633
USER_TIME 96,586
LOAD 0
RSRC_MGR_CPU_WAIT_TIME 0
PHYSICAL_MEMORY_BYTES 3,349,528
NUM_CPUS 4

In this example output we have 4 CPUs on the box.
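
The same counters can be checked live; a minimal sketch against V$OSSTAT:

-- A few key OS-level counters exposed by the instance
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME',
                     'IOWAIT_TIME', 'PHYSICAL_MEMORY_BYTES');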

SQL In Need of Tuning
Next in the report we find several different reports that present SQL statements that might be improved by tuning.
There are a number of different reports that sort offending SQL statements by the following criteria:
Elapsed time

CPU time

Buffer gets

Physical reads

Executions

Parse calls

Sharable memory

Version count

Cluster wait time


While these reports might not help tune specific application problems, they can help you find more systemic SQL
problems that you might not find when tuning a specific application module. Here is an example of the Buffer gets
report:

                                       Gets              CPU      Elapsed
  Buffer Gets   Executions         per Exec  %Total   Time (s)   Time (s)   SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
        2,163            7            309.0     3.0      0.03       0.04   c7sn076yz7030
select smontabv.cnt, smontab.time_mp, smontab.scn, smontab.num_mappings, smontab.tim_scn_map,
smontab.orig_thread from smon_scn_time smontab, (select max(scn) scnmax, count(*)+sum(NVL2(TIM_SCN_MAP,NUM_MAPPINGS,
0)) cnt from smon_scn_time where thread=0) smontabv where smon

        1,442          721              2.0     2.0      0.05       0.05   6ssrk2dqj7jbx
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= next_date) and (next_date <= :2))
or ((last_date is null) and (next_date < :3))) and (field1 = :4 or (field1 = 0 and 'Y' = :5))
and (this_date is null) order by next_date, job

        1,348            1          1,348.0     1.9      0.04       0.04   bv1djzzmk9bv6
Module: TOAD 9.0.0.160
Select table_name from DBA_TABLES where owner = 'CDOL2_01' order by 1

        1,227            1          1,227.0     1.7      0.07       0.08   d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;

          896            4            224.0     1.2      0.03       0.03   6hszmvz1wjhbt
Module: TOAD 9.0.0.160
Select distinct Cons.constraint_name, cons.status, cons.table_name, cons.constraint_type, cons.last_change
from sys.user_constraints cons where 1=1 and cons.status='DISABLED'

In this report we find a SQL statement that seems to be churning through 309 buffers per execution. While the
execution times are not terrible we might want to look closer into the SQL statement and try to see if we could tune it (in
fact this is Oracle issued SQL that we would not tune anyway).
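
A comparable live view of the top buffer-get consumers can be pulled from the shared SQL area; a minimal sketch against V$SQL:

-- Top 10 statements by buffer gets currently in the library cache
SELECT *
  FROM (SELECT sql_id, executions, buffer_gets,
               ROUND(buffer_gets / NULLIF(executions, 0), 1) AS gets_per_exec
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;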

Instance Activity Stats


This section provides us with a number of various statistics (such as, how many DBWR Checkpoints occurred, or how
many consistent gets occurred during the snapshot). Here is a partial example of the report:

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
consistent changes                                9            0.0           0.0
consistent gets                              70,445           19.5          94.4
consistent gets - examination                 8,728            2.4          11.7
consistent gets direct                            0            0.0           0.0
consistent gets from cache                   70,445           19.5          94.4
cursor authentications                            2            0.0           0.0
data blocks consistent reads - u                  5            0.0           0.0
db block changes                              1,809            0.5           2.4
db block gets                                 2,197            0.6           3.0
db block gets direct                              0            0.0           0.0
db block gets from cache                      2,033            0.6           2.7

Tablespace and Data File IO Stats


The tablespace and data file IO stats report provides information on tablespace IO performance. From this report you
can determine if the tablespace datafiles are suffering from sub-standard performance in terms of IO response from the
disk sub-system. Here is a partial example of the tablespace report:

Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
SYSAUX
1 0 0.0 1.0 159 0 13 0.8
UNDOTBS2
1 0 10.0 1.0 98 0 0 0.0
SYSTEM
1 0 10.0 1.0 46 0 0 0.0
AUD
1 0 0.0 1.0 1 0 0 0.0
CDOL2_INDEX
1 0 10.0 1.0 1 0 0 0.0
CDOL_DATA
1 0 10.0 1.0 1 0 0 0.0
DBA_DEF
1 0 10.0 1.0 1 0 0 0.0
UNDOTBS1
1 0 10.0 1.0 1 0 0 0.0
USERS
1 0 10.0 1.0 1 0 0 0.0
USER_DEF
1 0 10.0 1.0 1 0 0 0.0

If the tablespace IO report seems to indicate that a tablespace has IO problems, we can then use the file IO stat report,
which allows us to drill into the datafiles of the tablespace in question and determine what the problem might be. Here is an
example of the File IO stat report:

Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
AUD                      +ASM01/a109/datafile/aud.296.604081931
             1       0    0.0     1.0            1        0          0    0.0
CDOL2_INDEX              +ASM01/a109/datafile/cdol2_index_001.dbf
             1       0   10.0     1.0            1        0          0    0.0
CDOL_DATA                +ASM01/a109/datafile/cdol_data_001.dbf
             1       0   10.0     1.0            1        0          0    0.0
DBA_DEF                  +ASM01/a109/datafile/dba_def.294.604081931
             1       0   10.0     1.0            1        0          0    0.0
SYSAUX                   +ASM01/a109/datafile/sysaux.299.604081927
             1       0    0.0     1.0          159        0         13    0.8
SYSTEM                   +ASM01/a109/datafile/system.301.604081919
             1       0   10.0     1.0           46        0          0    0.0
UNDOTBS1                 +ASM01/a109/datafile/undotbs1.300.604081925
             1       0   10.0     1.0            1        0          0    0.0
UNDOTBS2                 +ASM01/a109/datafile/undotbs2.292.604081931
             1       0   10.0     1.0           98        0          0    0.0
USERS                    +ASM01/a109/datafile/users.303.604081933
             1       0   10.0     1.0            1        0          0    0.0
USER_DEF                 +ASM01/a109/datafile/user_def.291.604081933
             1       0   10.0     1.0            1        0          0    0.0
-------------------------------------------------------------

Buffer Pool Statistics


The buffer pool statistics report follows. It provides a summary of the buffer pool configuration and usage statistics as
seen in this example:

P    Number of Buffers   Pool Hit%   Buffer Gets   Physical Reads   Physical Writes   Free Buff Wait   Writ Comp Wait   Buffer Busy Waits
D               64,548         100        72,465                0               355                0                0                  13
-------------------------------------------------------------

In this case, we have a database where all the buffer pool requests came out of the buffer pool and no physical reads
were required. We also see a few (probably very insignificant in our case) buffer busy waits.

Instance Recovery Stats


The instance recovery stats report provides information related to instance recovery. By analyzing this report, you can
determine roughly how long your database would have required to perform crash recovery during the reporting period.
Here is an example of this report:

-> B: Begin snapshot, E: End snapshot

  Targt   Estd                                        Log File    Log Ckpt    Log Ckpt
  MTTR    MTTR    Recovery      Actual      Target        Size     Timeout    Interval
   (s)     (s)    Estd IOs   Redo Blks   Redo Blks   Redo Blks   Redo Blks   Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- -----------
B    0      19       196         575         183       92160         183         N/A
E    0      19       186         258          96       92160          96         N/A
-------------------------------------------------------------

Buffer Pool Advisory


The buffer pool advisory report answers the question, how big should you make your database buffer cache. It
provides an extrapolation of the benefit or detriment that would result if you added or removed memory from the
database buffer cache. These estimates are based on the current size of the buffer cache and the number of logical
and physical IO's encountered during the reporting point. This report can be very helpful in "rightsizing" your buffer
cache. Here is an example of the output of this report:

                                                     Est Phys
     Size for    Size     Buffers for       Read         Estimated
P     Est (M)   Factor       Estimate     Factor    Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 48 .1 5,868 4.9 803,496
D 96 .2 11,736 4.0 669,078
D 144 .3 17,604 3.3 550,831
D 192 .4 23,472 2.8 462,645
D 240 .5 29,340 2.3 379,106
D 288 .5 35,208 1.8 305,342
D 336 .6 41,076 1.4 238,729
D 384 .7 46,944 1.2 200,012
D 432 .8 52,812 1.1 183,694
D 480 .9 58,680 1.0 172,961
D 528 1.0 64,548 1.0 165,649
D 576 1.1 70,416 1.0 161,771
D 624 1.2 76,284 1.0 159,728
D 672 1.3 82,152 1.0 158,502
D 720 1.4 88,020 1.0 157,723
D 768 1.5 93,888 0.9 157,124
D 816 1.5 99,756 0.9 156,874
D 864 1.6 105,624 0.9 156,525
D 912 1.7 111,492 0.9 156,393
D 960 1.8 117,360 0.9 155,388
-------------------------------------------------------------

In this example we currently have 528MB allocated to the buffer cache (represented by the size factor column with a value of
1.0). It appears that if we were to reduce that memory to half of its current size (freeing
the memory to the OS for other processes) we would incur roughly double the number of physical IO's
in the process.
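
The data behind this section can also be queried directly; a minimal sketch against V$DB_CACHE_ADVICE (the block_size predicate assumes the 8K default block size used elsewhere in this report):

-- Estimated physical reads at different buffer cache sizes
SELECT size_for_estimate, size_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192
 ORDER BY size_for_estimate;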

PGA Reports
The PGA reports provide some insight into the health of the PGA. The PGA Aggr Target Stats report provides
information on the configuration of the PGA Aggregate Target parameter during the reporting period.
The PGA Aggregate Target Histogram report provides information on the size of various operations (e.g. sorts). It will
indicate if PGA sort operations occurred completely in memory, or if some of those operations were written out to disk.
Finally the PGA Memory Advisor, much like the buffer pool advisory report, provides some insight into how to properly
size your PGA via the PGA_AGGREGATE_TARGET database parameter. The PGA Memory Advisor report is shown
here:

Estd Extra Estd PGA Estd PGA


PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
44 0.1 289,899.2 7,844.9 97.0 1,124
88 0.3 289,899.2 7,576.9 97.0 1,073
176 0.5 289,899.2 3.3 100.0 0
263 0.8 289,899.2 3.3 100.0 0
351 1.0 289,899.2 3.3 100.0 0
421 1.2 289,899.2 0.0 100.0 0
491 1.4 289,899.2 0.0 100.0 0
562 1.6 289,899.2 0.0 100.0 0
632 1.8 289,899.2 0.0 100.0 0
702 2.0 289,899.2 0.0 100.0 0
1,053 3.0 289,899.2 0.0 100.0 0
1,404 4.0 289,899.2 0.0 100.0 0
2,106 6.0 289,899.2 0.0 100.0 0
2,808 8.0 289,899.2 0.0 100.0 0
-------------------------------------------------------------

Shared Pool Advisory


The shared pool advisory report provides assistance in right sizing the Oracle shared pool. Much like the PGA Memory
Advisor or the Buffer Pool advisory report, it provides some insight into what would happen should you add or remove
memory from the shared pool. This can help you reclaim much needed memory if you have over allocated the shared
pool, and can significantly improve performance if you have not allocated enough memory to the shared pool. Here is

an example of the shared pool advisory report:

                                                 Est LC    Est LC    Est LC    Est LC
  Shared     SP    Est LC                  Time      Time      Load      Load     Est LC
    Pool   Size      Size     Est LC      Saved     Saved      Time      Time        Mem
 Size(M)  Factr       (M)    Mem Obj        (s)     Factr       (s)     Factr   Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
     192     .4       54      3,044    #######     .8  #######  382.1   22,444,274
     240     .5       92      5,495    #######     .9  #######  223.7   22,502,102
     288     .6      139      8,122    #######     .9   53,711  102.5   22,541,782
     336     .7      186     12,988    #######    1.0   17,597   33.6   22,562,084
     384     .8      233     17,422    #######    1.0    7,368   14.1   22,569,402
     432     .9      280     23,906    #######    1.0    3,553    6.8   22,571,902
     480    1.0      327     28,605    #######    1.0      524    1.0   22,573,396
     528    1.1      374     35,282    #######    1.0        1     .0   22,574,164
     576    1.2      421     40,835    #######    1.0        1     .0   22,574,675
     624    1.3      468     46,682    #######    1.0        1     .0   22,575,055
     672    1.4      515     52,252    #######    1.0        1     .0   22,575,256
     720    1.5      562     58,181    #######    1.0        1     .0   22,575,422
     768    1.6      609     64,380    #######    1.0        1     .0   22,575,545
     816    1.7      656     69,832    #######    1.0        1     .0   22,575,620
     864    1.8      703     75,168    #######    1.0        1     .0   22,575,668
     912    1.9      750     78,993    #######    1.0        1     .0   22,575,695
     960    2.0      797     82,209    #######    1.0        1     .0   22,575,719
-------------------------------------------------------------

SGA Target Advisory


The SGA target advisory report is somewhat of a summation of all the advisory reports previously presented in the
AWR report. It helps you determine the impact of changing the settings of the SGA target size in terms of overall
database performance. The report uses a value called DB Time as a measure of the increase or decrease in
performance relative to the memory change made. Also the report will summarize an estimate of physical reads
associated with the listed setting for the SGA. Here is an example of the SGA target advisory report:

SGA Target SGA Size Est DB Est Physical


Size (M) Factor Time (s) Reads
---------- ---------- ------------ ----------------
528 0.5 25,595 769,539
792 0.8 20,053 443,095
1,056 1.0 18,443 165,649
1,320 1.3 18,354 150,476
1,584 1.5 18,345 148,819
1,848 1.8 18,345 148,819
2,112 2.0 18,345 148,819

In this example, our SGA Target size is currently set at 1056MB. We can see from this report that if we increased the
SGA target size to 2112MB, we would see almost no performance improvement (about a 98 second improvement
overall). In this case, we may determine that adding so much memory to the database is not cost effective, and that the
memory can be better used elsewhere.
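
Outside the report, the same advisory is exposed through a view; a minimal sketch against V$SGA_TARGET_ADVICE:

-- Estimated DB time and physical reads at different SGA target sizes
SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
  FROM v$sga_target_advice
 ORDER BY sga_size;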

Memory Advisory
Memory advisory reports for the streams pool and the java pool also appear in the report (assuming you are using the
streams pool). These reports take on the same general format as the other memory advisor reports.

Buffer Wait Statistics


The buffer wait statistics report helps you drill down on specific buffer wait events, and where the waits are occurring. In
the following report we find that the 13 buffer busy waits we saw in the buffer pool statistics report earlier are attributed
to data block waits. We might then want to pursue tuning remedies to these waits if the waits are significant enough.
Here is an example of the buffer wait statistics report:

Class Waits Total Wait Time (s) Avg Time (ms)


------------------ ----------- ------------------- --------------
data block 13 0 1

Enqueue Activity
The Enqueue activity report provides information on enqueues (higher level Oracle locking) that occur. As with other
reports, if you see high levels of wait times in these reports, you might dig further into the nature of the enqueue and
determine the cause of the delays. Here is an example of this report section:

Enqueue Type (Request Reason)


-----------------------------------------------------------------------------
-
Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
PS-PX Process Reservation
386 358 28 116 0 .43
US-Undo Segment
276 276 0 228 0 .18
TT-Tablespace
90 90 0 42 0 .71
WF-AWR Flush
12 12 0 7 0 1.43
MW-MWIN Schedule
2 2 0 2 0 5.00
TA-Instance Undo
12 12 0 12 0 .00
UL-User-defined
7 7 0 7 0 .00
CF-Controlfile Transaction
5,737 5,737 0 5 0 .00

Undo Segment Summary


The undo segment summary report provides basic information on the performance of undo tablespaces.

Latch Activity
The latch activity report provides information on Oracle's low level locking mechanism called a latch. From this report
you can determine if Oracle is suffering from latching problems, and if so, which latches are causing the greatest
amount of contention on the system. Here is a partial example of the latch activity report (it is quite long):

                                           Pct      Avg    Wait                     Pct
                                    Get    Get     Slps    Time        NoWait    NoWait
Latch Name                     Requests   Miss    /Miss     (s)      Requests      Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation                      122    0.0     N/A       0             0       N/A
ASM map headers                      60    0.0     N/A       0             0       N/A
ASM map load waiting lis             11    0.0     N/A       0             0       N/A
ASM map operation freeli             30    0.0     N/A       0             0       N/A
ASM map operation hash t         45,056    0.0     N/A       0             0       N/A
ASM network background l          1,653    0.0     N/A       0             0       N/A
AWR Alerted Metric Eleme         14,330    0.0     N/A       0             0       N/A
Consistent RBA                      107    0.0     N/A       0             0       N/A
FAL request queue                    75    0.0     N/A       0             0       N/A
FAL subheap alocation                75    0.0     N/A       0             0       N/A
FIB s.o chain latch                  14    0.0     N/A       0             0       N/A
FOB s.o list latch                   93    0.0     N/A       0             0       N/A
JS broadcast add buf lat            826    0.0     N/A       0             0       N/A
JS broadcast drop buf la            826    0.0     N/A       0             0       N/A

In this example our database does not seem to be experiencing any major latch problems, as the wait times on the
latches are 0, and our get miss pct (Pct Get Miss) is 0 also.
There is also a latch sleep breakdown report which provides some additional detail if a latch is being constantly moved
into the sleep cycle, which can cause additional performance issues.
The latch miss sources report provides a list of latches that encountered sleep conditions. This report can be of further
assistance when trying to analyze which latches are causing problems with your database.
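
For a quick cross-check outside the report, latch contention can be sampled live; a minimal sketch against V$LATCH:

-- Latches that have seen misses, worst sleepers first
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE misses > 0
 ORDER BY sleeps DESC, misses DESC;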

Segments by Logical Reads and Segments by Physical Reads


The segments by logical reads and segments by physical reads reports provide information on the database segments
(tables, indexes) that are receiving the largest number of logical or physical reads. These reports can help you find
objects that are "hot" objects in the database. You may want to review the objects and determine why they are hot, and
if there are any tuning opportunities available on those objects (e.g. partitioning), or on SQL accessing those objects.
For example, if an object is showing up on the physical reads report, it may be that an index is needed on that object.
Here is an example of the segments by logical reads report:

Segments by Logical Reads DB/Inst: A109/a1092 Snaps: 2009-2010


-> Total Logical Reads: 72,642
-> Captured Segments account for 96.1% of Total

           Tablespace                        Subobject   Obj.        Logical
Owner      Name         Object Name         Name         Type          Reads   %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX       SYS_IOT_TOP_8813                INDEX          52,192    71.85
SYS        SYSTEM       SMON_SCN_TIME                   TABLE           4,704     6.48
SYS        SYSTEM       I_JOB_NEXT                      INDEX           2,432     3.35
SYS        SYSTEM       OBJ$                            TABLE           1,344     1.85
SYS        SYSTEM       TAB$                            TABLE           1,008     1.39
-------------------------------------------------------------

Additional Reports
Several segment related reports appear providing information on:
Segments with ITL waits

Segments with Row lock waits


Segments with buffer busy waits

Segments with global cache buffer waits

Segments with CR Blocks received

Segments with current blocks received


These reports help provide more detailed information on specific segments that might be experiencing performance
problems.
The dictionary cache and library cache statistics reports provide performance information on the various areas in the
data dictionary cache and the library cache.
The process memory summary, SGA memory summary, and the SGA breakdown difference reports provide summary
information on how memory allocated to the database is allocated amongst the various components. Other memory
summary reports may occur if you have certain optional components installed (such as streams).
The database parameter summary report provides a summary of the setting of all the database parameters during the
snapshot report. If the database parameters changed during the period of the report, then the old and new parameters
will appear on the report.

Posted 6th May 2013 by Oracle

AWR - Performance Tuning

The Automatic Workload Repository (AWR)


In Oracle Database 10g Oracle replaced statspack with the Automatic Workload Repository (AWR). The job of AWR is to collect database
statistics (by default every hour); this data is maintained for a week and then purged. You can then run reports against these statistics
to performance tune your database. Other Oracle features such as ADDM and the database advisors use the database statistics to
monitor and analyze the database looking for performance problems.
When you create an Oracle database, AWR is automatically installed and enabled. Statistics collection is automated, and the statistics
collected by AWR are stored in the database. In order to properly collect database statistics, the parameter STATISTICS_LEVEL should be
set to TYPICAL (the default) or ALL. If STATISTICS_LEVEL is set to BASIC then AWR statistics collection will be disabled. Oracle 11g retains eight days
of AWR snapshot information (as opposed to seven in Oracle 10g). As always, you can override the default. This value will only be set on
newly created databases; databases that are upgraded will keep the AWR retention value already set for them.
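
To confirm this prerequisite before relying on AWR data, a minimal SQL*Plus sketch (the ALTER SYSTEM assumes an spfile is in use and suitable privileges):

-- Check the current setting
SHOW PARAMETER statistics_level

-- Reset to the default if it has been lowered (assumes an spfile)
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;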
The Oracle database uses AWR for problem detection and analysis as well as for self-tuning. A number of different statistics are collected
by the AWR including wait events, time model statistics, active session history statistics, various system and session level statistics,
object usage statistics and information on the most resource intensive SQL statements. Other Oracle Database 10g features use the
AWR, including ADDM and the other advisors in Oracle Database 10g.
If you want to explore the AWR repository, feel free to do so. The AWR consists of a number of tables owned by the SYS schema and
typically stored in the SYSAUX tablespace (currently no method exists to move these objects to another tablespace). All AWR table names
start with the identifier WR. Following WR is a mnemonic that identifies the type designation of the table followed by a dollar sign ($). AWR
tables come with three different type designations:
Metadata (WRM$)

Historical Data (WRH$)

AWR tables related to advisor functions (WRI$)

Most of the AWR table names are pretty self explanatory such as WRM$_SNAPSHOT or WRH$_ACTIVE_SESSION_HISTORY. In some
Oracle technical documents you will see the AWR tables also referred to as the select workload repository (SWRF) tables.
Oracle Database 10g also offers several DBA tables that allow you to query the AWR repository. The tables all start with DBA_HIST,
followed by a name that describes the table. These include tables such as DBA_HIST_FILESTATS, DBA_HIST_DATAFILE or
DBA_HIST_SNAPSHOT.

Click the links below for more information on AWR:

Manually Managing the AWR

AWR Baselines

AWR Metrics

AWR Reporting

Oracle Time Model, Wait Classes, & Metrics

Oracle Active Session History

Managing AWR with Oracle Enterprise Manager

Manually Managing the AWR


While AWR is meant to be automatic, provisions for manual operations impacting the AWR are available. You can modify the snapshot
collection interval and retention criteria, create snapshots and remove snapshots from the AWR. These topics are described in more
detail below.

Manual Snapshot Collection and Retention


Modify the snapshot collection interval using the dbms_workload_repository package. The
procedure DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS is used in this example to modify the snapshot collection
so that it occurs every fifteen minutes and retention of snapshot data is fixed at 20160 minutes:

-- This causes the repository to refresh every 15 minutes


-- and retain all data for 2 weeks.
Exec dbms_workload_repository.modify_snapshot_settings -
(retention=>20160, interval=>15);

Setting the interval parameter to 0 will disable all statistics collection.


To view current retention and interval settings of the AWR use the DBA_HIST_WR_CONTROL view. Here is an example of using this view:

SELECT * FROM dba_hist_wr_control;

DBID SNAP_INTERVAL RETENTION


---------- -------------------- --------------------
2139184330 +00000 01:00:00.0 +00007 00:00:00.0

In this example, we see that the snapshot interval is every hour (the default), and the retention is set for 7 days.

Creating or Removing Snapshots


Use the DBMS_WORKLOAD_REPOSITORY package to create or remove snapshots.
The DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT procedure creates a manual snapshot in the AWR, as seen in this
example:

EXEC dbms_workload_repository.create_snapshot;

You can see what snapshots are currently in the AWR by using the DBA_HIST_SNAPSHOT view as seen in this example:

SELECT snap_id, begin_interval_time, end_interval_time


FROM dba_hist_snapshot
ORDER BY 1;

SNAP_ID END_INTERVAL_TIME
---------- -------------------------
1107 03-OCT-04 01.24.04.449 AM
1108 03-OCT-04 02.00.54.717 AM
1109 03-OCT-04 03.00.23.138 AM
1110 03-OCT-04 10.58.40.235 PM

Each snapshot is assigned a unique snapshot ID that is reflected in the SNAP_ID column. If you have two snapshots, the earlier
snapshot will always have a smaller SNAP_ID than the later snapshot. The END_INTERVAL_TIME column displays the time that the

actual snapshot was taken.
Sometimes you might want to drop snapshots manually. The DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE procedure
can be used to remove a range of snapshots from the AWR. This procedure takes two parameters, low_snap_id and high_snap_id, as
seen in this example:

EXEC dbms_workload_repository.drop_snapshot_range -
(low_snap_id=>1107, high_snap_id=>1108);

AWR Automated Snapshots


Oracle Database 10g uses a scheduled job, GATHER_STATS_JOB, to collect AWR statistics. This job is created, and enabled
automatically, when you create a new Oracle database under Oracle Database 10g. To see this job, use the DBA_SCHEDULER_JOBS
view as seen in this example:

SELECT a.job_name, a.enabled, c.window_name, c.schedule_name,


c.start_date, c.repeat_interval
FROM dba_scheduler_jobs a,
dba_scheduler_wingroup_members b,
dba_scheduler_windows c
WHERE job_name='GATHER_STATS_JOB'
And a.schedule_name=b.window_group_name
And b.window_name=c.window_name;

You can disable this job using the DBMS_SCHEDULER.DISABLE procedure as seen in this example:

Exec dbms_scheduler.disable('GATHER_STATS_JOB');

And you can enable the job using the DBMS_SCHEDULER.ENABLE procedure as seen in this example:

Exec dbms_scheduler.enable('GATHER_STATS_JOB');

================================================================================

AWR Baselines

It is frequently a good idea to create a baseline in the AWR. A baseline is defined by a range of snapshots that can
be used to compare to other pairs of snapshots. The Oracle database server will exempt the snapshots assigned
to a specific baseline from the automated purge routine. Thus, the main purpose of a baseline is to preserve
typical runtime statistics in the AWR repository, allowing you to run the AWR snapshot reports on the preserved
baseline snapshots at any time and compare them to recent snapshots contained in the AWR. This allows you to
compare current performance (and configuration) to established baseline performance, which can assist in
determining database performance problems.

Creating Baselines

You use the create_baseline procedure contained in the DBMS_WORKLOAD_REPOSITORY stored PL/SQL
package to create a baseline as seen in this example:

EXEC dbms_workload_repository.create_baseline -
(start_snap_id=>1109, end_snap_id=>1111, -
baseline_name=>'EOM Baseline');

Baselines can be seen using the DBA_HIST_BASELINE view as seen in this example:

SELECT baseline_id, baseline_name, start_snap_id, end_snap_id


FROM dba_hist_baseline;

BASELINE_ID BASELINE_NAME START_SNAP_ID END_SNAP_ID


----------- --------------- ------------- -----------
1 EOM Baseline 1109 1111

In this case, the column BASELINE_ID identifies each individual baseline that has been defined. The name
assigned to the baseline is listed, as is the beginning and ending snapshot ID's.

Note that if you are going to generate reports using these baselines, you will need to reference the start and end
snap_id's of the baselines. The AWR reports do not reference the baseline_id.

Removing Baselines

You can remove a baseline using the DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE procedure, as seen in this example
that drops the "EOM Baseline" baseline that we just created.

EXEC dbms_workload_repository.drop_baseline
(baseline_name=>'EOM Baseline', Cascade=>FALSE);

Note that the cascade parameter will cause all associated snapshots to be removed if it is set to TRUE; otherwise
the snapshots will be cleaned up automatically by the AWR automated processes.

==============================================================

AWR Metrics

AWR stores its data in a number of tables in the database. We can see the DBA_HIST views built on these tables by using the
following query:

select owner, view_name


from dba_views
where view_name like 'DBA\_HIST\_%' escape '\';

This query returns about 78 views (depending on the version of the database you are running).

The WRM$ tables (M for Metadata) store metadata information such as the database being examined and the
snapshots taken. The WRH$ tables (H for historical) hold the actual collected statistics. Oracle has built a
number of DBA_HIST_ views that are built from the WRM$ and WRH$ tables. An example of these views is
DBA_HIST_SYSMETRIC_SUMMARY which is built upon the table WRH$_SYSMETRIC_SUMMARY.

If you want to know more information about specific AWR metrics and what they represent, you can look at the
DBA_HIST_METRIC_NAME view which contains a wealth of AWR related information. Here is a description of this
table:

SQL> desc dba_hist_metric_name


Name Null? Type
----------------------------------- -------- ----------------
DBID NOT NULL NUMBER
GROUP_ID NOT NULL NUMBER
GROUP_NAME VARCHAR2(64)
METRIC_ID NOT NULL NUMBER
METRIC_NAME NOT NULL VARCHAR2(64)
METRIC_UNIT NOT NULL VARCHAR2(64)

So what is the DBA_HIST_METRIC_NAME view good for? A common question that gets asked when you are
doing a performance analysis using specific columns of an AWR table is "What is the unit of measure for this
given metric?". We can find the answer to this question from DBA_HIST_METRIC_NAME, as seen in this example:

select a.metric_name, a.begin_time, a.intsize, a.num_interval,


a.minval, a.maxval, a.average, a.standard_deviation sd, b.metric_unit
from dba_hist_sysmetric_summary a, dba_hist_metric_name b
where a.metric_id = 2075
and a.metric_id=b.metric_id;

METRIC_NAME
------------------
BEGIN_TIME
---------------------------------------------------------------
INTSIZE NUM_INTERVAL MINVAL MAXVAL AVERAGE SD
---------- ------------ ---------- ---------- ---------- ----------
METRIC_UNIT
----------------------------------------------------------------
CPU Usage Per Sec
08-JAN-07
59279 10 0 6.96823552 .726140102 2.19334148
CentiSeconds Per Second

CPU Usage Per Sec


08-JAN-07
59279 10 0 6.96823552 .726140102 2.19334148
CentiSeconds Per Second

AWR collects a number of different database statistics such as:

Active Session History


Memory pool performance
Datafile performance
Wait events
Shared pool performance
Instance and crash recovery
SQL Statement performance
System performance
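
For example, the sampled session history can be mined straight from these views; a minimal sketch that counts ASH samples per SQL statement between two illustrative snapshot IDs:

select sql_id, count(*) samples
from dba_hist_active_sess_history
where snap_id between 1976 and 1977
and sql_id is not null
group by sql_id
order by samples desc;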

===========================================================

AWR Reporting
ADDM uses AWR for its analysis, and ADDM also uses AWR to store the results of its analysis.

Along with ADDM, you can also put AWR to work by running one of several AWR reports. These reports are much like the statspack reports prior to Oracle Database 10g. These reports are available in the directory $ORACLE_HOME/rdbms/admin

AWRRPT.SQL Report
The awrrpt.sql report (also known as the Workload Repository Report) provides you with a differential report of two
AWR snapshots in your Oracle database. It is very similar in feel to the statspack reports that you may have used
in previous versions of Oracle.

When you start the awrrpt.sql report, you will be prompted for a beginning and ending snapshot ID. In addition,
you will need to indicate if you wish the output to be formatted in regular text or as text with HTML tags. If you used
the old statspack report, you know that if there were a lot of statspack snapshots, the list of snapshots to choose
from when you ran the report could get very long. Now the AWR report will prompt you for how many days from
today you wish to see in your list of available snapshots. For example, if you enter 2, then only snapshots from the
last two days will appear as candidates for the report. This can help reduce the screen clutter a great deal if you
are retaining a large number of snapshots.

Here is an example of how you execute an AWR report. We will discuss the output of this report below:

SQL> @awrrpt

Current Instance
~~~~~~~~~~~~~~~~

DB Id DB Name Inst Num Instance


----------- ------------ -------- ------------
1373267642 A109 2 a1092

Specify the Report Type


~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: text

Type Specified: text

Instances in this Workload Repository schema


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DB Id Inst Num DB Name Instance Host


------------ -------- ------------ ------------ ------------
* 1373267642 2 A109 a1092 chqpvul8148
1373267642 1 A109 a1091 chqpvul8147

Using 1373267642 for database Id


Using 2 for instance number

Specify the number of days of snapshots to choose from


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days: 1

Listing the last day's Completed Snapshots

Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
a1092 A109 1976 08 Jan 2007 00:00 1
1977 08 Jan 2007 01:00 1
1978 08 Jan 2007 02:00 1
1979 08 Jan 2007 03:00 1
1980 08 Jan 2007 04:00 1
1981 08 Jan 2007 05:00 1
1982 08 Jan 2007 06:00 1

Specify the Begin and End Snapshot Ids


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1976
Begin Snapshot Id specified: 1976

Enter value for end_snap: 1977


End Snapshot Id specified: 1977

Specify the Report Name


~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_2_1984_1994.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: /tmp/robert

The output of the AWR report contains a wealth of information that you can use to tune your database.

Posted 5th May 2013 by Oracle

MAY DBMS_XPLAN
4

DBMS_XPLAN
source: http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_xplan.htm

The DBMS_XPLAN package provides an easy way to display the output of the EXPLAIN PLAN command in several predefined formats. You can also use the DBMS_XPLAN package to display the plan of a statement stored in the Automatic Workload Repository (AWR) or stored in a SQL tuning set. It further provides a way to display the SQL execution plan and SQL execution runtime statistics for cached SQL cursors based on the information stored in the V$SQL_PLAN and V$SQL_PLAN_STATISTICS_ALL fixed views.

Overview
The DBMS_XPLAN package supplies five table functions:
DISPLAY - to format and display the contents of a plan table.
DISPLAY_AWR - to format and display the contents of the execution plan of a stored SQL statement in the AWR.
DISPLAY_CURSOR - to format and display the contents of the execution plan of any loaded cursor.
DISPLAY_SQL_PLAN_BASELINE - to display one or more execution plans for the SQL statement identified by SQL
handle
DISPLAY_SQLSET - to format and display the contents of the execution plan of statements stored in a SQL tuning set.

The table function DISPLAY_CURSOR requires the user to have SELECT privileges on the following fixed views: V$SQL_PLAN, V$SESSION and V$SQL_PLAN_STATISTICS_ALL.
Using the DISPLAY_AWR function requires the user to have SELECT privileges on DBA_HIST_SQL_PLAN, DBA_HIST_SQLTEXT, and V$DATABASE.
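
A minimal sketch of the grants a DBA might issue for a user (SCOTT is only a placeholder); the V$ views are granted through their underlying V_$ objects:

SQL> CONN / AS SYSDBA
SQL> GRANT SELECT ON v_$sql_plan TO scott;
SQL> GRANT SELECT ON v_$session TO scott;
SQL> GRANT SELECT ON v_$sql_plan_statistics_all TO scott;
SQL> GRANT SELECT ON v_$database TO scott;
SQL> GRANT SELECT ON dba_hist_sql_plan TO scott;
SQL> GRANT SELECT ON dba_hist_sqltext TO scott;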

Using DBMS_XPLAN to obtain the EXPLAIN PLAN of a SQL Statement

Very often we run AWR, ASH and ADDM reports, which highlight the top SQL statements by disk reads, CPU usage and elapsed time. But an important piece of information is missing, which is the explain plan.

Using GUI tools like Enterprise Manager will enable us to drill down to the explain plan from an individual SQL statement, but how do we do it from the command line?

The answer is simply to use DBMS_XPLAN.DISPLAY_AWR and provide it as a parameter the SQL_ID in question (which can be picked up from the AWR or ASH report).

For example in the ASH report we see this section related to the Top SQL

Top SQL with Top Events DB/Inst: FILESDB/filesdb (Jul 19 13:23 to 13:38)

Sampled #
SQL ID Planhash of Executions % Activity
----------------------- -------------------- -------------------- --------------
Event % Event Top Row Source % RwSrc
------------------------------ ------- --------------------------------- -------
a9j69t1bh6982 2008213504 1 8.33
SQL*Net more data to client 8.33 TABLE ACCESS - FULL 8.33
SELECT x."CUST_ID",x."CUST_FIRST_NAME",x."CUST_LAST_NAME",x."CUST_GENDER",x."CUS
T_YEAR_OF_BIRTH",x."CUST_MARITAL_STATUS",x."CUST_STREET_ADDRESS",x."CUST_POSTAL_

We obtain the SQL_ID, which is a9j69t1bh6982, and to view the explain plan for this SQL statement we provide it as a parameter to the query shown below.

SQL> set linesize 120


SQL> set pagesize 500
SQL> select * from TABLE(dbms_xplan.display_awr('a9j69t1bh6982'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID a9j69t1bh6982
--------------------
SELECT x."CUST_ID",x."CUST_FIRST_NAME",x."CUST_LAST_NAME",x."CUST_GENDER
",x."CUST_YEAR_OF_BIRTH",x."CUST_MARITAL_STATUS",x."CUST_STREET_ADDRESS"
,x."CUST_POSTAL_CODE",x."CUST_CITY",x."CUST_CITY_ID",x."CUST_STATE_PROVI
NCE",x."CUST_STATE_PROVINCE_ID",x."COUNTRY_ID",x."CUST_MAIN_PHONE_NUMBER
",x."CUST_INCOME_LEVEL",x."CUST_CREDIT_LIMIT",x."CUST_EMAIL",x."CUST_TOT
AL",x."CUST_TOTAL_ID",x."CUST_SRC_ID",x."CUST_EFF_FROM",x."CUST_EFF_TO",
x."CUST_VALID" FROM "SH"."CUSTOMERS" x

Plan hash value: 2008213504

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 414 (100)| |
| 1 | TABLE ACCESS FULL| CUSTOMERS | 55500 | 9755K| 414 (1)| 00:00:05 |
-------------------------------------------------------------------------------

We can see that the query is performing a full table scan of the CUSTOMERS table, something which is not very evident just by reading the AWR or ASH report.

Posted 4th May 2013 by Oracle

MAY Auto Trace Feature and Explain Plan


4
The Autotrace Feature of SQL*Plus
SQL*Plus has an autotrace feature which allows you to automatically display execution plans and helpful statistics for
each statement executed in a SQL*Plus session without having to use the EXPLAIN PLAN statement or query the plan
table. You turn this feature on and off with the following SQL*Plus command:

SET AUTOTRACE OFF|ON|TRACEONLY [EXPLAIN] [STATISTICS]

When you turn on autotrace in SQL*Plus, the default behavior is for SQL*Plus to execute each statement and display
the results in the normal fashion, followed by an execution plan listing and a listing of various server-side resources
used to execute the statement. By using the TRACEONLY keyword, you can have SQL*Plus suppress the query
results. By using the EXPLAIN or STATISTICS keywords, you can have SQL*Plus display just the execution plan without
the resource statistics or just the statistics without the execution plan.
In order to have SQL*Plus display execution plans, you must have privileges on a plan table by the name of plan_table.
In order to have SQL*Plus display the resource statistics, you must have SELECT privileges on v$sesstat, v$statname,
and v$session. There is a script in $ORACLE_HOME/sqlplus/admin called plustrce.sql which creates a role with these
three privileges in it, but this script is not run automatically by the Oracle installer.
The autotrace feature of SQL*Plus makes it extremely easy to generate and view execution plans, with resource
statistics as an added bonus. One key drawback, however, is that the statement being explained must actually be
executed by the database server before SQL*Plus will display the execution plan. This makes the tool unusable in the
situation where you would like to predict how long an operation might take to complete.

-------------------------------------------------------------------------------

Switching on the AUTOTRACE parameter in SQL*Plus causes an explain to be performed on every query.

SQL> SET AUTOTRACE ON


SQL> SELECT *
2 FROM emp e, dept d
3 WHERE e.deptno = d.deptno
4 AND e.ename = 'SMITH';

EMPNO ENAME JOB MGR HIREDATE SAL COMM


---------- ---------- --------- ---------- --------- ---------- ----------
DEPTNO DEPTNO DNAME LOC
---------- ---------- -------------- -------------
7369 SMITH CLERK 7902 17-DEC-80 800
20 20 RESEARCH DALLAS

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 NESTED LOOPS
2 1 TABLE ACCESS (FULL) OF 'EMP'
3 1 TABLE ACCESS (BY INDEX ROWID) OF 'DEPT'
4 3 INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)

Statistics
----------------------------------------------------------
81 recursive calls
4 db block gets
27 consistent gets
0 physical reads
0 redo size
941 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>

This is a relatively easy way to get the execution plan but there is an issue. In order to get the execution plan the statement must
be run to completion. If the query is particularly inefficient and/or returns many rows, this may take a considerable time. At first
glance, using the TRACEONLY option of AUTOTRACE seems to remove this issue, but this option merely suppresses the output of
the query data, it doesn't prevent the statement being run. As such, long running queries will still take a long time to complete, but
they will not present their data. The following example shows this in practice.

CREATE OR REPLACE FUNCTION pause_for_secs(p_seconds IN NUMBER) RETURN NUMBER AS


BEGIN
DBMS_LOCK.sleep(p_seconds);
RETURN p_seconds;
END;
/

Function created.

SQL> SET TIMING ON


SQL> SET AUTOTRACE ON
SQL> SELECT pause_for_secs(10) FROM DUAL;

PAUSE_FOR_SECS(10)
------------------
10

1 row selected.

Elapsed: 00:00:10.28

Execution Plan
----------------------------------------------------------
Plan hash value: 1550022268

-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
| 1 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------

Statistics
----------------------------------------------------------
189 recursive calls
0 db block gets
102 consistent gets
0 physical reads
0 redo size
331 bytes sent via SQL*Net to client
332 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
13 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SET AUTOTRACE TRACEONLY


SQL> SELECT pause_for_secs(10) FROM DUAL;

1 row selected.

Elapsed: 00:00:10.26

Execution Plan
----------------------------------------------------------
Plan hash value: 1550022268

-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
| 1 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
331 bytes sent via SQL*Net to client
332 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>

The query takes the same time to return (about 10 seconds) whether the TRACEONLY option is used or not. If
the TRACEONLY option prevented the query running, you would expect it to return instantly, like an EXPLAIN PLAN.
The solution to this is to use the TRACEONLY EXPLAIN option, which only performs the EXPLAIN PLAN, rather than running the
statement.
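
A minimal sketch of that option, reusing the function created above; only the explain plan is produced, so the statement comes back immediately instead of after the 10 second sleep:

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT pause_for_secs(10) FROM DUAL;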

EXPLAIN PLAN
The EXPLAIN PLAN method doesn't require the query to be run, greatly reducing the time it takes to get an execution plan for
long-running queries compared to AUTOTRACE. First the query must be explained.
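
A minimal sketch of the workflow (the EMP table is only an illustration):

SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE empno = 7369;
SQL> SELECT * FROM TABLE(dbms_xplan.display);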

-------------------------------------------------------------------------------------------------------------

Plan Table

The explain plan process stores data in the PLAN_TABLE. This table can be located in the current schema or a shared schema and is created in SQL*Plus as follows.

-- Creating a shared PLAN_TABLE prior to 11g


SQL> CONN sys/password AS SYSDBA
Connected
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL> GRANT ALL ON sys.plan_table TO public;
SQL> CREATE PUBLIC SYNONYM plan_table FOR sys.plan_table;

In Oracle 11g a shared PLAN_TABLE is created by default, but you can still create a local version of the table using the
"utlxplan.sql" script.
----------------------------------------------------------------------------------------------------------------

DBMS_XPLAN : Display Oracle Execution Plans


The DBMS_XPLAN package is used to format the output of an explain plan. It was introduced in Oracle 9i as a
replacement for the "utlxpls.sql" script or custom queries of the plan table. Subsequent database versions have
increased the functionality of the package.

====================================================================

INTERPRETING QUERY EXECUTION PLANS


http://www.dbspecialists.com/files/presentations/InterpretingQueryExecPlans_whitepaper.pdf

INTRODUCTION
SQL efficiency is central to database efficiency, and the ability to interpret SQL query execution plans is a critical skill of
the application developer and database administrator. In this paper, I review the process of displaying and interpreting
query execution plans. I also discuss how to generate graphical versions of query plans that are much easier to read
than their more common tabular counterparts.

WHAT IS A QUERY EXECUTION PLAN?


SQL is a declarative language, not a procedural language. This means that a SQL query only specifies what data to
retrieve, not how to retrieve it. A database component called the query optimizer decides how best to retrieve the data.
For example, it decides the order in which tables are processed, how to join the tables (nested loops, hash join, merge
join, etc), and which indexes to use if any.
-----------------------------

EXPLAIN PLAN MAY NOT TELL THE RIGHT STORY


The traditional, and least effective, way of obtaining a query plan is to use EXPLAIN PLAN. Third-party tools such as Toad use this method behind the scenes.

This method has the incurable defect that it may not produce the same plan that is actually used when the query is
actually executed.

example: http://kerryosborne.oracle-guy.com/2008/10/explain-plan-lies/

Furthermore, this method can only show Oracle's estimates of cardinalities and execution times for each step in the plan, but not actual cardinalities and execution times. I only mention this method for completeness but do not recommend it except as a teaching tool for beginners.
------------------------

AUTOTRACE MAY NOT TELL THE RIGHT STORY


The second method of obtaining a query plan is to use autotrace in SQL*Plus. This method requires the use of the
PLUSTRACE role and causes a plan to be displayed right after a query is executed in SQL*Plus. However it is not well
known that this method uses EXPLAIN PLAN behind the scenes and can therefore show a different plan than the one
which was actually used during the execution of the query; this can be terribly misleading.
-------------------------

TKPROF ONLY TELLS PART OF THE STORY


A better way to review a query plan is to trace a session and use the tkprof utility to format the trace file.

ALTER SESSION SET statistics_level=ALL;


ALTER SESSION SET tracefile_identifier=TEST;
ALTER SESSION SET sql_trace=TRUE;

While tkprof portrays a true story, it only tells part of the story. What we really need is the estimates side by side with the actual numbers.
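
To locate the resulting trace file on 11g, a quick sketch using V$DIAG_INFO (which reports the session's default trace file path):

SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';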
----------------------------

REVIEWING THE COMPLETE PICTURE WITH DBMS_XPLAN


The best way to review a query plan is using DBMS_XPLAN.DISPLAY_CURSOR which displays optimizer estimates side
by side with actual execution metrics. This requires access to data dictionary tables: V$SESSION, V$SQL, and
V$SQL_PLAN_STATISTICS_ALL. In order for Oracle to collect and display row counts and execution metrics for each
step in a query plan, it is critical that STATISTICS_LEVEL be set to ALL; this can be done at the session level or at the
system level. In the example below, we have broken the output into separate tables since it does not fit within the
margins.
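
A minimal sketch of that approach (the query and the comment in it are illustrative; the 'ALLSTATS LAST' format reports the statistics of the last execution of the cursor):

SQL> ALTER SESSION SET statistics_level = ALL;
SQL> SELECT /* xplan_demo */ COUNT(*) FROM emp;
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));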

================================================================
http://www.dwbiconcepts.com/database/22-database-oracle/26-oracle-query-plan-a-10-minutes-guide.html

Understanding Oracle QUERY PLAN - A 10-minute guide

Confused about how to understand an Oracle query execution plan? This 10-minute, step-by-step primer is the first of a two-part article that will teach you exactly the things you must know about query plans.

What is Query Execution Plan?


When you fire an SQL query to Oracle, Oracle database internally creates a query execution plan in order to fetch the desired data from the
physical tables. The query execution plan is nothing but a set of methods on how the database will access the data from the tables. This
query execution plan is crucial as different execution plans will need different cost and time for the query execution.

How the execution plan is created depends on what type of query optimizer is being used in your Oracle database. There are two different optimizer options: the Rule Based Optimizer (RBO) and the Cost Based Optimizer (CBO). For Oracle 10g, the CBO is the default optimizer. The Cost Based Optimizer makes Oracle generate the plan by taking all the related table statistics into consideration. On the other hand, the RBO uses a fixed set of pre-defined rules to generate the query plan. Obviously such a fixed set of rules may not always be able to create the most efficient plan, because an efficient plan depends heavily on the nature and volume of the tables' data. For this reason, the CBO is preferred over the RBO.

Understanding Oracle Query Execution Plan
But this article is not for comparing RBO and CBO (In fact, there is not much point in comparing these two). This article will briefly help you
understand,

1. How can we see Query Execution Plan

2. How do we understand (or rather interpret) the execution plan.

So let's begin. I will be using an Oracle 10g server and the SQL*Plus client to demonstrate all the details.

Oracle Full Table Scan (FTS)


Let's start by creating a simple product table with the following structure (a DDL sketch follows the column list),

ID number(10)
NAME varchar2(100)
DESCRIPTION varchar2(255)
SERVICE varchar2(30)
PART_NUM varchar2(50)
LOAD_DATE date
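
A minimal DDL sketch matching that structure (storage details are omitted and purely illustrative):

CREATE TABLE product (
  id          NUMBER(10),
  name        VARCHAR2(100),
  description VARCHAR2(255),
  service     VARCHAR2(30),
  part_num    VARCHAR2(50),
  load_date   DATE
);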

Next I will insert 15,000 records into this newly created table (data taken from an existing product table in one of my clients' production environments).

Remember, currently there is no index on the table.

So we start our journey by writing a simple select statement on this table as below,

SQL> explain plan for select * from product;


Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------
Plan hash value: 3917577207
-------------------------------------
| Id | Operation | Name |
-------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL | PRODUCT|
-------------------------------------

Note
-----
- rule based optimizer used (consider using cbo)

Notice that the optimizer has decided to use the RBO instead of the CBO, as Oracle does not have any statistics for this table. Let's now build some statistics for this table by issuing the following command,

SQL> Analyze table product compute statistics;

Now let's do the same experiment once again,

SQL> explain plan for select * from product;
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------
Plan hash value: 3917577207
-----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
-----------------------------------------------------
| 0 | SELECT STATEMENT | | 15856 | 1254K|
| 1 | TABLE ACCESS FULL | PRODUCT | 15856 | 1254K|
-----------------------------------------------------

You can easily see that this time the optimizer has used the Cost Based Optimizer (CBO) and has also detailed some additional information (e.g. Rows etc.).

The point to note here is that Oracle is reading the whole table (denoted by TABLE ACCESS FULL), which is very obvious because the select * statement that is being fired is trying to read everything. So, there's nothing interesting up to this point.

Index Unique Scan


Now let's add a WHERE clause to the query and also create an index on the table.

SQL> create unique index idx_prod_id on product (id) compute statistics;

Index created.

SQL> explain plan for select id from product where id = 100;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------
Plan hash value: 2424962071

---------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
---------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 |
|* 1 | INDEX UNIQUE SCAN | IDX_PROD_ID | 1 | 4 |
---------------------------------------------------------

So the above plan indicates that the CBO is performing an INDEX UNIQUE SCAN. This means that, in order to fetch the id value as requested, Oracle is actually reading the index only and not the whole table. Of course this will be faster than the FULL TABLE ACCESS operation shown earlier.

Table Access by Index RowID


Searching the index is a fast and efficient operation for Oracle, and when Oracle finds the desired value it is looking for (in this case id=100), it can also find the rowid of the record in the product table that has id=100. Oracle can then use this rowid to fetch further information if requested in the query. See below,

SQL> explain plan for select * from product where id = 100;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------
Plan hash value: 3995597785

----------------------------------------------------------
| Id | Operation | Name |Rows | Bytes|
----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 81 |
| 1 | TABLE ACCESS BY INDEX ROWID| PRODUCT| 1 | 81 |
|* 2 | INDEX UNIQUE SCAN | IDX_PROD_ID | 1 | |
----------------------------------------------------------

TABLE ACCESS BY INDEX ROWID is the interesting part to check here. Since we have now specified select * for id=100, Oracle first uses the index to obtain the rowid of the record, and then it selects all the columns by that rowid.

Index Range Scan


But what if we specify a <, > or BETWEEN criterion in the WHERE clause instead of an equality condition? Like below,

SQL> explain plan for select id from product where id < 10;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------
Plan hash value: 1288034875

-------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
-------------------------------------------------------
| 0 | SELECT STATEMENT | | 7 | 28 |
|* 1 | INDEX RANGE SCAN| IDX_PROD_ID | 7 | 28 |
-------------------------------------------------------

So this time the CBO goes for an INDEX RANGE SCAN instead of an INDEX UNIQUE SCAN. The same thing will normally happen if we use a BETWEEN clause.

Index Fast Full Scan


Now, let's see another interesting aspect of the INDEX scan by simply changing the condition from id < 10 to id > 10. Before we see the outcome, remind yourself that there are over 15,000 products, with ids starting from 1 and running to 15,000+. So if we ask for id > 10 we are likely to get almost 14,990+ records in return. Does Oracle still go for an INDEX RANGE SCAN in this case? Let's see,

SQL> explain plan for select id from product where id>10;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------
Plan hash value: 2179322443

--------------------------------------------------------
| Id | Operation | Name | Rows |Bytes |
--------------------------------------------------------
| 0 | SELECT STATEMENT | | 15849|63396 |
|* 1 | INDEX FAST FULL SCAN| IDX_PROD_ID| 15849|63396 |
---------------------------------------------------------

So, Oracle is actually using an INDEX FAST FULL SCAN to quickly scan through the index and return the records from the table. This scan is "quick" because, unlike an index full scan or an index unique scan, an INDEX FAST FULL SCAN uses multiblock input/output (I/O), whereas the former two use single-block I/O.

Note on QUERY PLAN


FTS or Full Table Scan

Whole table is read up to the high water mark

Uses multiblock input/output

Buffers from the FTS operation are stored at the LRU end of the buffer cache

Index Unique Scan

Single block input/output

Index Fast Full Scan

Multi block i/o possible

Returned rows may not be in sorted order

Index Full Scan

Single block i/o

Returned rows generally will be in sorted order

So I think we covered the basics of simple SELECT queries running on a single table.

Posted 4th May 2013 by Oracle

MAY SQL Trace , TKPROF


3

SQL_TRACE (10046 trace)

SQL_TRACE is the main method for collecting SQL Execution information in Oracle. It records a wide range of
information and statistics that can be used to tune SQL operations.

Enabling SQL_TRACE

The SQL Trace facility can be enabled/disabled for an individual session or at the instance level. If the
initialization parameter SQL_TRACE is set to TRUE in the init.ora of an instance, then all sessions will be traced.
Note that using this initialization parameter to enable the SQL trace facility for the entire instance can have a
severe performance impact.

------------------------------
The quickest way to capture the SQL being processed by a session is to switch on SQL trace or set the 10046 event for a representative period of time. The resulting trace files can be read in their raw state or translated using the tkprof utility.
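
A minimal sketch of the event-based approach (level 12 adds wait and bind information; level 1 is equivalent to plain SQL trace):

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the workload to be traced
ALTER SESSION SET EVENTS '10046 trace name context off';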

------------------------------------------------

Tracing a SQL session

Start session trace


To start a SQL trace for the current session, execute:

ALTER SESSION SET sql_trace = true;

You can also add an identifier to the trace file name for later identification:

ALTER SESSION SET sql_trace = true;


ALTER SESSION SET tracefile_identifier = mysqltrace;

Stop session trace


To stop SQL tracing for the current session, execute:

ALTER SESSION SET sql_trace = false;

Tracing other users' sessions

DBAs can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:

Get the SID and SERIAL# for the process you want to trace.

SQL> select sid, serial# from sys.v_$session where ...


SID SERIAL#
---------- ----------
8 13607

Enable tracing for your selected process:

SQL> ALTER SYSTEM SET timed_statistics = true;


SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);

Ask the user to run just what is necessary to demonstrate the problem.

Disable tracing for your selected process:

SQL> execute dbms_system.set_sql_trace_in_session(8,13607, false);

Look for trace file in USER_DUMP_DEST:

$ cd /app/oracle/admin/oradba/udump
$ ls -ltr
total 8
-rw-r----- 1 oracle dba 2764 Mar 30 12:37 ora_9294.trc

Tracing an entire database


To enable SQL tracing for the entire database, execute:

ALTER SYSTEM SET sql_trace = true SCOPE=MEMORY;

To stop, execute:

ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;

Identifying trace files


Trace output is written to the database's UDUMP directory.
The default name for a trace file is INSTANCE_PID_ora_TRACEID.trc where:
INSTANCE is the name of the Oracle instance,
PID is the operating system process ID (V$PROCESS.OSPID); and
TRACEID is a character string of your choosing.

Size of trace files


The trace file size is limited by the parameter MAX_DUMP_FILE_SIZE. The unit of this parameter, if you don't specify the K or M option, is the OS block size.
Be sure this parameter is set to a value high enough for your purpose (e.g. some MB). Of course this depends on the amount and complexity of the statements which have to be run while tracing. If this value is set too low, the dump file size limit may be reached before the execution of the crucial statements, and the trace file will be closed before the interesting parts can be recorded in it.
On the other hand, when this parameter is set to UNLIMITED (the default value), if the program being traced keeps running and tracing is not switched off, the trace file can grow without limit, which means until the associated file system or disk is full.
A DBA can stop the trace of a session using the DBMS_MONITOR (10g and up), DBMS_SYSTEM or DBMS_SUPPORT package.
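
A minimal sketch using DBMS_MONITOR (10g and later); the SID and serial# are the example values used earlier:

EXEC dbms_monitor.session_trace_enable(session_id => 8, serial_num => 13607, waits => TRUE, binds => FALSE);
EXEC dbms_monitor.session_trace_disable(session_id => 8, serial_num => 13607);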

Formatting output
Trace output is quite unreadable. However, Oracle provides a utility, called TKProf, that can be used to format trace output.

==========================================================================

TKPROF Usage

TKPROF allows you to analyse a trace file to determine where time is


being spent and what query plans are being used on SQL statements.

1 - Set TIMED_STATISTICS if required at database level.

2 - Get SQL_TRACE output for the session you wish to monitor

3 - Find the appropriate trace file (In USER_DUMP_DEST, default


$ORACLE_HOME/rdbms/log on Unix).
You can find the most recent trace files on Unix with the command:
ls -ltr
This will list the most recent files LAST

4 - Run tkprof on the trace file thus:

tkprof tracefile outfile [explain=user/password] [options...]
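
For example (the trace and output file names are placeholders, and the explain credentials are illustrative):

tkprof ora_9294.trc ora_9294_report.txt explain=scott/tiger sys=no sort=prsela,exeela,fchela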

TKPROF Options
~~~~~~~~~~~~~~
print=integer List only the first 'integer' SQL statements.
insert=filename List SQL statements and data inside INSERT statements.
sys=no TKPROF does not list SQL statements run as user SYS.
record=filename Record statements found in the trace file.
sort=option Set of zero or more of the following sort options:

prscnt number of times parse was called


prscpu cpu time parsing
prsela elapsed time parsing
prsdsk number of disk reads during parse
prsqry number of buffers for consistent read during parse
prscu number of buffers for current read during parse
prsmis number of misses in library cache during parse

execnt number of times execute was called


execpu cpu time spent executing
exeela elapsed time executing
exedsk number of disk reads during execute
exeqry number of buffers for consistent read during execute
execu number of buffers for current read during execute
exerow number of rows processed during execute
exemis number of library cache misses during execute

fchcnt number of times fetch was called


fchcpu cpu time spent fetching
fchela elapsed time fetching
fchdsk number of disk reads during fetch
fchqry number of buffers for consistent read during fetch
fchcu number of buffers for current read during fetch
fchrow number of rows fetched

userid userid of user that parsed the cursor

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TKProf Interpretation

TKProf is an executable that 'parses' Oracle trace files to produce more readable output.
In the default mode, all the information in TKProf is available from the base trace file.

TKProf Structure

TKProf output for an individual cursor has the following structure:

SELECT NULL FROM DUAL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 3 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 3 0 1
Misses in library cache during parse: 0
Optimizer goal: FIRST_ROWS
Parsing user id: 271
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=21 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00

Overall the structure is:

SQL Statement
Parse/Execute/Fetch statistics and timings
Library Cache information
Row source plan
Events waited for by the statement

Parse/Execute/Fetch statistics and timings

This section contains the bulk of the useful timing information for each statement. This can be used in conjunction with the 'Row source plan' and 'Events waited for by the statement' to give the full picture. Columns in the Parse/Execute/Fetch table have the following meanings:

call Statistics for each cursor's activity are divided in to 3 areas: Parse/Execute/Fetch. A total is also calculated.

Parse statistics from parsing the cursor. This includes information for plan generation etc.

Execute statistics for the execution phase of a cursor

Fetch statistics for actually fetching the rows

count number of times each individual activity has been performed on this particular cursor

cpu cpu time used by this cursor

elapsed elapsed time for this cursor (includes the cpu time)

disk This indicates the number of blocks read from disk. Generally it would be preferable for blocks to be read from the buffer cache rather than disk.

query This column is incremented if a buffer is read in consistent mode. A consistent mode buffer is one that has been generated to give a consistent read snapshot for a long running transaction.

current This column is incremented if a buffer is found in the buffer cache that is new enough for the current transaction and is in current mode (and it is not a CR buffer). This applies to buffers that have been read in to the cache as well as buffers that already exist in the cache in current mode.

rows Rows retrieved by this step

Library Cache information

Tracing a statement records some information regarding library cache usage which is externalised by TKProf in this section. Most important here is "Misses in library cache during parse:" which shows whether or not a statement is being re-parsed. If a statement is being shared well then you should see a minimal number of misses here (1 or 0 preferably). If sharing is not occurring then high values in this field can indicate that.

Row source plan

This section displays the access path used at execution time for each statement along with timing and actual row counts returned by each step in the plan. This can be very useful for a number of reasons.

Example:

Rows Row Source Operation


------- ---------------------------------------------------
[A] 1 TABLE ACCESS FULL DUAL [B] (cr=3 [C] r=0 [D] w=0 [E] time=21 us [F])

>>Row count [A] - the row counts output in this section are the actual number of rows returned at each step in the query execution. These actual counts can be compared with the estimated cardinalities (row counts) from an optimizer explain plan. Any differences may indicate a statistical problem that may result in a poor plan choice.

>>Row Source Operation [B] - Shows the operation executed at this step in the plan.

>>IO Stats - For each step in the plan, [C] is the consistent reads, [D] is the physical reads and [E] is the writes. These statistics can be useful in identifying steps that read or write a particularly large proportion of the overall data.

>>Timing - [F] shows the cumulative elapsed time for the step and the steps that preceded it. This section is very useful when looking for the point in an access path that takes all the time. By looking for the point at which the majority of the time originates it is possible to narrow down a number of problems.

Events waited for by the statement


This section displays all wait events that a statement has waited for during the tracing.

This section can be very useful when used in conjunction with the statistics and row source information for tracking down the causes of problems associated with long wait times.

High numbers of waits or waits with a long total duration may be candidates for investigation dependent on the wait itself.

==============================================================

Potential TKProf Usage Examples

Spotting Relatively High Resource Usage

update ...
where ...

-----------------------------------------------------------------------
| call | count | cpu | elapsed | disk | query | current | rows |
|---------|-------|-----|---------|------|---------|---------|--------|
| Parse | 1 | 7 | 122 | 0 | 0 | 0 | 0 |
| Execute | 1 | 75 | 461 | 5 | [H] 297 | [I] 3 | [J] 1 |
| Fetch | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
-----------------------------------------------------------------------

This statement is a single execute of an update.

[H] shows that this query is visiting 297 buffers to find the rows to update
[I] shows that only 3 buffer are visited performing the update
[J] shows that only 1 row is updated.

Reading 297 buffers to update 1 row is a lot of work and would tend to indicate that the access path being used is not particularly efficient.

Perhaps there is an index missing that would improve the access performance?

Spotting Over Parsing

select ...

-------------------------------------------------------------------------
| call | count | cpu | elapsed | disk | query | current | rows |
|---------|-------|---------|---------|------|--------|---------|-------|
| Parse | [M] 2 | [N] 221 | 329 | 0 | 45 | 0 | 0 |
| Execute | 3 | [O] 9 | [P] 17 | 0 | 0 | 0 | 0 |
| Fetch | 3 | 6 | 8 | 0 | [L] 4 | 0 | [K] 1 |
-------------------------------------------------------------------------

Misses in library cache during parse: 2 [Q]

Here we have a select that we suspect may be a candidate for over parsing.

[K] shows that the query has returned 1 row.

[L] shows that 4 buffers were read to get this row back.

This is fine.

[M] shows that the statement is parsed twice - this is not desirable, especially as the parse cpu usage [N] is high in comparison to the execute figures [O] & [P] (i.e. the elapsed time for execute is 17 seconds but the statement spends over 300 seconds to determine the access path etc. in the parse phase).

[Q] shows that these parses are hard parses. If [Q] was 1 then the statement would have had 1 hard parse followed by a soft parse (which just looks up the already parsed detail in the library cache).

This is not a particularly bad example in terms of total counts since the query has only been executed a few times. However if this pattern is reproduced for each execution this could be a significant issue.

Excessive parsing should be avoided as far as possible by ensuring that code is shared:

using bind variables (see the sketch below)

make the shared pool large enough to hold query definitions in memory long enough to be reused.
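
A minimal SQL*Plus sketch of the bind variable approach (the table and column names are illustrative):

SQL> VARIABLE v_id NUMBER
SQL> EXEC :v_id := 100;
SQL> SELECT name FROM product WHERE id = :v_id;
-- repeated executions with new values of :v_id reuse the same cursor,
-- so the statement is soft parsed rather than hard parsed each time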

Spotting Queries that Execute too frequently

The following query has a high elapsed time and is a candidate for investigation:

UPDATE ...
SET ...
WHERE COL = :bind1;

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- --------
Parse 0 0.00 0.00 0 0 0 0
Execute 488719 66476.95 66557.80 1 488729 1970566 488719
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- --------
total 488719 66476.95 66557.80 1 488729 1970566 488719

From the above, the update executes 488,719 times and takes in total ~65,000 seconds to do this.

The majority of the time is spent on CPU.

A single row is updated per execution.

For each row updated ~1 buffer is queried. ~2 million buffers are visited to perform the update.

On average the elapsed time is ~0.1 second per execution.

A sub-second execution time would normally be acceptable for most queries, but if the query is not scalable and is executed numerous times, then the time can quickly add up to a large number.

It would appear that in this case the update may be part of a loop where individual values are passed and 1 row is updated per value. This structure does not scale with a large number of values, meaning that it can become inefficient.

One potential solution is to try to 'batch up' the updates so that multiple rows are updated within the same execution. As Oracle releases have progressed, a number of optimizations and enhancements have been made to improve the handling of 'batch' operations and to make them more efficient. In this way, code modifications to replace frequently executed, relatively inefficient statements by more scalable operations can have a significant impact.
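
A minimal PL/SQL sketch of the batching idea, assuming the values arrive in a collection (all names here are illustrative, not taken from the traced application):

DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids := t_ids(101, 102, 103); -- values that would otherwise be updated one at a time
BEGIN
  -- a single execution updates all the rows
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE product
    SET load_date = SYSDATE
    WHERE id = l_ids(i);
  COMMIT;
END;
/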

============================================================================

SQL trace files contain detailed timing information. By default, Oracle does not track timing, so all timing figures in trace files will
show as zero. If you would like to see legitimate timing information, then you need to enable timed statistics. You can do this at the
instance level by setting the following parameter in the instance parameter file and restarting the instance:

timed_statistics = true

You can also dynamically enable or disable timed statistics collection at either the instance or the session level with the following
commands:

ALTER SYSTEM SET timed_statistics = TRUE|FALSE;


ALTER SESSION SET timed_statistics = TRUE|FALSE;

There is no known way to enable timed statistics collection for an individual session from another session (akin to the
SYS.dbms_system.set_sql_trace_in_session built-in).

There is very high overhead associated with enabling SQL trace. Some DBAs believe the performance penalty could be over 25%.
Another concern is that enabling SQL trace causes the generation of potentially large trace files. For these reasons, you should use
SQL trace sparingly. Only trace what you need to trace and think very carefully before enabling SQL trace at the instance level.

On the other hand, there is little, if any, measurable performance penalty in enabling timed statistics collection. Many DBAs run
production databases with timed statistics collection enabled at the system level so that various system statistics (more than just
SQL trace files) will include detailed timing information

Posted 3rd May 2013 by Oracle

MAY Performance Tuning - page2


2

Source: Link

Server Tuning

iostat (disk)
vmstat (RAM)
netstat(network)
top(CPU) and prstat
SAR

iostat stands for input/output statistics and reports statistics for I/O devices such as disk drives.
vmstat gives the statistics for virtual memory, and
netstat gives the network statistics.

TOP

The top command is used to see process status, but it is not used in Solaris 10; in Solaris 10 it is replaced by prstat.

top is used for checking the current system status such as processes, CPU and memory.

PRSTAT

you can use prstat to identify which processes are consuming the CPU resources. The prstat -s cpu -n 5 command is used to list the five processes
that are consuming the most CPU resources. The -s cpu flag tells prstat to sort the output by CPU usage. The -n 5 flag tells prstat to restrict the
output to the top five processes.

$ prstat -s cpu -n 5

Load Average::

For example, one can interpret a load average of "1.73 0.50 7.98" on a single-CPU system as:

during the last minute, the CPU was overloaded by 73% (1 CPU with 1.73 runnable processes, so that 0.73 processes had
to wait for a turn)

during the last 5 minutes, the CPU was underloaded 50% (no processes had to wait for a turn)

during the last 15 minutes, the CPU was overloaded 698% (1 CPU with 7.98 runnable processes, so that 6.98 processes had
to wait for a turn)

This means that this CPU could have handled all of the work scheduled for the last minute if it were 1.73 times as fast, or
if there were two (the ceiling of 1.73) times as many CPUs, but that over the last five minutes it was twice as fast as
necessary to prevent runnable processes from waiting their turn.

In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run,
and each one could be scheduled into a CPU

SAR

The SAR suite of utilities originated in Solaris. It became popular and now runs on most flavors of UNIX, including AIX, HP-UX, and Linux.
(System Activity Reporter)

The reason for sar's creation was that gathering system activity data from vmstat and iostat is pretty time-consuming. If you try to automate the gathering of system activity data and the creation of periodic reports, you naturally arrive at a tool like sar.

The System Activity Reporter can monitor half a dozen metrics related to overall system performance, for example:

cpu utilization (it's a pretty effective tool for spotting CPU bottlenecks)
hard disk utilization
terminal IO
number of files open
processes running

see below link for SAR example:

http://docs.oracle.com/cd/E19455-01/805-7229/6j6q8svhc/index.html
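
In addition, a typical interactive invocation to sample CPU utilization (5-second intervals, 10 samples) looks like this:

$ sar -u 5 10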

Posted 2nd May 2013 by Oracle

MAY Performance Tuning - Page1


1
Top Mistakes Found in Oracle Systems

This section lists the most common mistakes found in Oracle systems. By following the Oracle performance
improvement methodology, you should be able to avoid these mistakes altogether. If you find these mistakes in your
system, then re-engineer the application where the performance effort is worthwhile.

1. Bad Connection Management


The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has a significant impact on performance and is totally unscalable.

2. Bad Use of Cursors and the Shared Pool


Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable.
Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications
generating dynamic SQL.

3. Bad SQL
Bad SQL is SQL that uses more resources than appropriate for the application requirement. This can be a decision
support systems (DSS) query that runs for more than 24 hours or a query from an online application that takes more
than a minute. SQL that consumes significant system resources should be investigated for
potential improvement. ADDM identifies high load SQL and the SQL tuning advisor can be used to provide
recommendations for improvement.

4. Use of Nonstandard Initialization Parameters

These might have been implemented based on poor advice or incorrect assumptions. Most systems will give
acceptable performance using only the set of basic parameters.
Optimizer parameters set in the initialization parameter file can override proven optimal execution plans.
For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to
ensure consistency of performance.

5. Getting Database I/O Wrong


Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly,
because they configure disks by disk space and not I/O bandwidth.

6. Redo Log Setup Problems


Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put
a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and

the database will wait for the archive process to catch up.

7. Long Full Table Scans


Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing
indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.

8. High Amounts of Recursive (SYS) SQL

Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent
allocations, taking place. This is unscalable and impacts user response time.
Use locally managed tablespaces to reduce recursive SQL due to extent allocation.

9. Deployment and Migration Errors


In many cases, an application uses too many resources because the schema owning the tables has not been
successfully migrated from the development environment or from an older implementation. Examples of this are missing
indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user
performance. When migrating applications of known performance, export the schema statistics to maintain plan stability
using the DBMS_STATS package.

Although these errors are not directly detected by ADDM, ADDM highlights the
resulting high load SQL.
=======================================================================

Configuring a Database for Performance

Necessary Initialization Parameters Without Performance Impact


Parameter & Description

DB_NAME - Name of the database. This should match the ORACLE_SID environment variable.
DB_DOMAIN - Location of the database in Internet dot notation.
OPEN_CURSORS - Limit on the maximum number of cursors (active SQL statements) for each session. The setting is application-dependent; 500 is recommended.
CONTROL_FILES - Set to contain at least two files on different disk drives to prevent failures from control file loss.
DB_FILES - Set to the maximum number of files that can be assigned to the database.

Important Initialization Parameters With Performance Impact


Parameter & Description

COMPATIBLE

Specifies the release with which the Oracle server must maintain compatibility. It lets you take advantage of the maintenance improvements of a new release immediately in
your production systems without testing the new functionality in your environment. If your application was designed for a specific release of Oracle, and you are actually installing a later
release, then you might want to set this parameter to the version of the previous release.

DB_BLOCK_SIZE

Sets the size of the Oracle database blocks stored in the database files and cached in the SGA. The range of values depends on the operating system, but it is typically 8192 for
transaction processing systems and higher values for database warehouse systems.

SGA_TARGET

Specifies the total size of all SGA components. If SGA_TARGET is specified, then the buffer cache (DB_CACHE_SIZE), Java pool (JAVA_POOL_SIZE), large pool (LARGE_POOL_SIZE), and shared pool (SHARED_POOL_SIZE) memory pools are automatically sized.

PGA_AGGREGATE_TARGET

Specifies the target aggregate PGA memory available to all server processes attached to the instance

PROCESSES

Sets the maximum number of processes that can be started by that instance. This is the most important primary parameter to set, because many other parameter values are deduced from this.

SESSIONS

This is set by default from the value of processes. However, if you are using the shared server, then the deduced value is likely to be insufficient.

UNDO_MANAGEMENT

Specifies which undo space management mode the system should use. AUTO mode is recommended.

UNDO_TABLESPACE
Specifies the undo tablespace to be used when an instance starts up.
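
A minimal init.ora sketch pulling these parameters together (all values are illustrative only and must be sized for your own system):

db_name=ORCL
control_files=('/u01/oradata/ORCL/control01.ctl','/u02/oradata/ORCL/control02.ctl')
compatible='11.1.0'
db_block_size=8192
sga_target=2G
pga_aggregate_target=512M
processes=300
undo_management=AUTO
undo_tablespace=UNDOTBS1
open_cursors=500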

Posted 1st May 2013 by Oracle

