
Reading the AWR Report

An AWR report is very similar to the STATSPACK report from Oracle9i, and it contains vital elapsed-time
information about what happened during a particular snapshot range. The data in an AWR or STATSPACK
report is the delta, or change, between the accumulated metrics within each snapshot.
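As a quick sketch of how such a report is produced (assuming a standard installation, where "?" expands to ORACLE_HOME in SQL*Plus), the awrrpt.sql script generates the report interactively:

```sql
-- Run from SQL*Plus as a DBA user. The script prompts for the
-- report type (html or text), the number of days of snapshots
-- to list, and the begin/end snapshot IDs.
@?/rdbms/admin/awrrpt.sql
```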
The main sections in an AWR report include:
Report Summary: This gives an overall summary of the instance during the snapshot period, and it
contains important aggregate summary information.
Cache Sizes (end): This shows the size of each SGA region after AMM has changed them. This
information can be compared to the original init.ora parameters at the end of the AWR report.
Load Profile: This important section shows key activity rates, expressed both per second and per
transaction.
Instance Efficiency Percentages: With a target of 100%, these are high-level ratios for activity in the
SGA.
Shared Pool Statistics: This is a good summary of changes to the shared pool during the snapshot
period.
Top 5 Timed Events: This is the most important section in the AWR report. It shows the top wait events
and can quickly show the overall database bottleneck.
Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.
Wait Events: This AWR report section provides more detailed wait event information for foreground user
processes, including the Top 5 wait events and the many other wait events that occurred during the
snapshot interval.
Background Wait Events: This section covers the wait events for the background processes.
Time Model Statistics: Time model statistics report how database-processing time is spent. This section
contains detailed timing information on particular components participating in database processing.
Operating System Statistics: The load on the server machine matters, and this section shows the
main external resources including I/O, CPU, memory, and network usage.
Service Statistics: The service statistics section gives information about how particular services
configured in the database are operating.
SQL Section: This section displays top SQL, ordered by important SQL execution metrics.
SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution time
during processing.
SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time during
their processing.
SQL Ordered by Gets: These SQLs performed a high number of logical reads while retrieving data.
SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.
SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.
SQL Ordered by Sharable Memory: Includes SQL statements whose cursors consumed a large
amount of SGA shared pool memory.
SQL Ordered by Version Count: These SQLs have a large number of versions in the shared pool for
some reason.
Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.
Instance Activity Stats (Absolute Values): This section contains statistics whose values are absolute,
rather than derived as the difference between the end and start snapshots.
Instance Activity Stats (Thread Activity): This section reports log switch activity.
I/O Section: This section shows the all-important I/O activity for the instance, broken down by
tablespace and data file, and includes buffer pool statistics.
Advisory Section: This section shows details of the advisories for the buffer cache, shared pool, PGA,
and Java pool:
Buffer Pool Advisory
PGA Aggr Summary: PGA Aggr Target Stats; PGA Aggr Target Histogram; and PGA Memory Advisory.
Shared Pool Advisory
Java Pool Advisory
Buffer Wait Statistics: This important section shows buffer cache wait statistics.
Enqueue Activity: This important section shows how enqueues operate in the database. Enqueues are
special internal structures that coordinate concurrent access to various database resources.
Undo Segment Summary: This section gives a summary about how undo segments are used by the
database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.
Latch Activity: This section shows details about latch statistics. Latches are lightweight serialization
mechanisms used to single-thread access to internal Oracle structures.
Latch Sleep Breakdown, Latch Miss Sources, Parent Latch Statistics, Child Latch Statistics
Segment Section: This report section provides details about hot segments using the following criteria:
Segments by Logical Reads: Includes the top segments that experienced a high number of logical
reads.
Segments by Physical Reads: Includes the top segments that experienced a high number of physical
disk reads.
Segments by Buffer Busy Waits: These segments have the largest number of buffer waits caused
by their data blocks.
Segments by Row Lock Waits: Includes segments that had a large number of row locks on their
data.
Segments by ITL Waits: Includes segments that experienced heavy contention for their Interested
Transaction Lists (ITLs). ITL contention can be reduced by increasing the INITRANS storage parameter of the table.
Dictionary Cache Stats: This section exposes details about how the data dictionary cache is operating.
Library Cache Activity: Includes library cache statistics describing how shared library objects are
managed by Oracle.
SGA Memory Summary: This section provides summary information about various SGA regions.
init.ora Parameters: This section shows the original init.ora parameters for the instance during the
snapshot period.

Reading the TK*Prof Output
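As a rough sketch (the trace file name is a placeholder), the raw trace is produced by enabling SQL trace in the session, and the trace file is then formatted with the tkprof utility:

```sql
-- In the session to be traced:
ALTER SESSION SET sql_trace = TRUE;
-- ... run the SQL of interest, then:
ALTER SESSION SET sql_trace = FALSE;

-- The trace file appears in the user dump destination. Format it
-- from the OS prompt, sorted by parse/execute/fetch elapsed time:
--   tkprof ora_12345.trc report.txt sort=prsela,exeela,fchela sys=no
```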
The sort of things you should be looking for are:
-For each SQL, check the Elapsed statistic. This shows the elapsed time for each SQL. High values
obviously indicate long-running SQL.
-Note the Disk and Query columns. These indicate data retrieval from disk and data retrieval from
memory respectively. If the Disk column is relatively low compared to the Query column, then it could
mean that the SQL has been run several times and the data has been cached. This might not give a true
indication of the performance when the data is not cached. Either have the database bounced by the
DBA, or try the trace again another day.
-The first row of statistics for each SQL is for the Parse step. If a SQL is run many times, it usually does
not need to be re-parsed unless Oracle needs the memory it is taking up, and swaps it out of the shared
pool. If you have SQLs parsed more than once, get the DBA to check whether the database can be tuned
to reduce this.
-A special feature of the Explain Plan used in TK*Prof is that it shows the number of rows read for each
step of the execution plan. This can be useful to track down Range Scan problems where thousands of
rows are read from an index and table, but only a few are returned after the bulk is filtered out.
-In order to run SQL statements, Oracle must perform its own SQL statements to query the data
dictionary, looking at indexes, statistics etc. This is called Recursive SQL. The last two entries in the
TK*Prof output are summaries of the Recursive and Non-Recursive (ie. "normal") SQL. If the recursive
SQL is taking up more than a few seconds, then it is a likely sign that the Shared Pool is too small. Show
the TK*Prof output to the DBA to see if the database can be tuned.
Analysing problem SQL
1. Quick Fixes -- Are the underlying tables and indexes analyzed?
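A sketch of that first check (the schema and table names are placeholders): gather fresh optimizer statistics with DBMS_STATS, letting CASCADE cover the table's indexes:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',  -- placeholder schema
    tabname => 'ORDERS',     -- placeholder table
    cascade => TRUE);        -- gather index statistics too
END;
/
```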

2. Bad Hints -- Check any INDEX hints; a hint that names a dropped or renamed index is silently
ignored, and a hint chosen for old data volumes can now force a bad plan.
3. Cartesian Products -- Oracle joins all of the rows in one table to all of the rows in another table. If the
number of rows in one of those tables is zero or one, this will not be a problem. Otherwise, look for
MERGE JOIN (CARTESIAN) in the Explain Plan.
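As an illustration (table and column names are invented), a Cartesian product usually means a join condition is missing:

```sql
-- Missing join condition: every order pairs with every customer,
-- and the plan shows MERGE JOIN (CARTESIAN)
SELECT o.order_id, c.name
FROM   orders o, customers c;

-- Corrected: the join condition restores a normal join
SELECT o.order_id, c.name
FROM   orders o, customers c
WHERE  o.customer_id = c.customer_id;
```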
4. Full Table Scans --
Good: used on a small table (< 500 rows);
used in a query returning more than a few percent of the rows in a table (high-volume SQL);
used in a Sort-Merge or Hash join, but only when these join methods are intended.
Bad: used on a large table where indexed or hash cluster access is preferred;
used on a medium-large table (> 500 rows) as the outer table of a Nested Loop join;
used on a medium-large table (> 500 rows) in a nested sub-query;
used in a Sort-Merge or Hash join when a Nested Loop join is preferred.
5. Multiple Table Joins -- If you have a table join (ie. a FROM clause) with five or more tables, and you
have not included a hint for join order (eg. ORDERED or LEADING), then Oracle may be joining the
tables in the wrong order.
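A sketch of the two hints (table and column names are placeholders): ORDERED joins the tables exactly in FROM-clause order, while LEADING fixes only the driving table and leaves the rest to the optimizer:

```sql
-- Join in FROM-clause order: put the most selective table first
SELECT /*+ ORDERED */ d.id, b.amount
FROM   small_driver d, medium_tab m, big_tab b
WHERE  d.id = m.d_id
AND    m.id = b.m_id;

-- Or fix only the driving table and let the optimizer order the rest
SELECT /*+ LEADING(d) */ d.id, b.amount
FROM   small_driver d, medium_tab m, big_tab b
WHERE  d.id = m.d_id
AND    m.id = b.m_id;
```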
6. Accessing Remote Tables -- Remote tables have a number of performance implications:
If all of the tables from which you are selecting are on the same remote database, then Oracle will get the
remote database to run the query and send back the results. The only problems are the cost of shipping
the results back, and the fact that you cannot see how the remote database is executing the query in
Explain Plan. Make sure that the whole query is being performed remotely - the Explain Plan output
should just have the one step - SELECT STATEMENT REMOTE - fully remote statement
If some tables are local - or you are joining tables over two or more remote databases - then Oracle will
need to ship all of the data to one database before it can perform a join.
If you want to use an Indexed Nested Loop (low volume) join, then the inner (second) table cannot be
remote - if it is, you will get a full table scan and a hash join. You could use the DRIVING_SITE hint to make
the remote database the driving site of the query. Then the rows of the outer (first) table will be sent over
there, and an index can be used on the inner table. The remote database will then send back the results.
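A sketch of the hint (the table names and database link are invented): DRIVING_SITE asks Oracle to execute the join at the remote database, shipping the local rows there instead of pulling the remote table here:

```sql
SELECT /*+ DRIVING_SITE(r) */ c.name, r.total
FROM   customers c,          -- local table
       big_remote@remdb r    -- remote table over database link remdb
WHERE  c.cust_id = r.cust_id;
```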
7. Triggers
8. Referential Integrity Constraints --
Inserting and Updating:
When a table contains foreign key constraints that reference a parent table, inserts and updates on a
foreign key column result in Oracle running a SQL in the background that verifies the existence of the
parent row. Even when you INSERT or UPDATE many rows in a single statement, the parent rows are
still checked row-by-row and key-by-key. For example, inserting 1000 rows into a table with 6 foreign keys
will execute an additional 6000 SQL statements behind the scenes.
There is no way to make Oracle perform these foreign key checks in bulk (eg. insert all rows, and then
verify the foreign keys in one SELECT against the parent table).
One option is to disable the constraint before you perform the bulk-DML, and then re-enable it afterwards.
The re-enable process will validate all rows together rather than individually, but it must validate the entire
table, not just the rows you inserted/updated. Clearly this is a bad option if you are updating just a few
hundred rows. The break-even point depends on the table, the environment, and the load; but it is
commonly faster to disable/enable foreign keys if you are going to insert/update more than 1%-10% of the
total rows in the table.
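The disable/re-enable approach can be sketched as follows (the table and constraint names are placeholders); remember that re-enabling validates the entire table:

```sql
ALTER TABLE orders DISABLE CONSTRAINT orders_cust_fk;

-- ... perform the bulk INSERT/UPDATE here ...

-- Re-enabling validates all rows in one pass over the whole table
ALTER TABLE orders ENABLE VALIDATE CONSTRAINT orders_cust_fk;
```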
Deleting:
A Foreign Key constraint can be set up so that when a row is deleted from the parent table, all dependent
rows are deleted from the child table, or the foreign keys in the child table are set to NULL. Even if ON
DELETE CASCADE or ON DELETE SET NULL is not used, Oracle must still check that the child table
contains no dependent rows. If you have a DELETE that is running slowly, check for foreign keys.
If you find that your table is referenced by another, then there may be one of several problems:
There is no index on the child table foreign key, forcing Oracle to do a full table scan.
The Foreign Key is declared with ON DELETE CASCADE (or ON DELETE SET NULL), and there are
a lot of rows to delete.
The Foreign Key is declared with ON DELETE CASCADE, and the child table has further tables
referencing its primary key. One of the first two problems may apply to the child-child table.
A child table may have a Delete Trigger.
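The first problem above, a missing index on the child table's foreign key, is the most common and has the simplest fix (names are placeholders):

```sql
-- Without this index, every DELETE on the parent forces a full
-- scan of the child table to check for dependent rows
CREATE INDEX orders_cust_fk_i ON orders (customer_id);
```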
9. Locking Problems --
If you are running an INSERT/UPDATE/DELETE or a DDL statement (like TRUNCATE, or refreshing a
Materialized View), the problem may not lie with your SQL. You may be locked out by another session or
by the lack of a specific resource.
While the SQL is running, query its session from another session and check the Wait Event. If the Wait
Event mentions Lock, Latch, or Enq (enqueue), then your SQL is probably doing nothing but waiting.
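A minimal sketch of that check (the SID value is a placeholder; these v$session columns exist in 10g and later):

```sql
-- From a second session, see what the slow session is waiting on
SELECT sid, event, state, seconds_in_wait
FROM   v$session
WHERE  sid = 123;
```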
10. Chaining --
Oracle stores table rows in blocks. After a row is inserted into a table, Oracle checks how much space is
left in the block. If the amount of space remaining is less than a specified percentage (PCTFREE), then
the block is removed from the free list and new rows will be inserted into the next available block. The free
space left in each block is used for UPDATEs.
Typical causes of row growth: updating a NULL column to a NOT NULL value, and growing VARCHAR2,
NUMBER, and DATE values (consider increasing PCTFREE).
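The PCTFREE change can be sketched as follows (the table name is a placeholder); note that it only affects blocks used after the change:

```sql
-- Reserve more free space per block for row growth
ALTER TABLE orders PCTFREE 20;

-- Optionally rewrite existing rows into new blocks
-- (indexes become UNUSABLE and must be rebuilt afterwards)
ALTER TABLE orders MOVE;
```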
11. The High Water Mark problem --
Oracle tables have an attribute called the High Water Mark. This is the highest numbered block in the
table that has ever contained a row. When an INSERT statement cannot find a free block to populate, it
goes straight to the High Water Mark and allocates a new block.
When rows are deleted, the High Water Mark does not come down. This is not normally a problem,
except for Full Table Scans, because a Full Table Scan must search every block beneath the High Water
Mark for data.
If a table contains 1000 rows, then you would expect a full table scan to be fairly speedy. But if it once
contained 100000 rows, you would be wrong: the full table scan of the 1000 remaining rows will take
roughly the same amount of time as a full table scan of all 100000.
If the table is continually growing and shrinking (through regular deletes), then you should look for a
different way to manage it. Perhaps it can be partitioned, so that instead of deleting rows you truncate or
drop partitions.
If the table is not going to grow back to full size in the medium term, then it should be dropped and rebuilt
by the DBA to move the High Water Mark back down to an appropriate level.
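Two ways of lowering the High Water Mark, sketched with a placeholder table name (SHRINK SPACE is available from 10g and requires an ASSM tablespace):

```sql
-- Option 1: shrink the segment in place
ALTER TABLE big_hist ENABLE ROW MOVEMENT;
ALTER TABLE big_hist SHRINK SPACE;

-- Option 2: rebuild the table; its indexes become UNUSABLE
-- and must be rebuilt afterwards
ALTER TABLE big_hist MOVE;
```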
12. Fragmented Indexes --
Since (b-tree) indexes are stored in sorted order, updating an indexed column usually involves deleting
the entry from one position in the index and inserting it into another. If this happens often enough, the
index can become riddled with holes, and range scans on the index will read more blocks than they need
to because each block is under-utilised.
This can also happen when many rows are deleted; a common occurrence in a fast-refresh Materialized
View. The only solution is to rebuild the index when it becomes fragmented.
ALTER INDEX my_index REBUILD [PARTITION part_name]
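Before rebuilding, it is worth checking how fragmented the index actually is; one sketch (note that VALIDATE STRUCTURE locks the index while it runs):

```sql
ANALYZE INDEX my_index VALIDATE STRUCTURE;

-- INDEX_STATS holds one row for the index just analysed;
-- a high DEL_LF_ROWS relative to LF_ROWS suggests wasted space
SELECT lf_rows, del_lf_rows, height
FROM   index_stats;
```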
