What is performance?
Subjective measures
- Slow response time on specific queries
- Overall sluggish database response
Objective measures
- CPU consumption
- I/O consumption
  - Physical I/O: block reads from disk
  - Logical I/O: block reads from the buffer cache
- Optimizer query cost
[Diagram: buffer cache — "Blocks accessed by index start here"; "Blocks accessed by FTS start here if CACHEd"]
A practical example
- Table has 100,000 rows
- Average row length is 100 bytes
- All blocks are 90% full (PCTFREE=10)
- Table is stored in 1,790 blocks
- A full table scan requires 1,790 logical reads
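The block count can be sanity-checked with a little arithmetic. A minimal sketch, assuming 56 rows fit in each 90%-full block (a figure chosen to be consistent with the slide's numbers rather than derived from Oracle's storage formulas):

```python
import math

num_rows = 100_000
rows_per_block = 56   # assumed: ~100-byte rows, blocks 90% full (PCTFREE=10)

# Blocks that actually hold rows; this also matches the low
# clustering factor of 1,786 quoted later in the deck.
data_blocks = math.ceil(num_rows / rows_per_block)
print(data_blocks)    # 1786

# A full table scan reads every allocated block up to the high-water mark,
# slightly more than the blocks holding rows: 1,790 per the slide.
```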
[Diagram: unique index access for WHERE id=1345 — the index entry contains the ROWID of the row in its data block, so one index probe locates the row among the table's data blocks]
Nonunique index
- The table contains a code value that could represent a customer type code
- A nonunique index is created on the code value
- The table has 1,000 rows corresponding to each of the 100 code values
- Index height is 2: 1 branch block, 307 leaf blocks
- The number of logical reads per access is not obvious
[Diagram: nonunique index access for WHERE type=5 — high clustering factor: few rows having the same key value reside in the same data block]
[Diagram: the low clustering factor case — all rows having the same key value reside in a few adjacent data blocks]
How do we determine that the second index will perform better than the first?
Clustering factor
- Helps to understand the efficiency of nonunique indexes
- Available in the clustering_factor column of dba_indexes and dba_ind_partitions after statistics are gathered
- Ranges between the number of table data blocks (best case) and the number of table rows (worst case)
- The first nonunique index case is the worst case: clustering factor = 100,000 = number of rows (the high clustering factor case)
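The relationship between clustering factor and per-access cost can be sketched with the textbook heuristic for an index range scan: branch-block reads plus leaf-block reads plus table-block reads, with the latter two scaled by the fraction of keys selected. This is a simplification, not the exact model behind the per-access figures in the index summary (503 and 53); it produces numbers of the same order with the same ranking.

```python
import math

def index_access_cost(blevel, leaf_blocks, clustering_factor, selectivity):
    """Textbook index range-scan cost heuristic (a simplification):
    branch reads + leaf-block reads + table-block reads."""
    return (blevel
            + math.ceil(leaf_blocks * selectivity)
            + math.ceil(clustering_factor * selectivity))

sel = 1 / 100  # one of 100 distinct code values is selected

# High clustering factor: every row fetch tends to hit a new data block.
high_cf = index_access_cost(blevel=1, leaf_blocks=307,
                            clustering_factor=100_000, selectivity=sel)

# Low clustering factor: rows with the same key sit in adjacent blocks.
low_cf = index_access_cost(blevel=1, leaf_blocks=307,
                           clustering_factor=1_786, selectivity=sel)

print(high_cf, low_cf)  # 1005 23 -- the low-CF index is roughly 40x cheaper
```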
Index summary
Index                      Height  Branch blocks  Leaf blocks  Distinct keys  LR/access  Rows/access  Clustering factor
Unique index               2       1              222          100,000        3          1            1,786
Nonunique index (high CF)  2       1              307          100            503        1,000        100,000
Nonunique index (low CF)   2       1              307          100            53         1,000        1,786
Access summary
Logical Reads required to get a specific number of rows from the table
Percent of table  Rows retrieved  Full Table Scan  Unique Index  Nonunique Index (high CF)  Nonunique Index (low CF)
0.001             1               1,790            3             N/A                        N/A
1                 1,000           1,790            3,000         503                        53
5                 5,000           1,790            15,000        2,515                      265
10                10,000          1,790            30,000        5,030                      530
25                25,000          1,790            75,000        12,575                     1,325
50                50,000          1,790            150,000       25,150                     2,650
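The table follows directly from the per-access costs in the index summary (3 logical reads per row via the unique index; 503 or 53 logical reads per code value via the nonunique indexes), which also makes the break-even points against the full table scan explicit. A minimal sketch:

```python
TABLE_ROWS = 100_000
FTS_READS = 1_790        # a full table scan always reads every block
ROWS_PER_KEY = 1_000     # 100 distinct code values, 1,000 rows each

def unique_index_reads(rows):
    return rows * 3                                 # 3 logical reads per row

def nonunique_index_reads(rows, reads_per_key):
    return (rows // ROWS_PER_KEY) * reads_per_key   # one range scan per key

for pct in (1, 5, 10, 25, 50):
    rows = TABLE_ROWS * pct // 100
    print(pct, rows, FTS_READS,
          unique_index_reads(rows),
          nonunique_index_reads(rows, 503),   # high clustering factor
          nonunique_index_reads(rows, 53))    # low clustering factor

# Break-even points versus the full table scan, as a percent of the table:
print(100 * FTS_READS / 3 / TABLE_ROWS)                   # unique index: ~0.6%
print(100 * FTS_READS / 503 * ROWS_PER_KEY / TABLE_ROWS)  # high CF: ~3.6%
print(100 * FTS_READS / 53 * ROWS_PER_KEY / TABLE_ROWS)   # low CF: ~33.8%
```

The low-CF break-even near 34% lines up with the 35% rule of thumb in the summary slide.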
optimizer_mode
- ALL_ROWS and CHOOSE (the default) modes are becoming better at determining when to perform full-table scans
- RULE and FIRST_ROWS favor index access methods over full-table scans
- Recommendation: use the default value and correct the few errant queries that result using query hints
- It is imperative to periodically collect statistics on tables and indexes for the optimizer to determine the optimal execution plan
[Diagram: nested loops join producing the result set; step 5: repeat steps 2-4 for all rows in the large row source]
Example join
- tax_info table has 1,200 rows, 21 blocks
- orders table has 600,000 rows, 9,000 data blocks
- The query returns 4,000 rows
For every row retrieved from orders, 3 logical reads are required to retrieve the row in tax_info by the unique index. 4,000 x 3 = 12,000 logical reads to access tax_info
- tax_info appears first in the execution plan, so it is the table that is bitmapped
- One full-table scan (21 logical reads) is required to retrieve all rows from tax_info
- Compare to 12,000 logical reads in the nested loops join case
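The comparison between the two join strategies is simple arithmetic over the figures given for the two tables:

```python
returned_rows = 4_000     # rows produced by the join
reads_per_probe = 3       # unique-index lookup into tax_info, per the slide
tax_info_blocks = 21      # all of tax_info fits in 21 blocks

# Nested loops: one indexed probe into tax_info per joined row.
nested_loops_reads = returned_rows * reads_per_probe
print(nested_loops_reads)                    # 12000

# Scanning tax_info once instead touches each of its blocks exactly once.
full_scan_reads = tax_info_blocks
print(nested_loops_reads / full_scan_reads)  # ratio of logical reads, ~571
```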
[Diagram: parallel execution — a server process coordinating multiple PX slave processes]
Summary
- Full-table scans may be the most efficient access method when between 5% and 35% of rows are retrieved, depending upon available indexes and physical structure
- Full-table scans will usually be the most efficient access method when more than 35% of rows are retrieved
- Significant performance gains can be realized by employing features that can effectively utilize full-table scans
- Tune database parameters for optimal full-table scan and feature performance
- Test parameters and queries before migrating!