
Oracle SQL tuning goals

Oracle SQL tuning is a phenomenally complex subject. Entire books have been written about the nuances
of Oracle SQL tuning; however, there are some general guidelines that every Oracle DBA follows in order
to improve the performance of their systems. Again, see the book "Oracle Tuning: The Definitive
Reference", for complete details.

The goals of SQL tuning focus on improving the execution plan to fetch the rows with the smallest number
of database "touches" (LIO buffer gets and PIO physical reads).

• Remove unnecessary large-table full-table scans—Unnecessary full-table scans cause a huge
amount of unnecessary I/O and can drag down an entire database. The tuning expert first evaluates
the SQL based on the number of rows returned by the query. If the query returns less than 40
percent of the table rows, it needs tuning. The most common tuning remedy for unnecessary full-
table scans is adding indexes. Standard b-tree indexes can be added to tables, and bitmapped and
function-based indexes can also eliminate full-table scans. In some cases, an unnecessary full-
table scan can be forced to use an index by adding an index hint to the SQL statement.

• Cache small-table full-table scans—In cases where a full-table scan is the fastest access method,
the administrator should ensure that a dedicated data buffer is available for the rows. In Oracle7,
you can issue "alter table xxx cache". In Oracle8 and beyond, the small table can be cached by
forcing it into the KEEP pool.

• Verify optimal index usage—This is especially important when using the rule-based optimizer.
Oracle sometimes has a choice of indexes, and the tuning professional must examine each index
and ensure that Oracle is using the proper index.

• Materialize your aggregations and summaries for static tables—One feature of the Oracle 10g
SQLAccess Advisor is that it recommends new indexes and suggests materialized views.
Materialized views pre-join tables and pre-summarize data, a real silver bullet for data mart
reporting databases where the data is only updated daily. Again, see the book "Oracle Tuning: The
Definitive Reference" for complete details on SQL tuning with materialized views.
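As a sketch of the small-table caching goal above (the state_lookup table name is illustrative, and the KEEP pool must already be configured via buffer_pool_keep or db_keep_cache_size):

```sql
-- Oracle8 and beyond: pin a small, frequently scanned table in the KEEP pool
alter table state_lookup storage (buffer_pool keep);

-- Oracle7 equivalent: mark the table so full-scanned blocks are cached
alter table state_lookup cache;
```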

These are the goals of SQL tuning in a nutshell. However, they are deceptively simple, and to effectively
meet them, we need to have a thorough understanding of the internals of Oracle SQL. Let's begin with an
overview of the Oracle SQL optimizers.

Oracle SQL optimizers

One of the first things the Oracle DBA looks at is the default optimizer mode for the database. The Oracle
initialization parameters offer many cost-based optimizer modes, as well as the deprecated yet useful rule-
based mode, which can be invoked with the rule hint.

The cost-based optimizer uses “statistics” that are collected from the table using the “analyze table”
command. Oracle uses these metrics about the tables in order to intelligently determine the most efficient
way of servicing the SQL query. It is important to recognize that in many cases, the cost-based optimizer
may not make the proper decision in terms of the speed of the query. The cost-based optimizer is constantly
being improved, but there are still many cases in which the rule-based optimizer will result in faster Oracle
SQL execution.

A strategic plan for Oracle SQL tuning

Many people ask where they should start when tuning Oracle SQL. Tuning Oracle SQL is like fishing: you
must first fish in the Oracle library cache to extract SQL statements and rank the statements by their
amount of activity.

Tuning SQL statements again... 1

Step 1—Identify high-impact SQL

The SQL statements will be ranked according to the number of executions and will be tuned in this order. The
executions column of the v$sqlarea view and the stats$sql_summary or dba_hist_sql_summary tables
can be used to locate the most frequently used SQL. Note that we can rank SQL statements by:

• Rows processed—Queries that process a large number of rows will have high I/O and may also
have impact on the TEMP tablespace.

• Buffer gets—High buffer gets may indicate a resource-intensive query.

• Disk reads—High disk reads indicate a query that is causing excessive I/O.

• Memory KB—The memory allocation of a SQL statement is useful for identifying statements that
are doing in-memory table joins.

• CPU secs—This identifies the SQL statements that use the most processor resources.

• Sorts—Sorts can be a huge slowdown, especially if they’re being done on disk in the TEMP
tablespace.

• Executions—The more frequently executed SQL statements should be tuned first, since they will
have the greatest impact on overall performance.
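The ranking described above can be sketched with a query against v$sqlarea; the top-10 cutoff here is an arbitrary choice, and the same pattern works when ordering by disk_reads or buffer_gets instead:

```sql
-- Top 10 SQL statements by execution count
select *
from (select sql_text, executions, disk_reads, buffer_gets
      from v$sqlarea
      order by executions desc)
where rownum <= 10;
```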

Step 2—Determine the execution plan for SQL

As each SQL statement is identified, it will be “explained” to determine its existing execution plan. There
are a host of third-party tools on the market that show the execution plan for SQL statements. The most
common way of determining the execution plan for a SQL statement is to use Oracle's explain plan utility.
By using explain plan, the Oracle DBA can ask Oracle to parse the statement and display the execution
plan without actually executing the SQL statement.

To see the output of an explain plan, you must first create a “plan table.” Oracle provides a script in
$ORACLE_HOME/rdbms/admin called utlxplan.sql. Execute utlxplan.sql and create a public synonym for
the plan_table:

sqlplus > @utlxplan

Table created.

sqlplus > create public synonym plan_table for sys.plan_table;

Synonym created.

Most relational databases use an explain utility that takes the SQL statement as input, runs the SQL
optimizer, and outputs the access path information into a plan_table, which can then be interrogated to see
the access methods. Listing 1 explains a complex query against a database.


EXPLAIN PLAN
SET statement_id = 'RUN1'
INTO plan_table
FOR
SELECT 'T'||plansnet.terr_code, 'P'||detplan.pac1
   || detplan.pac2 || detplan.pac3, 'P1', sum(plansnet.ytd_d_ly_tm)
FROM plansnet, detplan
WHERE
   plansnet.mgc = detplan.mktgpm
AND
   detplan.pac1 in ('N33','192','195','201','BAI')
GROUP BY 'T'||plansnet.terr_code, 'P'||detplan.pac1 || detplan.pac2 || detplan.pac3;

This syntax is piped into the SQL optimizer, which will analyze the query and store the plan information in
rows in the plan table identified by RUN1. Please note that the query will not execute; it will only create
the internal access information in the plan table. The plan table contains the following fields:

• operation—The type of access being performed, usually table access, table merge, sort, or index access

• options—Modifiers to the operation, specifying a full table, a range table, or a join

• object_name—The name of the table being used by the query component

• id—The identifier for the query component

• Parent_ID—The parent of the query component. Note that several query components may have
the same parent.

Now that the plan_table has been created and populated, you may interrogate it to see your output by
running the query shown in Listing 2.

plan.sql - displays contents of the explain plan table

SELECT lpad(' ',2*(level-1))||operation operation,
       options,
       object_name
FROM plan_table
START WITH id = 0 AND statement_id = 'RUN1'
CONNECT BY prior id = parent_id
AND statement_id = 'RUN1';

Listing 3 shows the output from the plan table shown in Listing 1. This is the execution plan for the
statement and shows the steps and the order in which they will be executed.

SQL> @list_explain_plan

OPERATION                      OPTIONS        OBJECT_NAME
------------------------------ -------------- -----------------------------
(plan output showing, among other steps, TABLE ACCESS FULL on PLANSNET)

From this output, we can see the dreaded TABLE ACCESS FULL on the PLANSNET table. To diagnose
the reason for this full-table scan, we return to the SQL and look for any plansnet columns in the WHERE
clause. There, we see that the plansnet column called “mgc” is being used as a join column in the query,
indicating that an index is necessary on plansnet.mgc to alleviate the full-table scan.
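A minimal fix for the full-table scan diagnosed above would be a b-tree index on the join column (the index name here is illustrative):

```sql
create index plansnet_mgc_idx on plansnet (mgc);
```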

While the plan table is useful for determining the access path to the data, it does not tell the entire story.
The configuration of the data is also a consideration. The SQL optimizer is aware of the number of rows in
each table (the cardinality) and the presence of indexes on fields, but it is not aware of data distribution
factors such as the number of expected rows returned from each query component.

Step 3—Tune the SQL statement

For those SQL statements that possess a suboptimal execution plan, the SQL will be tuned by one of the
following methods:

• Adding SQL “hints” to modify the execution plan

• Rewriting the SQL with global temporary tables

• Rewriting the SQL in PL/SQL. For certain queries, this can result in more than a 20x performance
improvement. The SQL would be replaced with a call to a PL/SQL package that contains a stored
procedure to perform the query.

Using hints to tune Oracle SQL

Among the most common tools for tuning SQL statements are hints. A hint is a directive that is added to the
SQL statement to modify the access path for a SQL query.

Oracle publishes many dozens of SQL hints, and hints have become increasingly complicated through the
various releases of Oracle.

Note: Hints are best used only for debugging; in production, you should adjust your optimizer statistics so that the CBO
replicates the hinted execution plan. Let’s look at the most common hints used to improve tuning:

• Mode hints: first_rows_10, first_rows_100

• Oracle leading and ordered hints (see also tuning table join order with histograms)

• Dynamic sampling: dynamic_sampling

• Oracle SQL undocumented tuning hints (for experts only)

• The cardinality hint

Self-order the table joins - If you find that Oracle is joining the tables together in a suboptimal order,
you can use the ORDERED hint to force the tables to be joined in the order that they appear in the FROM
clause.
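A sketch of the ORDERED hint, reusing the book and sales tables that appear in the examples later in this article:

```sql
-- Force the join order: book is joined first, then sales
select /*+ ordered */
       b.book_key, s.book_key
from   book b, sales s
where  b.book_key = s.book_key;
```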

Try a first_rows hint. Oracle has two cost-based optimizer modes, first_rows and all_rows. The
first_rows mode executes with the goal of returning rows as soon as possible, whereas the all_rows mode is
designed to minimize the total resource consumption of the query before returning rows.

Tuning SQL statements again... 4

For example (using the standard emp demonstration table):

select /*+ first_rows */ * from emp;

A case study in SQL tuning

One of the historic problems with SQL involves formulating SQL queries. Simple queries can be written in
many different ways, each variant of the query producing the same result—but with widely different access
methods and query speeds.

For example, a simple query such as “What students received an A last semester?” can be written in three
ways, as shown below, each returning an identical result.

A standard join:

SELECT *
FROM STUDENT, REGISTRATION
WHERE
   STUDENT.student_id = REGISTRATION.student_id
AND
   REGISTRATION.grade = 'A';

A nested query:

SELECT *
FROM STUDENT
WHERE student_id IN
   (SELECT student_id
    FROM REGISTRATION
    WHERE grade = 'A');

A correlated subquery:

SELECT *
FROM STUDENT
WHERE 0 <
   (SELECT count(*)
    FROM REGISTRATION
    WHERE grade = 'A'
    AND student_id = STUDENT.student_id);

Let’s wind up with a review of the basic ways of formulating a SQL query and some tips for writing more
efficient SQL.

Tips for writing more efficient SQL

Space doesn’t permit me to discuss every detail of Oracle tuning, but I can share some general rules for
writing efficient SQL in Oracle regardless of the optimizer that is chosen. These rules may seem simplistic
but following them in a diligent manner will generally relieve more than half of the SQL tuning problems
that are experienced:

• Rewrite complex subqueries with temporary tables - Oracle created the global temporary table
(GTT) and the SQL WITH operator to help divide-and-conquer complex SQL subqueries
(especially those with WHERE clause subqueries, SELECT clause scalar subqueries, and
FROM clause in-line views). Tuning SQL with temporary tables (and materializations in the
WITH clause) can result in amazing performance improvements.
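As a sketch of the divide-and-conquer approach, a subquery can be materialized once in a WITH clause and then joined; the sales_summary name is illustrative:

```sql
with sales_summary as (
   select book_key, count(*) as sales_count
   from   sales
   group  by book_key
)
select b.book_key, s.sales_count
from   book b, sales_summary s
where  b.book_key = s.book_key;
```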

• Use MINUS instead of NOT IN subqueries - Some say that using the MINUS operator instead of
NOT IN and NOT EXISTS will result in a faster execution plan.
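Using the book and sales tables from the example below, “books with no sales” can be phrased with MINUS:

```sql
-- Every book_key, minus those that appear in sales
select book_key from book
minus
select book_key from sales;
```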

• Use SQL analytic functions - The Oracle analytic functions can do multiple aggregations (e.g.,
rollup by cube) with a single pass through the tables, making them very fast for reporting SQL.
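A small sketch of an analytic function, computing a per-book total alongside each detail row in one pass (the quantity column is an assumption):

```sql
select book_key,
       quantity,
       sum(quantity) over (partition by book_key) as total_for_book
from   sales;
```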

• Re-write NOT IN and NOT EXISTS subqueries as outer joins - In many cases of NOT
queries (but ONLY where the join column is defined as NOT NULL), you can re-write the uncorrelated
subqueries into outer joins with IS NULL tests. The following is a non-correlated sub-query, but it
can be re-written as an outer join.

select book_key from book
where
   book_key NOT IN (select book_key from sales);

Below we combine the outer join with a NULL test in the WHERE clause without using a sub-query,
giving a faster execution plan.

select b.book_key from book b, sales s
where
   b.book_key = s.book_key(+)
and
   s.book_key IS NULL;

• Index your NULL values - If you have SQL that frequently tests for NULL, consider creating an
index on NULL values. To get around the fact that standard b-tree indexes exclude NULL column
values (i.e. where emp_name IS NULL cannot use an ordinary index), we can create a function-based index
using the NVL built-in SQL function so that the NULL columns become indexable.
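One common form of this technique, sketched here against a hypothetical emp table, indexes a substitute value in place of NULL; queries must repeat the same expression to use the index:

```sql
-- Function-based index: NULLs are stored as the literal 'n/a'
create index emp_name_null_idx on emp (nvl(emp_name, 'n/a'));

-- This predicate can now use emp_name_null_idx
select * from emp where nvl(emp_name, 'n/a') = 'n/a';
```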

• Leave column names alone - Never do a calculation on an indexed column unless you have a
matching function-based index (a.k.a. FBI). Better yet, re-design the schema so that common
WHERE clause predicates do not need transformation with a built-in function (BIF):

where salary*5 > :myvalue

where substr(ssn,7,4) = '1234'

where to_char(mydate,'MON') = 'JANUARY'
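When the predicate transformation cannot be designed away, a matching function-based index lets the transformed predicate use an index; the emp table and index name here are illustrative:

```sql
create index emp_ssn4_idx on emp (substr(ssn, 7, 4));

-- This predicate can now use emp_ssn4_idx instead of a full-table scan
select * from emp where substr(ssn, 7, 4) = '1234';
```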

• Avoid the use of NOT IN or HAVING - Instead, a NOT EXISTS subquery may run faster (when appropriate).

• Avoid the LIKE predicate - Always replace a "like" with an equality, when appropriate.

• Never mix data types - If a WHERE clause column predicate is numeric, do not use quotes.
For char index columns, always use quotes. The following are mixed data type predicates:

where cust_nbr = '123'

where substr(ssn,7,4) = 1234

• Use decode and case - Performing complex aggregations with the “decode” or "case" functions
can minimize the number of times a table has to be selected.
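As a sketch, a single pass over the sales table (the store_key and quantity columns are assumptions) can produce several conditional aggregates at once instead of querying the table repeatedly:

```sql
select sum(case when store_key = 'S1' then quantity else 0 end) as s1_sales,
       sum(case when store_key = 'S2' then quantity else 0 end) as s2_sales,
       count(*)                                                 as all_rows
from   sales;
```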

• Don't fear full-table scans - Not all OLTP queries are optimal when they use indexes. If your
query will return a large percentage of the table rows, a full-table scan may be faster than an index
scan. This depends on many factors, including your configuration (values for
db_file_multiblock_read_count and db_block_size), query parallelism, and the number of table/index
blocks in the buffer cache.

• Use those aliases - Always use table aliases when referencing columns.

Recognizing Bottlenecks
Effective operation of the Oracle database depends on an efficient and unconstricted flow
of SQL and/or data among user processes, Oracle processes, Oracle shared memory, and
disk structures; Figure 2 illustrates some of these process flows. To understand process
flows within an Oracle instance, consider this short SQL transaction, which is illustrated
in Figure 2:
select * from employees
where employee_id=:1
for update of salary;
update employees
set salary=:2
where employee_id=:1;

The numbered labels in Figure 2 correspond to these activities:

1. The client program (SQL*Plus, Oracle Power Objects, or some other tool) sends
the SELECT statement to the server process.


2. The server process looks in the shared pool for a matching SQL statement. If none
is found, the server process will parse the SQL and insert the SQL statement into
the shared pool. Parsing the SQL statement requires CPU and inserting a new
statement into the shared pool requires a latch, an Oracle internal lock that
prevents processes from concurrently updating the same area within the SGA.
3. The server process looks in the buffer cache for the data blocks required. If found,
the data block must be moved to the most recently used end of the Least
Recently Used (LRU) list. This too requires a latch.
4. If the block cannot be found in the buffer cache, the server process must fetch it
from the disk file, which will require a disk I/O. A latch must be acquired before
the new block can be moved into the buffer cache.
5. The server process returns the rows retrieved to the client process, which may
involve a network or communications delay.
6. When the client issues the UPDATE statement, the process of parsing the SQL and
retrieving the rows to be updated must occur. The update statement then changes
the relevant blocks in shared memory and also makes entries in the rollback segment.
7. The update statement will also make an entry in the redo log buffer that records
the transaction details.
8. The database writer background process copies modified blocks from the buffer
cache to the database files. The Oracle session performing the update needn't wait
for this to occur.
9. When the COMMIT statement is issued, the log writer process must copy the
contents of the redo log buffer to the redo log file. The COMMIT statement will not
return control to the Oracle session issuing the commit until this write is complete.
10. If running in ARCHIVELOG mode, the archiver process will copy full redo logs to
the archive destination. A redo log will not be eligible for reuse until it has been
archived.
11. At regular intervals, or when a redo log switch occurs, Oracle performs a
checkpoint. A checkpoint requires that all modified blocks in the buffer cache be
written to disk. A redo log file cannot be reused until the checkpoint completes.

The goal in tuning and monitoring the Oracle instance is to ensure that data and
instructions flow smoothly through and among the various processes and that none of
these flows becomes a bottleneck for the system as a whole. Monitoring scripts and tools
can be used to detect any blockages or inefficiencies in each of the processing steps
previously outlined.
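As one example of such a monitoring script, latch contention arising in steps 2 through 4 can be spotted in v$latch; high misses or sleeps relative to gets suggest a bottleneck:

```sql
select name, gets, misses, sleeps
from   v$latch
order  by sleeps desc;
```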
