The process of locating opportunities for
indexing is an easy and rewarding activity. Because most SQL
developers do not check whether indexes are available, there are many
opportunities for speeding up queries by adding indexes. Better still,
adding new indexes is easy and requires no changes to the existing SQL
(unless you are using optimizer plan stability).
In general, searching for indexing
opportunities involves the following steps:
Find the large-table full-table scans
You can check the v$sqlarea view or the stats$sql_summary
table to identify tables that experience large-table full-table
scans. This is done by executing the access.sql script to
explain all of the SQL and generate the full-table scan report. This
report will show you the table names, the number of full-table
scans, and the number of blocks in the table. As a general rule,
tables with full-table scans that have more than 100 blocks warrant
further investigation.
Extract the SQL statement Once a table
name is identified as having large-table full-table scans, you can
extract the SQL that accesses these tables.
Add the index Add the index to the
database in a test environment. Remember, adding an index could
affect the execution plans of many SQL statements, and the index
should not be added into the production environment until the speed
improvement has been verified.
Evaluate the legitimacy of the full-table
scan Next, we test the number of data blocks returned from the
query to determine if the query would run faster with an index.
Test the execution speed If the
large-table full-table scan has a non-indexed column, create an
index on the column and time the SQL using the set timing on
command.
Predict the benefit to the whole database
Once the index is created in the test environment, you can use the
STATSPACK historical SQL table to see how the new index would
affect the whole database.
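The first step above can be sketched as a dictionary query. This is only an approximation of what the access.sql report produces, not the script itself; it assumes an Oracle release where the v$sql_plan view is available (9i and later), and uses the 100-block rule of thumb from the text:

```sql
-- A minimal sketch (not access.sql itself): find large tables that
-- appear as FULL scans in plans currently in the library cache.
-- Assumes Oracle 9i+, where v$sql_plan exists.
select p.object_owner,
       p.object_name,
       t.num_rows,
       t.blocks,
       count(*) as nbr_fts
from   v$sql_plan p,
       dba_tables t
where  p.operation  = 'TABLE ACCESS'
and    p.options    = 'FULL'
and    t.owner      = p.object_owner
and    t.table_name = p.object_name
and    t.blocks     > 100          -- rule-of-thumb threshold from the text
group by p.object_owner, p.object_name, t.num_rows, t.blocks
order by nbr_fts desc;
```

Tables surfacing at the top of this listing are the same candidates the full-table scan report would flag.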
The first step in looking for indexing
opportunities is to locate those large tables that experience frequent
full-table scans. Using the access.sql script from Chapter 5,
we can begin by observing the full-table scan report that was produced
by analyzing all of the SQL that was in the library cache. From this
report, we can check the number of rows returned from each SQL
statement to see if the large-table full-table scan is legitimate.
Mon Jan 29                                     full table scans and counts

Note that "C" indicates that the table is cached, and "K"
indicates that the table is in the KEEP pool.

  NUM_ROWS  C  K    BLOCKS   NBR_FTS
----------  -  -  --------  --------
 3,450,209  N      932,120     9,999
       434  N            8     7,355
    18,067  N        1,104     5,368
       476  N  K       192     2,087
        10  N  K        32
        20  N            8
Here, we must make a subjective judgment
about which tables warrant further investigation. As you know,
tables subject to small-table full-table scans are placed in the KEEP
pool, so we are only concerned with large tables.
In the preceding example, you see that the
pager and star_image tables consist of more than 1,000 data
blocks, and both have more than 5,000 full-table scans. These are clear
candidates for further investigation, and our next step is to extract
these SQL statements and evaluate the legitimacy of the full-table
scans.
Extract the SQL Statement
Once we have located a table that has
large-table full-table scans, the next step is to extract all SQL
statements that reference the table. If you are using access.sql
to explain all of the SQL in your library cache, you can run a query
against the v$sqlarea view to extract the SQL statements that
reference the table.
select sql_text
from   v$sqlarea
where  sql_text like '% pager %';
The set lines 2000; SQL*Plus command
will allow most SQL statements to be displayed, and you can easily cut
and paste the SQL statements into a file for further evaluation. Now
that we have the SQL, we must evaluate the legitimacy of the SQL
statements.
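Once a statement has been extracted, the timing test described in the steps above might look like the following sketch. The index name and the last_name column are hypothetical stand-ins; the real column comes from the WHERE clause of the extracted SQL, and this should only be run in the test environment:

```sql
set timing on;

-- Hypothetical candidate index on the column in the WHERE clause
create index pager_last_name_idx on pager(last_name);

-- Re-execute the extracted statement and compare the elapsed time
-- reported by SQL*Plus before and after the index exists
select *
from   pager
where  last_name = 'JONES';
```

If the elapsed time does not improve, the index can simply be dropped before it ever reaches production.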
Add the Index
You will next add the index that you think will
aid the query in your test environment. Remember, adding an index can
change the execution plans for many other SQL statements, so this is
always best done in the test environment. The main reason for creating
the index is to obtain the clustering_factor from
dba_indexes for the index. When comparing
the costs of a full-table scan and an index range scan, it is
important to know the clustering_factor of our index to the
table. As you know, a low value for clustering_factor (near the
number of blocks in the table) means that more sequential data rows
will be accessed in each block read (Figure 20-1).
Figure 20-1: An index with
a low clustering_factor
Conversely, a high clustering_factor,
where the value approaches the number of rows in the table (num_rows),
indicates that the rows are not in the same sequence as the index, and
additional I/O will be required for index range scans (Figure 20-2).
Figure 20-2: An index with
a high clustering_factor
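Reading the clustering_factor for the new index is a simple dictionary query. This is a sketch using pager, one of the tables flagged in the report, as the example (the upper-case literal assumes a normally-created table name):

```sql
-- Compare clustering_factor to blocks (good, near the block count)
-- and to num_rows (bad, rows out of index sequence).
select i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
from   dba_indexes i,
       dba_tables  t
where  i.owner      = t.owner
and    i.table_name = t.table_name
and    i.table_name = 'PAGER';
```

A clustering_factor near t.blocks favors the index range scan; one near t.num_rows suggests the full-table scan may be legitimate.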
Next, let’s look at how to evaluate the
legitimacy of a full-table scan.
This is an excerpt from "Oracle High-Performance SQL Tuning" by Donald
K. Burleson, published by Oracle Press.