Oracle Tips by Burleson
The LGWR process is tuned by sizing the
LOG_BUFFER parameter and by starting the checkpoint process on
non-Oracle8 versions of Oracle. An additional option on Oracle8 is the
LGWR_IO_SLAVES parameter, which is manually configured under
Oracle8.0 and automatically configured under Oracle8.1 if
DBWR_IO_SLAVES is set to greater than 1 or Oracle parallel query is
enabled. The default value for this parameter is 4.
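On Oracle8.0, for example, the parameter might be set explicitly in the init.ora file (the value shown is illustrative; under Oracle8.1 it is derived automatically as described above):

```
# Hypothetical init.ora entry for Oracle8.0; automatic under Oracle8.1
lgwr_io_slaves = 4
```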
Under Oracle8.1, a feature called incremental
checkpointing was provided. By setting the DB_BLOCK_MAX_DIRTY_TARGET
initialization parameter, you limit the number of blocks the recovery
process must read from disk during a recovery process.
DB_BLOCK_MAX_DIRTY_TARGET settings affect the speed of recovery and
may have a detrimental effect on normal runtime performance.
Oracle8 and Oracle8i Tuning Options
With later versions of ORACLE8 and ORACLE8i
came numerous new tuning areas and capabilities: use of histograms,
anti-joins, hash-joins, all of which can be used to improve the
performance of Oracle--not to mention using bitmapped indexes and
partitioned tables and indexes.
Histograms help optimize queries and other
actions against data that is non-uniformly distributed about a mean.
The common term for poorly distributed data is skewed data. In
particular, in earlier versions of ORACLE7, the cost-based optimizer
would go out to lunch if you handed it skewed data. There is a cost
associated with histograms, so they should be used only for badly
skewed data. Histograms are static and must be periodically renewed
just like table statistics.
Histograms should not be used when:
* All predicates on the column use bind variables.
* The column data is uniformly distributed.
* The column is not used in WHERE clauses of queries.
* The column is unique and is used only in equality predicates.
Histograms are created in “bands” of value
ranges. For example, if the data in your test_result table's
measurement column is skewed into six general ranges, then you would
want to create six histogram bands:
ANALYZE TABLE test_result COMPUTE STATISTICS FOR COLUMNS measurement SIZE 6;
If you know the exact number of keys, and the
value is less than 255, set the size to that value; otherwise, set it
to 255. Histogram statistics are stored in the DBA_, USER_, and ALL_
HISTOGRAMS views. Additional row statistics appear in the
USER_TAB_COLUMNS, ALL_TAB_COLUMNS, and DBA_TAB_COLUMNS views.
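For example, after analyzing a column you can inspect the resulting bucket endpoints through USER_HISTOGRAMS (the table and column names below match the earlier test_result example):

```sql
-- List histogram buckets for the measurement column of test_result
SELECT column_name, endpoint_number, endpoint_value
  FROM user_histograms
 WHERE table_name = 'TEST_RESULT'
   AND column_name = 'MEASUREMENT'
 ORDER BY endpoint_number;
```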
New Types of Joins
Two new types of joins became available in
late ORACLE7 and ORACLE8: ANTI-JOIN and HASH-JOIN.
The hash-join has nothing to do with hash
clusters or TABLE ACCESS HASH method. A hash-join compares two tables
in memory. The first table is full table scanned, and a hashing
function is applied to the data in memory. Then the second table is
full table scanned and the hashing function is used to compare the
values. Matching values are returned to the user. The user usually has
nothing to do with this process; it is completely
optimizer-controlled. However, it can be used only by the cost-based
optimizer. Generally, hash-joins will gain something for you only if
you are using parallel query. Typically, the optimizer will use
hash-joins for small tables that can be scanned quickly. To use
hash-joins, the HASH_JOIN_ENABLED initialization parameter must be set
to TRUE.
Several HASH parameters affect how hash-joins
are used. These are:
HASH_JOIN_ENABLED. Set to TRUE to use hash-joins.
HASH_AREA_SIZE. Large value reduces
cost of hash-joins, so they are used more frequently (set to half the
square root of the size of the smaller of the two objects, but not
less than 1 megabyte). Suggested range is between 8 and 32 megabytes.
Defaults to twice SORT_AREA_SIZE.
HASH_MULTIBLOCK_IO_COUNT. Large value
reduces cost of hash-joins, so they are used more frequently.
Suggested size is 4.
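As a sketch, these parameters might appear together in an init.ora file as follows (the values are illustrative, chosen from the suggested ranges above, not recommendations for any specific system):

```
# Hypothetical init.ora settings for hash-join tuning
hash_join_enabled        = true
hash_area_size           = 16777216   # 16 MB, within the suggested 8-32 MB range
hash_multiblock_io_count = 4          # suggested size from the text
```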
To use anti-joins, you must set the
initialization parameter ALWAYS_ANTI_JOIN to HASH or MERGE. This
causes the NOT IN clause in queries to always be resolved using a
parallel-hash or parallel-merge anti-join. If the ALWAYS_ANTI_JOIN
parameter is set to anything other than HASH or MERGE, the NOT IN
clause will be evaluated as a correlated subquery. You can force
Oracle to perform a specific query as an ANTI-JOIN by using the
MERGE_AJ or HASH_AJ hints.
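For instance, to force a hash anti-join for a single statement rather than setting ALWAYS_ANTI_JOIN, the hint is placed inside the NOT IN subquery (the tables and columns here are the standard EMP/DEPT demo schema, used only for illustration):

```sql
-- Resolve the NOT IN as a hash anti-join for this statement only
SELECT ename
  FROM emp
 WHERE deptno NOT IN
       (SELECT /*+ HASH_AJ */ deptno
          FROM dept
         WHERE loc = 'DALLAS');
```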
Multitier Statement Tuning
More and more, Oracle is being used in
multitier client/server applications. If you don’t take care when
designing the queries used in these client/server applications, your
performance will be terrible. You still want the server to do the
processing of the result set and just pass the result set back to the
client. An improperly designed query can return the entire contents of
the source tables to your PC and expect the PC to process the data,
something you don’t want in most situations. The bane of many networks
is excessive packet traffic soaking up bandwidth. To prevent bandwidth
absorption, you want to encapsulate SQL statements as much as
possible. There are some general rules to follow when designing
applications for client/server applications:
1. Push processing to the server, pull results
2. Use views to prebuild queries.
3. Use MTS only when your number of
connections exceeds 50 to 100 users.
4. Use PL/SQL blocks, stored procedures, and
functions on both client and server.
How we accomplish these steps is generally
easy, although for specific applications it can be complex and in an
ad hoc environment impossible. Let’s examine some general techniques.
Push Processing to the Server, Pull Results
The first rule--push processing to the server,
pull results back--is accomplished through the use of views and PL/SQL.
When using views, if you issue:
SELECT * FROM EMP WHERE DEPTNO=10;
in an ad hoc query, chances are the contents
of EMP may get passed back to you to be processed. However, if a
server view is created:
CREATE VIEW EMP10 AS SELECT * FROM EMP WHERE DEPTNO=10;
and then you issue:
SELECT * FROM EMP10;
you get the same result set, but it is
processed on the server and passed back to you.
When using PL/SQL encapsulation, if you have
several related commands, it will be best to encapsulate them in a
PL/SQL block rather than issue each individual command. A PL/SQL block
is treated as a single statement by NET8, so a single packet set is
used to transfer it to the server, greatly reducing network travel.
Let's look at a status report that selects several statistics into a
temporary table and then generates a report. The script to run this
report looks like this:
INSERT INTO dba_temp
SELECT name, value, 1 FROM v$sysstat
WHERE name='consistent gets';
INSERT INTO dba_temp
SELECT name, value, 2 FROM v$sysstat
WHERE name='physical reads';
INSERT INTO dba_temp
SELECT name, value, 3 FROM v$sysstat
WHERE name='db block gets';
INSERT INTO dba_temp
SELECT 'Hit Ratio', ((a.value+b.value)-c.value)/(a.value+b.value), 4
FROM v$sysstat a, v$sysstat b, v$sysstat c
WHERE a.name='consistent gets' and
b.name='db block gets' and
c.name='physical reads';
SELECT * FROM DBA_TEMP;
In it we have five calls to the database, five parses, and five
statements stored in the shared pool. Not very efficient; and the
network round trips can get significant. Let’s see if a PL/SQL routine
to perform this (at least the initial processing) can be written:
CREATE OR REPLACE PROCEDURE hitratio AS
  CURSOR get_param (stat_name VARCHAR2) IS
    SELECT value FROM v$sysstat WHERE name=stat_name;
  param    VARCHAR2(64);
  con_gets NUMBER;
  db_gets  NUMBER;
  p_reads  NUMBER;
  h_ratio  NUMBER;
  PROCEDURE write_it (stat_name VARCHAR2, p_value NUMBER,
                      reporder INTEGER) IS
  BEGIN
    INSERT INTO dba_temp
    VALUES (stat_name, p_value, reporder);
  END;
BEGIN
  param := 'consistent gets';
  OPEN get_param(param);
  FETCH get_param INTO con_gets;
  CLOSE get_param;
  write_it(param, con_gets, 1);
  param := 'db block gets';
  OPEN get_param(param);
  FETCH get_param INTO db_gets;
  CLOSE get_param;
  write_it(param, db_gets, 2);
  param := 'physical reads';
  OPEN get_param(param);
  FETCH get_param INTO p_reads;
  CLOSE get_param;
  write_it(param, p_reads, 3);
  h_ratio := ((con_gets+db_gets)-p_reads)/(con_gets+db_gets);
  write_it('Hit Ratio', h_ratio, 4);
END;
Once the above procedure is compiled on the
server, the previous SQL script becomes:
EXECUTE hitratio;
SELECT * FROM dba_temp;
Now we have reduced the round trips to two;
and since the stored procedure is on the server, we may not even have
to parse the statement. All of the actions between the BEGIN and END
are treated as a single transaction. If we make the call to dba_temp a
call to a view, we can be sure that any processing is done for that
table on the server. There is also a method that uses the UTL_FILE
package to output directly to an operating system file, but it would
result in more net round trips in this situation.
More complex processing using variables could
be done using the DBMS_SQL package and dynamic SQL.
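A minimal sketch of that approach, using the documented DBMS_SQL calls; the statement text and bind values are purely illustrative, building a dynamic insert into the dba_temp table used above:

```sql
-- Hypothetical dynamic-SQL fragment using DBMS_SQL
DECLARE
  cur     INTEGER;
  rows_in INTEGER;
BEGIN
  cur := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(cur,
    'INSERT INTO dba_temp VALUES (:n, :v, :r)', DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(cur, ':n', 'free memory');   -- illustrative name
  DBMS_SQL.BIND_VARIABLE(cur, ':v', 1024);            -- illustrative value
  DBMS_SQL.BIND_VARIABLE(cur, ':r', 5);               -- report order
  rows_in := DBMS_SQL.EXECUTE(cur);
  DBMS_SQL.CLOSE_CURSOR(cur);
END;
/
```

Because the block executes entirely on the server, the dynamic statement costs no extra network round trips beyond the call itself.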
This is an excerpt from Mike Ault, author of "Oracle 10g Grid and Real Application Clusters".