Tuning the Oracle data buffer Pools


Oracle Tips by Burleson
 

The ability of Oracle to support multiple block sizes did not get a lot of fanfare during the publicity rollout of Oracle9i. Rather than being touted as an important tool to reduce disk I/O, the multiple block size feature was buried far down on the list of new features of the Oracle9i database. However, for the Oracle administrator, multiple block sizes are extremely important and exciting.  For the first time, you are able to customize your data buffer sizes according to the specific needs of your database.

The ability to support multiple block sizes within Oracle opens up a whole new world of disk I/O management.  Prior to Oracle9i, your entire database had to use a single block size, and this block size was determined at the time the database was created.

With the introduction of Oracle8i, we received the ability to segregate table and index blocks into three separate data buffers, but all of the buffer caches had to use the same block size.  We had the KEEP pool to store frequently referenced table blocks, the RECYCLE pool to hold blocks from large-table full-table scans, and a DEFAULT pool for miscellaneous object blocks.

With Oracle9i, we can define tablespaces with block sizes of 2K, 4K, 8K, 16K, and 32K, and assign tables and indexes to the block size that best minimizes I/O and wasted space in our data buffers.  When we combine the data buffers for these block sizes with the KEEP, RECYCLE, and DEFAULT pools, we get a total of seven separate and distinct data buffers to segregate our incoming table and index rows.

The Seven Data Buffer Hit Ratios

As we know, the data buffer hit ratio (DBHR) is a common metric used by Oracle tuning experts to measure the propensity of a row to be in the data buffer.  For example, a data buffer hit ratio of 95% means that 95% of row requests were already present in the data buffer, thereby avoiding an expensive disk I/O. As a general rule, as you increase the size of your data buffers, your DBHR will rise and approach 100%.
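
For example, the system-wide DBHR can be computed directly from v$sysstat. The statistic names below are standard; the layout and column alias are only an illustrative sketch, not part of the original script library:

select
   1 - (phy.value / (cur.value + con.value))  "Data Buffer Hit Ratio"
from
   v$sysstat cur,
   v$sysstat con,
   v$sysstat phy
where
   cur.name = 'db block gets'
and
   con.name = 'consistent gets'
and
   phy.name = 'physical reads';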

In Oracle9i, we have a separate data buffer hit ratio for each of the seven data buffer caches.  The goal of the Oracle DBA is to constantly monitor all seven data buffers and adjust the size of each buffer based on its data buffer hit ratio. One exciting feature of Oracle9i is that we can now dynamically change the number of RAM buffers within any of these data buffer caches.  Oracle supports ALTER SYSTEM commands that allow you to change the size of these data buffers while the database remains available. This feature makes it possible for the DBA to maximize performance based upon current statistics by manually de-allocating RAM from one data buffer and allocating it to another buffer cache.
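
To see a separate hit ratio for each pool, the v$buffer_pool_statistics view carries per-pool I/O counters; the query below is only an illustrative sketch:

select
   name,
   block_size,
   100 * (1 - (physical_reads / greatest(db_block_gets + consistent_gets, 1)))  hit_ratio
from
   v$buffer_pool_statistics;

Based on those ratios, RAM can then be shifted between the caches with ALTER SYSTEM while the database stays up. The sizes below are arbitrary examples, and the combined total must still fit within sga_max_size:

alter system set db_recycle_cache_size = 400M;
alter system set db_16k_cache_size = 1700M;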

While the general rule holds true that the more data you can fetch in a single I/O, the better your overall buffer hit ratio, we have to take a closer look at the multiple data buffer phenomenon to get a true picture of what is happening under the covers.

Allocating the data buffer caches

Let’s see firsthand how the multiple data buffers work.  For example, we could define the following buffer cache allocations in our initialization parameters.

db_block_size=32768         -- This is the system-wide
                            -- default block size

db_cache_size=3G            -- This allocates a total of 3 gigabytes
                            -- for all of the 32K data buffers

db_keep_cache_size=1G       -- Here we use 1 gigabyte for the KEEP pool

db_recycle_cache_size=500M  -- Here is 500 meg for the RECYCLE pool
                            -- Hence, the DEFAULT pool is 1,500 meg

-- *****************************************************************
-- The caches below are all additional RAM memory (total=3.1 gig)
-- that are above and beyond the allocation from db_cache_size
-- *****************************************************************

db_2k_cache_size=200M       -- This cache is reserved for random
                            -- block retrieval on tables that
                            -- have small rows.

db_4k_cache_size=500M       -- This 4K buffer will be reserved
                            -- exclusively for tables with a small average
                            -- row length and random access

db_8k_cache_size=800M       -- This is a separate cache for
                            -- segregating I/O for specific tables

db_16k_cache_size=1600M     -- This is a separate cache for
                            -- segregating I/O for specific tables

From this example, what is the total RAM allocated to the data buffer caches? The total RAM required is the sum of all the named buffer caches plus db_cache_size: in this case, 6,100 megabytes, or 6.1 gigabytes.

Remember that db_keep_cache_size and db_recycle_cache_size are subtracted from the db_cache_size.  In the example above, the DEFAULT pool is 1.5 gigabytes, after subtracting the allocation for the KEEP and RECYCLE pools. Also, remember that the total size must be less than the value for sga_max_size.

Also note that you cannot create a buffer cache of the same size as your default db_block_size.  In the example, the db_block_size is 32,768 (32K), so you cannot allocate a db_32k_cache_size.

Here we have defined seven totally separate data buffers. Let’s review the usage for each data buffer, computing the number of data blocks each buffer can hold (Table 9-3).

NAME            SIZE       BLOCK SIZE    BLOCK SPACE
------------    -------    ----------    --------------
KEEP pool       1,000MB    32K            31,250 blocks
RECYCLE pool      500MB    32K            15,625 blocks
DEFAULT pool    1,500MB    32K            46,875 blocks
2K cache          200MB     2K           100,000 blocks
4K cache          500MB     4K           125,000 blocks
8K cache          800MB     8K           100,000 blocks
16K cache       1,600MB    16K           100,000 blocks

Table 9-3: Computing the capacity of each data buffer cache

Our next step is to create tablespaces using each of these block sizes.  Oracle will automatically load a tablespace’s blocks into the data buffer of the appropriate block size.

For example, we talked about creating the db_2k_cache_size exclusively for tables with small row sizes that are always accessed randomly.  Small block sizes are also good for preventing DML locking problems with bitmap indexes. Hence, we could define a 2K tablespace as follows:

create tablespace
   ts_2k
datafile
   '/u01/oradata/mysid/2k_file.dbf'
size
   100M
blocksize
   2k
;

Once defined, Oracle will always load blocks from the ts_2k tablespace into the db_2k_cache_size data buffer.  Now, all we need to do is move all appropriate tables into the new tablespace using the Create Table As Select (CTAS) syntax:

-- First, disable all RI constraints

create table
   new_customer
tablespace
   ts_2k
as select
   *
from
   customer
;

rename customer to old_customer;
rename new_customer to customer;

-- finally, transfer all RI constraints and indexes
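
As a hedged illustration of those two comment lines, the disable and re-enable steps might look like the following; the child table (ORDERS) and constraint name (fk_orders_customer) are hypothetical stand-ins for whatever referential-integrity constraints reference the CUSTOMER table:

-- hypothetical child-table constraint, for illustration only
alter table orders disable constraint fk_orders_customer;

-- ... perform the CTAS and rename steps shown above ...

alter table orders enable constraint fk_orders_customer;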

Now that we see how to create tablespaces with different block sizes, let’s explore some other important considerations.

The KEEP Pool in Oracle

When the KEEP pool was first introduced in Oracle8i, its purpose was to provide a RAM data buffer to fully cache blocks from frequently referenced tables and indexes.  When computing the size of the KEEP pool, we must total the number of bytes for all tables that have been marked to reside in the KEEP pool.  This is because we always want the KEEP pool to fully cache all tables that have been assigned to it.
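
For example, a minimal sketch of that sizing calculation might total the segment bytes for every table assigned to the KEEP pool (indexes marked with dba_indexes.buffer_pool = 'KEEP' could be added to the total in the same way):

select
   sum(s.bytes)/1048576  keep_pool_mb
from
   dba_tables   t,
   dba_segments s
where
   t.buffer_pool  = 'KEEP'
and
   s.owner        = t.owner
and
   s.segment_name = t.table_name;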

In Oracle9i, a table must reside in a tablespace whose block size matches the cache assigned to the table.  Using our previous example, we could not issue the following command unless the customer table resides in a tablespace with a 32K block size.

alter table CUSTOMER storage (buffer_pool KEEP);

Remember, the point of the KEEP pool is to always have a data buffer hit ratio of 100%. Also note that the block size of the KEEP pool is not important, because once loaded, all blocks in the KEEP pool will always remain in RAM. In our example, the KEEP pool uses a 32K block size because the KEEP and RECYCLE pools must use the default block size, and we chose a 32K default so that the RECYCLE pool could improve the performance of large-table full-table scans. Remember, the DEFAULT, KEEP, and RECYCLE designations apply only to the default block size; you cannot create the KEEP and RECYCLE pools with anything other than the default db_block_size.

Large Blocks and Oracle Indexes

When a SQL statement requests the fetch of a result set from Oracle tables, the rows are usually retrieved via an index.

Many Oracle tuning experts recommended that a pre-Oracle9i database be redefined with a large block size. Many people were mystified when a database with a 2K block size was rebuilt with an 8K block size and the entire database ran faster. A common justification for resisting a block size increase was, “This database randomly fetches small rows. I can’t see why moving to a larger block size would improve performance.” So, then, what explains the performance improvement with larger block sizes?

When choosing a block size, many DBAs forget about the index trees and the fact that Oracle indexes are accessed sequentially when performing an index range scan.  An index range scan is commonly seen in nested loop joins, and the vast majority of row access involves indexes.

Because index range scans involve gathering sequential index nodes, placing the indexes in a larger block size reduces disk I/O and improves throughput for the whole database.

So then, why not create our entire Oracle database with large block sizes and forget about multiple block sizes?  The answer is not simple.  In order to fully utilize the RAM memory in the data buffers, you must segregate tables according to their distribution of related data.

  • Small blocks - Tables with small rows that are accessed in a random fashion should be placed in tablespaces with small block sizes. With random access and small block sizes, more of the RAM in the data buffer remains available to hold frequently referenced rows from other tables.

  • Large blocks - Indexes, row-ordered tables, single-table clusters, and tables with frequent full-table scans should reside in tablespaces with large block sizes. This is because a single I/O will fetch many related rows, and subsequent requests for the “next” rows will already be in the data buffer.

The goal here is simple; we want to maximize the amount of available RAM memory for the data buffers by setting the block sizes according to the amount of I/O experienced by the table or index.  Random access of small rows suggests small block sizes, while sequential access of related rows suggests large block sizes.

For example, consider a query that accesses 100 random 80-byte rows from Oracle.  Since the accesses are random, we can assume that no two rows exist on the same block, and that 100 block reads are required to access the result set.

If we have 16K blocks, then we would need 1,600KB (16KB * 100), or about 1.6MB, of space in the db_16k_cache_size data buffer. If we use 2KB blocks, then our 100 I/Os use only 200KB (2KB * 100) in the data buffer.  For this query, we would have saved roughly 1.4MB of RAM to hold other row data.

Maximizing block space usage

Until RAM memory becomes cheap enough that we can cache our whole database, we need to manage the RAM that we allocate to our data buffers. The allocation of tables and indexes according to block sizes is a balancing act.

If we allocate the data blocks too large, then we waste valuable data buffer space holding row data that Oracle will never reference.  If we allocate the data blocks too small, Oracle will have to do more disk I/O to satisfy a request.

Here are some general rules for allocating data block sizes:

  • Segregate large-table full-table scans - Tables that experience large-table full-table scans will benefit from the largest supported block size and should be placed in a tablespace with your largest block size.

  • Set db_recycle_cache_size carefully - If you are not setting db_cache_size to the largest supported block size for your server, you should not use the db_recycle_cache_size parameter.  Instead, create a db_32k_cache_size (or whatever your maximum is), and assign all tables that experience frequent large-table full-table scans to the largest buffer cache in your database.

Remember, the data dictionary cache uses the default block size. You should ensure that the data dictionary (i.e., your SYSTEM tablespace) is always fully cached in a data buffer pool. The block size of the data dictionary is not as important as ensuring that the data buffer associated with the SYSTEM tablespace has enough RAM to fully cache all data dictionary blocks.

  • Indexes want large block sizes - B-tree indexes will always favor the largest supported block size. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans.  Hence, all indexes should reside in tablespaces with a 32KB block size (see the sketch after this list).

  • Average row length - The block size for a table’s tablespace should always be greater than the average row length for the table (dba_tables.avg_row_len).  When the block size is smaller than the average row length, row chaining occurs and excessive disk I/O is incurred.

  • Use large blocks for data sorting - Your TEMP tablespace will benefit from the largest supported block size. This allows disk sorting to happen in large blocks with a minimum of disk I/O.
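
To illustrate the index guideline above, a hedged sketch of moving an existing index into a large-block tablespace follows; the index name (cust_name_idx) and tablespace name (ts_32k) are hypothetical, and the tablespace is assumed to have been created with your largest supported block size:

alter index cust_name_idx rebuild tablespace ts_32k;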

Summary of block size rules

The intent of this section is to give you an idea of the impact of multiple block sizes and multiple RAM caches.  Once you are aware of the salient issues surrounding the use of block sizes, you can make intelligent decisions about the proper assignment of block sizes to your tables and indexes.

However, it is important to note that your tuning changes are never permanent; you can always move tables from one tablespace to another, experimenting with different block sizes.  For example, if you place a table into a 2KB tablespace and its I/O increases, you can simply move the table into a tablespace with a larger block size. Minimizing I/O by adjusting block sizes is a long, iterative process.
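
As a hedged example of such an experiment (the table, tablespace, and index names here are hypothetical), relocating a table is a single command, after which its indexes must be rebuilt because the move leaves them unusable:

alter table customer move tablespace ts_8k;

alter index customer_pk rebuild;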

Oracle has laid the foundation for a self-tuning data cache, and it is only a matter of time before code is created to dynamically change the size of the data buffer caches according to the demands of the database.  However, you still must use your intelligence to place each table and index into a tablespace with the most appropriate block size to reduce disk I/O.

Mapping data blocks to data buffers

Oracle provides a v$bh view that shows the data block ID for every block that resides in the data buffer. The following query counts the number of buffered blocks for each table and index in the buffer cache.

buf_blocks.sql

column object_name      heading "object"     format a40
column number_of_blocks heading "nbr blocks" format 999,999,999,999

select
   o.object_name,
   count(1) number_of_blocks
from
   dba_objects o,
   v$bh        bh
where
   o.data_object_id = bh.objd
and
   o.owner not in ('SYS','SYSTEM')
group by
   o.object_name
order by
   count(1) desc
;

Note:  This script is computationally intensive and may run for more than one hour with large data buffer caches.  Hence, you may want to schedule this query during off-hours.

Here is a sample of the output from this script:

object                                       nbr blocks
--------------------------------------- ----------------
CUSTOMER                                          14,372
ORDER                                              6,373
ITEM                                                 825
BOM                                                    6

This information can be quite useful for determining which tables are experiencing the most I/O activity.


This is an excerpt from "Oracle High-Performance Tuning with STATSPACK" by Oracle Press.

