Partitioning Data for RAC

Oracle Tips by Burleson

The real power of Real Application Clusters centers on its ability to allow many instances to share a common database. However, it's important to recognize that Real Application Clusters is not appropriate for all systems. In environments where the separate instances frequently access the same data rows, the overhead of transferring common blocks between instances (called pinging) can negate any performance benefit of RAC.

In short, Real Application Clusters is most effective for applications where each instance can have its own control over the majority of the data that is read by that instance. In a sense, each instance is functioning as a separate data buffer cache for different data blocks.

In fact, many organizations that first implement Real Application Clusters and later discover that they have a highly shared data environment will commonly migrate back to a regular Oracle database, reducing the overhead of the IDLM and often enjoying better performance than they had with RAC.

For example, if we have different classes of end users in our retail organization, we can direct each class of end users to a separate instance, thereby isolating the data to specific instances. We may have an order entry department that is primarily concerned with the order table in the database; hence, all of the order records will wind up being cached in the instance to which those end users are directed. Along the same lines, end users concerned primarily with customer transactions will wind up caching all of the customer data blocks within their own instance's data buffers. To continue the example, an area of the organization dealing primarily with products would access the product table in its own separate database instance, thereby providing a level of isolation between the database as a whole and the component tables that are cached within each instance.
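One common way to steer each class of end users to its own instance is to give each department a connect alias that resolves to a different node. A sketch of such tnsnames.ora entries is shown below; the host names, port, and SIDs here are hypothetical (the SIDs follow the HOME_n naming used in the configuration example later in this chapter):

```
# Order entry users connect through the "orders" alias to instance HOME_1
orders =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (CONNECT_DATA = (SID = HOME_1)))

# Customer service users connect through "customers" to instance HOME_2
customers =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    (CONNECT_DATA = (SID = HOME_2)))
```

Because each alias points at a fixed instance, each department's working set of blocks tends to stay cached in that instance's buffers, which is exactly the isolation described above.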

It is these kinds of systems, where Oracle can ensure a minimum of pinging, that benefit the most from a Real Application Clusters environment. However, there are other reasons that people adopt Real Application Clusters. Real Application Clusters has a very easy method of failover, whereby a failure of one instance (usually a failure in the background processes or the SGA) will not bring the entire system to a standstill. Rather, RAC has facilities that allow users to be instantly reconnected to one of the remaining live instances and immediately begin caching their data records in the new SGA.

In summary, we see that there are several important distinctions between a standard Oracle database and the Real Application Clusters environment:

  • All of the instances share a single database on disk, and raw partitions must be used in order to allow the sharing of data between multiple instances.

  • Each object within the Real Application Clusters environment must be defined with multiple freelist groups. A separate freelist group should be assigned for each instance that will access the table.
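To illustrate the second point, a table intended for access by four instances might be created with storage parameters along these lines. The table, column, and tablespace names are hypothetical; the FREELIST GROUPS and FREELISTS values are illustrative and should be sized for your own instance count and insert concurrency:

```sql
-- Hypothetical order table for a four-instance OPS/RAC database.
-- FREELIST GROUPS matches the number of instances, so that each
-- instance works from its own group of freelists and concurrent
-- inserts from different instances do not contend for (and ping)
-- the same segment header and free blocks.
CREATE TABLE customer_order (
   order_id     NUMBER PRIMARY KEY,
   customer_id  NUMBER NOT NULL,
   order_date   DATE
)
TABLESPACE order_ts
STORAGE (
   INITIAL         100M
   NEXT            10M
   FREELIST GROUPS 4     -- one freelist group per instance
   FREELISTS       8     -- freelists within each group
);
```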

Next, let's look at the core of the Oracle8i Parallel Server system, the Integrated Distributed Lock Manager, or IDLM.

The Integrated Distributed Lock Manager (Oracle7 through Oracle8i)

Many OPS novices are confused by the functions of the Integrated Distributed Lock Manager (IDLM). In Oracle Parallel Server, the limitation of one instance to one database has been lifted, but the Oracle instances' SGAs do not run in the same shared memory region. The purpose of the IDLM is to provide locking between instances and to transfer data blocks between the multiple data buffers, since the buffers are not shared between the instances.

You can tell if a database is running the IDLM by checking for the LCK process. Just as the RECO process indicates that distributed transactions are enabled, the presence of an LCK background process indicates that the IDLM is active. Traditional distributed systems do not have an IDLM because they are not sharing the same database.
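As a quick check from SQL*Plus, you can look for active lock processes in the v$bgprocess view. This query assumes the conventional LCKn process naming, and uses the common idiom of a nonzero paddr to mean the background process is actually running:

```sql
-- List the lock (LCKn) background processes that are running.
-- If this returns no rows, the IDLM is not active on this instance.
SELECT name
  FROM v$bgprocess
 WHERE name LIKE 'LCK%'
   AND paddr <> '00';
```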

Because you may have many Oracle instances accessing a common database, Oracle has had to come up with a method whereby blocks can be shared between the database instances. In a standard Oracle system, Oracle will always check the data block buffers before doing any I/O against the disk, and in an OPS environment Oracle will first check to see if any of the instances have the desired record within the data buffer.

If OPS discovers that another instance holds a data block that is requested by a second instance, the Integrated Distributed Lock Manager will ping the other server and transfer the data block from the memory of one instance to the other, as shown in Figure 13-3.

Figure 13-3: Pinging a block within an OPS database

The overhead of the IDLM can often take as many resources as a single database instance, and in databases where there is a high degree of sharing of data blocks between instances, the IDLM pinging can more than outweigh any performance benefit that would be gained by using Oracle Parallel Server.
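One way to gauge how much pinging an instance is experiencing is to query the v$ping view that OPS provides. The query below is a sketch; the xnc column (the count of times a block has been pinged) is the key indicator, and exact column availability varies by release:

```sql
-- Show the most heavily pinged blocks in this instance's cache.
-- High xnc counts against the same objects from multiple instances
-- suggest the data is not well partitioned between the instances.
SELECT name, kind, file#, block#, xnc
  FROM v$ping
 WHERE xnc > 0
 ORDER BY xnc DESC;
```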

Before we look at tuning and monitoring OPS, let's take a close look at the proper table storage parameters that are required for OPS.

Configuring the IDLM within the Oracle8i OPS Environment

The configuration of the IDLM and the minimization of pinging are the most important concepts that you need when tuning an OPS database. Just as the Oracle database has initialization parameters, the IDLM also has a configuration file that contains parameters governing its behavior. This is an example of a configuration file in the OPS system:


L 13-1

node_list = "19, 23, 34, 66"
listener_node_list = "19, 23, 34, 66"
listener_name_list="L_1, L_2, L_3, L_4" 
oracle_sid_list="HOME_1, HOME_2, HOME_3, HOME_4"
oracle_home = "/oracle/HOME"

1:oraclesid = "HOME_01"
2:oraclesid = "HOME_02"
3:oraclesid = "HOME_03"
4:oraclesid = "HOME_04"

1:lsnr_name = "L_1"
2:lsnr_name = "L_2"
3:lsnr_name = "L_3"
4:lsnr_name = "L_4"

1:init_ora = "/oracle/HOME/dbs/initHOME_1.ora"
2:init_ora = "/oracle/HOME/dbs/initHOME_2.ora"
3:init_ora = "/oracle/HOME/dbs/initHOME_3.ora"
4:init_ora = "/oracle/HOME/dbs/initHOME_4.ora"

svrmgr_path = "/oracle/HOME/bin/svrmgrl"

# Lock Manager information                                            
lkmgr_env  = "LKPORT=1566; LKMGR_LOG=/oracle/HOME/ops/dlmlog"
lkmgr_args = "-f n -l 237073 -r 236381 -p 800"

The top section of the OPS configuration file is quite straightforward, defining the Oracle databases and all the listeners that will be sharing the database. The important information begins at the start of the Lock Manager parameters. As we discussed earlier in this chapter, the IDLM's primary job is to share data buffer blocks between the multiple instances. In order to control the transfer of these data blocks between the separate regions in SGA memory, the IDLM must be able to lock, transfer, and release data blocks in an efficient manner. Again, minimizing pinging is one of the most important jobs of the person tuning the OPS system.

Let's take a look in detail at these lock manager parameters. The lkmgr_args parameter is the one that has the most impact upon the Oracle database. Here we see several flags:

  • -f (fault-tolerant mode) This tells the OPS system whether the database will be run in fault-tolerant mode. In fault-tolerant mode, the database will continue to run even if a single instance crashes, and automatic failover will occur for any client connections that were originally directed to the failed instance.

  • -l (number of locks) This is the total number of lock buckets to be reserved by the IDLM in order to hold block addresses.

  • -r (number of resources) This is the number of resources that will be used by the IDLM.

  • -p (number of processes) This sets the number of processes that can be allocated by the IDLM when performing pinging operations.

In summary, these lock manager arguments determine the overall configuration for the IDLM and, most importantly, the overall performance of the OPS systems.

In addition to the standard initialization files in init.ora (prior to Oracle), there is an ancillary file when using Oracle Parallel Server. This ancillary file is called ocommon.ora, and it contains all of the global parameters that are required by the IDLM.


L 13-2


max_commit_propagation_delay = 0

# parameter for dba locking:
_enable_dba_locking     = true
gc_db_locks             = 0
gc_freelist_groups      = 0
gc_releasable_locks     = 60000
gc_rollback_locks       = 0
gc_rollback_segments    = 256
gc_save_rollback_locks  = 0
gc_segments             = 0
gc_tablespaces          = 0
gc_lck_procs            = 6         

remote_login_passwordfile = exclusive

Next, let's look at how the standard storage parameters for tables and indexes are altered when we run an OPS database.

This is an excerpt from "Oracle High-Performance Tuning with STATSPACK", published by Oracle Press.

Copyright © 1996 -  2014 by Burleson. All rights reserved.

Oracle® is the registered trademark of Oracle Corporation. 
