Distributed Database Management

Oracle Tips by Burleson
 

This chapter covers Oracle distributed database setup, including Oracle Net, the multithreaded server, and Real Application Clusters (RAC). I do not attempt to cover every possible Oracle Net, Net8, or SQL*Net option; that is what the Oracle manuals are for. Rather, I provide a basic setup for a standard server and discuss the tuning of what will be a new environment to most DBAs: the Apache web server.

When I say “distributed database,” I mean either a set of databases connected via a standard network protocol, such as TCP/IP, or a set of instances that connect to a single set of database files and communicate through a high-speed interconnect.

A distributed database allows users who are physically remote from one another to share the same data. It makes it possible to have either a central set of data that the entire company uses or a dispersed set of databases that share information.

Setup of a Distributed Database

The usual interpretation of a distributed database is a nonshared database. In a typical distributed database, two or more remote databases (i.e., on separate servers) are connected via a LAN, a WAN, or the Internet. If the distributed databases are Oracle databases, they communicate via Oracle Net through database links. If non-Oracle databases are included in the distributed database, they are connected via Oracle heterogeneous database services. Figure 14.1 shows a typical “pure” Oracle distributed database.

Figure 14.1 An Oracle distributed database.

In the setup shown in Figure 14.1, the manuf, sales, and price databases each reside on an independent node of a distributed database. By defining database links between the databases in a distributed database, tables and other objects in the remote databases become visible to users in the local databases.

If we assume that the listener.ora and tnsnames.ora files (covered in a later section of this chapter) have been set up so that the aliases for the databases are MANUF, SALES, and PRICE, then it is a simple matter to set up database links between the separate databases. For example, to set up a link from the MANUF database to the PRICE database:

sqlplus john/hungry
Connected.
SQL> CREATE DATABASE LINK price
  2  CONNECT TO devon_user IDENTIFIED BY low_privs
  3  USING 'price';

Database link created.
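The USING 'price' clause refers to the PRICE alias in tnsnames.ora. As a rough sketch only (the host name here is a placeholder), such an entry might look like:

# tnsnames.ora on the MANUF node -- host name is a placeholder
PRICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = priceserver)(PORT = 1521))
    (CONNECT_DATA = (SID = price)))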

Now we have a path directly into the tables and objects that devon_user has access to. For example, say we want to check on the price of fish and chips. We know that devon_user, for some arcane reason, tracks his lunches in a table called LUNCH, and we also know he always has fish and chips. We can either use the SELECT … FROM table@db_link format for our SELECT or hide the link behind a synonym. If the selection is one we will do frequently, I suggest creating a synonym:

CREATE SYNONYM lunch FOR lunch@price;

Once we have a synonym pointing at the remote table, we can issue SELECT statements as if the table were local:

SQL> SELECT cost, food_type
  2  FROM lunch
  3  WHERE food_type='FISH AND CHIPS'
  4  AND date_of_price=TRUNC(sysdate);

      COST FOOD_TYPE
---------- ------------------------------------
       1.5 FISH AND CHIPS

If you need to access several remote databases at one time, you may need to increase the initialization parameter OPEN_LINKS from its default value of 4.
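Because OPEN_LINKS is a static parameter, it is set in the parameter file and takes effect at the next instance restart. A minimal sketch (the value 10 is only an illustration; size it to the number of links a session actually opens concurrently):

# init<SID>.ora
OPEN_LINKS = 10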

If you will be doing complex INSERT, UPDATE, and DELETE operations across the distributed database, you will need to set several initialization parameters that help control distributed transactions (a sample parameter-file sketch follows this list). These parameters are:

COMMIT_POINT_STRENGTH. Determines the commit point site for a distributed transaction. In a two-phase commit, the commit point site must be able to commit or the entire distributed transaction will roll back. The database with the highest COMMIT_POINT_STRENGTH value becomes the commit point site.

DB_DOMAIN. This sets the location of the database within the processing network. For example, our database name (DB_NAME) may be SALES while our database domain name (DB_DOMAIN) is ATL.GA.US.WIDGET. If a database will ever become part of a distributed structure, set this accordingly.

DB_NAME. Sets up the database name. Usually this is the same as the instance name.

DISTRIBUTED_TRANSACTIONS. Determines the maximum number of concurrent distributed transactions; it should not be set to more than 75 percent of the TRANSACTIONS parameter. If set to zero, distributed transactions are prohibited in this database and the RECO process is not started.

GLOBAL_NAMES. Set to either TRUE or FALSE. If set to TRUE, a database link must use the same name as the database it connects to; if set to FALSE, any name can be used for the database link. If multiple links will be made to the same database (for example, a PRICE link to the PRICES schema and a GRAPHICS link to the GRAPHICS schema), I suggest setting it to FALSE; if only one link will be made to a database, I suggest setting it to TRUE.

OPEN_LINKS. Specifies the maximum number of concurrent open connections to remote databases in one session. These connections include database links, as well as external procedures and cartridges, each of which uses a separate process.

REMOTE_DEPENDENCIES_MODE. Specifies how Oracle should handle dependencies upon remote PL/SQL stored procedures.

Values:

TIMESTAMP (default). The client running the procedure compares the timestamp recorded on the server-side procedure with the current timestamp of the local procedure and executes the procedure only if the timestamps match.

SIGNATURE. Oracle allows the procedure to execute as long as the signatures are considered safe. This setting allows client PL/SQL applications to be run without recompilation.
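As mentioned above, here is a sketch of how these parameters might look together in a parameter file; every value shown is illustrative only and must be sized for your own environment:

# init<SID>.ora -- illustrative values only
DB_NAME = sales
DB_DOMAIN = atl.ga.us.widget
GLOBAL_NAMES = TRUE
OPEN_LINKS = 10
DISTRIBUTED_TRANSACTIONS = 20       # keep at or below 75 percent of TRANSACTIONS
COMMIT_POINT_STRENGTH = 100         # highest value becomes the commit point site
REMOTE_DEPENDENCIES_MODE = TIMESTAMP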

Once the parameters are correctly set, Oracle will use the two-phase commit mechanism to control transaction commit and rollback across the distributed database system.   
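For example, in the following sketch both changes commit or roll back as a unit; local_menu is a hypothetical local table, and lunch@price is the remote table from the earlier example:

SQL> UPDATE local_menu                   -- hypothetical local table
  2  SET list_price = list_price * 1.1
  3  WHERE food_type = 'FISH AND CHIPS';

SQL> UPDATE lunch@price                  -- remote table reached via the database link
  2  SET cost = cost * 1.1
  3  WHERE food_type = 'FISH AND CHIPS';

SQL> COMMIT;                             -- Oracle coordinates the two-phase commit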

Classical Client Server with Oracle

In a classical client/server setup, multiple clients talk to a single server via a network protocol such as TCP/IP. In Oracle, the client contacts the port configured for the Net8 or Oracle Net listener process (usually port 1521). The listener process picks up the connection request and passes it either to a dedicated server process or to a dispatcher process, which then passes it along to a shared server (if the multithreaded server option is enabled). The dedicated or shared server process then connects to the database.
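As a rough illustration of the server side of that handshake (host name, SID, and ORACLE_HOME are placeholders; the full listener.ora file is covered in a later section), a minimal listener definition might look like:

# listener.ora -- illustrative entries only
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver1)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = price)
      (ORACLE_HOME = /u01/app/oracle/product/8.1.7)))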

Inter- and Intranet Connections

In recent years, the Internet and company intranets have virtually taken over the domain of the Oracle Net8 and Oracle Net processes. Actually, I should say they have added to the process stack. Let me explain. In an Internet or intranet connection, an application server (usually Apache, since that is what is shipped with Oracle) is configured to listen for HTTP protocol connection requests (usually on port 8088). When one is received, it is passed on as a connection to the database server (identified in a DAD configuration file). Once the database server receives the connection request, it is processed identically to a client/server connection request.
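The DAD (Database Access Descriptor) itself is just a short configuration block that maps a URL path to a set of database credentials. The sketch below uses mod_plsql-style directive names; the exact file name (dads.conf versus the older wdbsvr.app) and directives vary by Oracle HTTP Server release, so treat it as illustrative rather than definitive:

# dads.conf -- illustrative only; directive names vary by release
<Location /pls/sales>
  SetHandler                 pls_handler
  PlsqlDatabaseUsername      web_user
  PlsqlDatabasePassword      web_pass
  PlsqlDatabaseConnectString dbserver1:1521:sales
</Location>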

The addition of the application server adds another layer of complexity to the management of the Oracle connection process. If not properly set up and tuned, the Apache server can be a performance nightmare for the Oracle DBA. Usually the Oracle DBA will be saddled with the care and feeding of the Apache server, since it is supplied with the Oracle software.

In dealing with several Apache server installations, I have come across a couple of tips that I want to share regarding Apache server performance. The techniques appear valid on both HP and Sun installs, but I have not seen the problems discussed on Windows or Linux platforms. In general, the problems deal with the use of the KeepAlive variables and how the clients, servers, and application servers signal that a process is finished.

Essentially, when the server closes a TCP connection, it sends a packet with the FIN bit set to the client, which responds with a packet with the ACK bit set. The client then sends its own FIN packet to the server, which responds with an ACK, and the connection is closed. The state the server-side connection is in between receiving the ACK from the client and receiving the client's FIN is known as FIN_WAIT_2. At least on HP and Solaris, this final FIN from the client does not seem to be sent when the Apache server is used. I discovered this first on an HP client running Oracle Financials.

The server was an HP 9000 L2000 with 4 GB of memory and 4 processors, running Oracle 8.1.6.1 and Oracle Applications 11.5.2 on HP-UX 11.00. When I arrived, they could run 10 processes against the Oracle database; memory, and then CPU, would peak at roughly 4-hour intervals, at which point they had to bounce the web server/forms server and do some zombie slaying to get memory back.

After a couple of false starts, I did some web research and found that Apache clients may have memory leaks (of course, they blame it on the OS libraries), and JDBC may as well. The failure of the FIN_WAIT_2 connections to close, combined with the repeated connection attempts from the clients, filled process slots and ate memory until a reboot was required.

I edited the httpd.conf file and adjusted MaxRequestsPerChild (renamed MaxConnectionsPerChild in later Apache releases) from its default of 0, which means unlimited, to 2500 (some people have adjusted this down as far as 300). Each time a process connected, the memory leak caused the child process to grow until finally all memory disappeared (gee, sounds like real kids...). This solved the memory leak problem, but what about the FIN_WAIT_2 processes?

The killing of FIN_WAIT_2 connections depends on either the KeepAlive option in httpd.conf (which defaults to ON) or the TCP parameter tcp_keepalive_interval, which usually defaults to 7200000 milliseconds (2 hours). I set KeepAlive to OFF and the TCP setting to 45000 milliseconds. The problem was that the quasi-zombies were hanging around for 2 hours holding resources, and the rate at which they were being killed was slower than the buildup rate. By adjusting the timing of the kill process, we balanced the create/destroy rates so that they did not build up past a certain point.
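For reference, the relevant httpd.conf directives ended up looking like this; these are the values that worked in this particular environment, not universal recommendations:

# httpd.conf
KeepAlive           Off
MaxRequestsPerChild 2500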

On HP, you can reset the TCP parameters to help control the FIN_WAIT_2 processes by running the following commands:

ndd -set /dev/tcp tcp_fin_wait_2_timeout 60000
ndd -set /dev/tcp tcp_ip_abort_interval 2000

Then place these entries in /etc/rc.config.d/nddconf so they are reset with each reboot.
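A sketch of what those nddconf entries might look like (the index numbers must be unique within the file, so check any existing entries before adding these):

# /etc/rc.config.d/nddconf -- applied at boot
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_fin_wait_2_timeout
NDD_VALUE[0]=60000

TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_ip_abort_interval
NDD_VALUE[1]=2000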

For Solaris, as root run the following commands to determine and set the interval:

ndd /dev/tcp tcp_keepalive_interval

should tell you what it is, and

ndd -set /dev/tcp tcp_keepalive_interval NNN

will set it.

Times are in milliseconds, and the default is 2 hours (7200000ms). You'll probably also want to add the reset command to an rc init script.
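A minimal sketch of such an init script follows; the file name, run-level link, and the 45000 value are assumptions to adjust for your own environment:

#!/bin/sh
# /etc/init.d/tcp_keepalive -- lower the TCP keepalive interval at boot
# Link as /etc/rc2.d/S99tcp_keepalive so it runs at the end of run level 2.
case "$1" in
start)
        /usr/sbin/ndd -set /dev/tcp tcp_keepalive_interval 45000
        ;;
*)
        echo "Usage: $0 start"
        ;;
esac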

Listing 14.1 is a little script for HP-UX that you may find useful; it tracks down and kills excessive FIN_WAIT_2 connections. You can probably modify it for other UNIX flavors as needed.

LISTING 14.1 Script to detect hung TCP connections and kill them:

#!/bin/ksh
# Hewlett-Packard Corporation
# This script is UNSUPPORTED. Use at own risk.
# @(#)$Revision: 1.3 $ $Author: scotty $ $Date: 98/08/25 17:55:01 $
#
# This script will query the system for any TCP connections that
# are in the FIN_WAIT_2 state and forcibly disconnect them. It
# uses netstat(1) to find the FIN_WAIT_2 connections and calls
# ndd with the correct hexadecimal representation of the connection
# to close the connection.
#
 
#
# Temporary files used to compare netstat output
#
MYSCRIPTNAME=${0##*/}
TMPFILE1=/var/tmp/$MYSCRIPTNAME.1
TMPFILE2=/var/tmp/$MYSCRIPTNAME.2
 
#
# Create a log file to keep track of connection that were removed
#
LOGFILE=/var/adm/$MYSCRIPTNAME.log
 
function getFinWait2 {
  # Convert each FIN_WAIT_2 connection's local and remote address/port into
  # the 24-digit hex string that ndd's tcp_discon_by_addr expects.
  /usr/bin/printf "%.2x%.2x%.2x%.2x%.4x%.2x%.2x%.2x%.2x%.4x\n" \
    $(/usr/bin/netstat -an -f inet | /usr/bin/grep FIN_WAIT_2 |
      /usr/bin/awk '{print $4,$5}' | /usr/bin/sed 's/\./ /g') > $TMPFILE1
}

function compareFinWait2 {

  FIRST_TIME=1

  # Save the previous snapshot, then take a fresh one
  cp $TMPFILE1 $TMPFILE2
  getFinWait2

  # Any connection present in both snapshots has been stuck in FIN_WAIT_2
  # since the last run, so log it and forcibly disconnect it
  comm -12 $TMPFILE1 $TMPFILE2 | while read CONN
  do
    if [[ $CONN != "000000000000000000000000" ]]
    then
      if [ $FIRST_TIME -eq 1 ]
      then
        print >> $LOGFILE
        date >> $LOGFILE
        FIRST_TIME=0
      fi

      print "/usr/bin/ndd -set /dev/tcp tcp_discon_by_addr \"$CONN\"" >> $LOGFILE
      /usr/bin/ndd -set /dev/tcp tcp_discon_by_addr $CONN
    fi
  done

  getFinWait2
}

#
# Main
#
 

touch $TMPFILE1
touch $TMPFILE2  

compareFinWait2

 

See Code Depot for Full Scripts


This is an excerpt from "Oracle 10g Grid and Real Application Clusters" by bestselling author Mike Ault.
