  Oracle Tips by Burleson

Testing

The initial testing was performed using two desktop and two laptop computers configured with Benchmark Factory agents, centrally coordinated by one of the laptops, which also ran the Benchmark Factory main program. The desktop test systems were located in Alpharetta, Georgia, remote from the main Oracle10g RAC cluster in Houston, Texas, and all communication was via the web. The specifications for the test bed machines are shown in Table 4.4.

VIRTUAL     STATION
STATION ID  ID       NAME   MEMORY (KB)  PROCESSOR   # OF CPUs  OS                      OS VERSION  OS BUILD
1           3        Test3  654828       Pentium(R)  2          Microsoft Windows 2000  5.0         2195
2           4        Test4  523760       Pentium(R)  1          Microsoft Windows 2000  5.0         2195
3           1        Test1  1046000      Pentium(R)  1          Microsoft Windows XP    5.1         2600
4           2        Test2  556528       Pentium(R)  1          Microsoft Windows 2000  5.0         2195

Table 4.4: Test Bed Configuration

For mobile testing, the Test1 server was utilized.

Load balancing across the cluster was handled by the Oracle10g built-in load balancing algorithms, and connections were made through a standard tnsnames.ora entry via Oracle Net. This configuration is shown below.

SSD1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15212))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ssd)
      (INSTANCE_NAME = ssd1)
    )
  )
 
SSD2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15213))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ssd)
      (INSTANCE_NAME = ssd2)
    )
  )
 
LISTENERS_SSD =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15212))
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15213))
  )

SSD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15212))
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 15213))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ssd)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
 
Notice that the host is set to localhost and the ports are not the standard 1521; this is because port forwarding was used to tunnel through the TMS firewall.
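A tunnel of this shape is typically built with standard ssh local port forwarding; the sketch below is illustrative only, and the hostnames (rac-node1, rac-node2, gateway.example.com) are placeholders, not the actual TMS hosts:

```shell
# Forward local ports 15212/15213 to the listeners (port 1521) on the
# two RAC nodes, tunneling through a firewall gateway host.
# Hostnames below are hypothetical placeholders.
ssh -N \
    -L 15212:rac-node1.example.com:1521 \
    -L 15213:rac-node2.example.com:1521 \
    gateway.example.com
```

With the tunnel up, Oracle Net connects to localhost:15212 and localhost:15213 exactly as the tnsnames.ora entries above describe.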

The kernel was reconfigured with the suggested Oracle memory and network configuration via a startup file, which is shown below.

#!/bin/bash
# kconfig shell script
#chkconfig: 345 80 80
#description: Oracle kernel configuration script
# /etc/init.d/kconfig
# Description: Performs kernel config for Oracle
# See how we were called.
case "$1" in
  start)
    echo "300 32000 100 128"  > /proc/sys/kernel/sem
    echo "2147483648"         > /proc/sys/kernel/shmmax
    echo "2097152"            > /proc/sys/kernel/shmall
    echo "4096"               > /proc/sys/kernel/shmmni
    echo "262144"             > /proc/sys/net/core/rmem_default
    echo "262144"             > /proc/sys/net/core/rmem_max
    echo "262144"             > /proc/sys/net/core/wmem_default
    echo "262144"             > /proc/sys/net/core/wmem_max
    echo "4096 65536 4194304" > /proc/sys/net/ipv4/tcp_wmem
    echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
    echo "1024 65000"         > /proc/sys/net/ipv4/ip_local_port_range
    load_ocfs
    mount -t ocfs /dev/sda1 /oracle
    mount -t ocfs /dev/sdb1 /oracle2
    ;;
  stop)
    ;;
  *)
    echo "Usage: kconfig {start|stop}"
    exit 1
    ;;
esac
exit 0
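The shared-memory values in the script are related: shmmax is in bytes, while shmall is counted in pages (4096 bytes on x86), and shmall must cover at least shmmax divided by the page size. A quick arithmetic check of the configured values:

```shell
# Sanity-check the kernel shared-memory settings from the script above.
PAGE_SIZE=4096
SHMMAX=2147483648   # 2 GB, bytes, from the startup script
SHMALL=2097152      # pages, from the startup script

pages_needed=$((SHMMAX / PAGE_SIZE))
echo "pages needed for one shmmax-sized segment: $pages_needed"
echo "shmall configured (pages): $SHMALL"
if [ "$SHMALL" -ge "$pages_needed" ]; then
  echo "shmall covers shmmax"
fi
```

Here shmall allows 2097152 pages (8 GB total shared memory), comfortably above the 524288 pages a single 2 GB segment requires.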

All other kernel settings were left at their default values. The next listing shows the non-default initialization parameter settings used for the Oracle10g instance during the test.

init.ora Parameters  DB/Inst: SSD/ssd2  Snaps: 1-2
                                                                End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
__db_cache_size               805306368
__java_pool_size              4194304
__large_pool_size             4194304
__shared_pool_size            255852544
background_dump_dest          /home/oracle/app/oracle/admin/ssd
cluster_database              TRUE
cluster_database_instances    2
compatible                    10.1.0.2.0
control_files                 /oracle2/oradata/ssd/control01.ct
core_dump_dest                /home/oracle/app/oracle/admin/ssd
db_block_size                 8192
db_domain
db_file_multiblock_read_count 16
db_name                       ssd
dispatchers                   (PROTOCOL=TCP) (SERVICE=ssdXDB)
instance_number               2
job_queue_processes           10
open_cursors                  300
pga_aggregate_target          356515840
processes                     800
remote_listener               LISTENERS_SSD
remote_login_passwordfile     EXCLUSIVE
sga_target                    1073741824
spfile                        /oracle2/oradata/ssd/spfilessd.or
thread                        2
undo_management               AUTO
undo_tablespace               UNDOTBS2
user_dump_dest                /home/oracle/app/oracle/admin/ssd
          -------------------------------------------------------------

The only parameters altered during testing were sga_max_size and sga_target. They were set to 1 GB, 500 MB, and 250 MB for the three phases of the testing, and Oracle's Automatic Shared Memory Management (ASMM) feature was allowed to internally alter memory structures as needed.
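Those three sizes correspond directly to the byte values that appear in parameter listings like the one above; for example, the sga_target value 1073741824 is exactly 1 GB. A quick conversion:

```shell
# Convert the three test SGA sizes (in MB) to bytes.
# 1024 MB = 1073741824 bytes, matching the sga_target shown in the
# parameter listing.
for mb in 1024 500 250; do
  echo "$mb MB = $((mb * 1024 * 1024)) bytes"
done
```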

Many TPC-C tests utilize larger test databases; however, this test configuration had a total of 4 gigabytes of memory, and the test team intended to use only 2-3 gigabytes of it for the Oracle system in order to allow for large numbers of users. In addition, the test team wanted to demonstrate what occurs on both RAID and SSD as more and more of the data is moved from memory to disk. This desire to test the effects of full and partial caching required a small-footprint database: one that could be completely cached in the memory space provided by the cluster, and one that would respond to reductions in that memory by forcing more and more disk activity. A TPC-C schema many times larger than the available memory would have made it impossible to test the effects of full and partial caching of the data.

To test the effects of full and partial caching, the test team re-ran the TPC-C tests with 250 megabyte, 500 megabyte, and 1 gigabyte SGA sizes, using the automatic memory management settings sga_target and sga_max_size of the Oracle 10g server, on both the RAID and SSD arrays.

The TPC-C benchmark uses a basic nine-table schema representing a typical OLTP scenario, with warehouses, sales, and other point-of-sale type tables. The schema tables are shown in Appendix A. The Benchmark Factory software generates both the schema and the data load. Table 4.5 shows the number of rows in each of the schema tables.

TABLE         OCCURRENCES
C_CUSTOMER    480,000
C_DISTRICT    160
C_HISTORY     622,113
C_ITEM        100,000
C_NEW_ORDER   169,556
C_ORDER       633,769
C_ORDER_LINE  6,340,427
C_STOCK       1,600,000
C_WAREHOUSE   16

Table 4.5: Table Row Counts
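These counts are consistent with standard TPC-C scaling rules, under which the fixed-size tables grow linearly with the warehouse count: 10 districts per warehouse, 3,000 customers per district, 100,000 stock rows per warehouse, and a fixed 100,000 items regardless of scale (the history, order, new-order, and order-line tables then grow as transactions run). A quick check against the 16-warehouse values in Table 4.5:

```shell
# Derive the fixed TPC-C table sizes from the warehouse count (W = 16)
# and compare with Table 4.5.
W=16
echo "districts: $((W * 10))"         # 160
echo "customers: $((W * 10 * 3000))"  # 480000
echo "stock:     $((W * 100000))"     # 1600000
echo "items:     100000"              # fixed, independent of W
```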


The above book excerpt is from:

Oracle RAC & Tuning with Solid State Disk

Expert Secrets for High Performance Clustered Grid Computing

ISBN 0-9761573-5-7

Donald K. Burleson & Mike Ault

http://www.rampant-books.com/book_2005_2_rac_ssd_tuning.htm

  
 


Copyright © 1996-2014 by Burleson. All rights reserved.

Oracle® is the registered trademark of Oracle Corporation. 
