Oracle Tips by Burleson

Query Processing

The qgen program produces 22 example DSS (Decision Support System) queries. The queries use aggregation, sub-queries, ORDER BY and GROUP BY clauses to simulate the processing found in a DSS environment. The queries were placed into a single file and run back-to-back in the tests. STATSPACK and custom scripts were used to monitor the database.
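
The queries themselves come from qgen and are not reproduced here. Purely for illustration, a query of the same general shape, run ad hoc from the shell against a hypothetical customers/orders schema, might look like this:

#!/bin/bash
# illustrative only: a DSS-style query with aggregation, a sub-query,
# a GROUP BY and an ORDER BY, against hypothetical tables
sqlplus -s dss_admin/dss_admin <<'EOF'
set timing on
select   c.region,
         count(*)            order_cnt,
         sum(o.order_total)  total_sales
from     customers c,
         orders o
where    c.cust_id = o.cust_id
and      o.order_date >= (select min(order_date) + 30 from orders)
group by c.region
order by total_sales desc;
exit
EOF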

A simple SQL*Plus connection to the database was made over standard TCP/IP. In fact, remote connections from several different locations were used during the testing time frame to monitor and re-run queries as required. A test harness for use with the Linux nohup utility was also needed. The harness consisted of a simple script that calls sqlplus with the pre-built set of queries:

#!/bin/bash
set +x    # make sure shell command tracing is off
/home/oracle/bin/sqlplus dss_admin/dss_admin @queries_scsi.lst

This test harness was called run_ssd_queries.sh or run_scsi_queries.sh and was executed using the nohup command:

nohup ./run_ssd_queries.sh &

Running under nohup generates a nohup.out log file, which can be used to follow the progress of the various queries and also provides a detailed log of the test events.
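
While a run was in flight, its progress could be checked from the growing log. A minimal sketch, assuming the query script enables SQL*Plus timing so that each completed query writes an "Elapsed:" line:

# follow the query output as it is written to the log
tail -f nohup.out

# or count how many queries have reported a completion time so far
grep -c "Elapsed:" nohup.out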

While the SSD runs were completed with single launches of the test harness, the SCSI and ATA runs often required restarts as a result of failures. Unless instructed otherwise, the nohup command appends new runs to the existing nohup.out, so even with the stop/start nature of the SCSI and ATA testing a complete record was still obtained.
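
Where a separate log per attempt is preferred, the append behavior can be overridden by redirecting the output explicitly; the log file name below is only an example:

# give this attempt its own log instead of appending to nohup.out
nohup ./run_scsi_queries.sh > scsi_run2.log 2>&1 &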

Now, the actual results from the tests.

SSD Results

In the SSD test runs the following configurations were tested (the corresponding logging commands are sketched after the list):

  • Base run to load buffers

  • No logging on all tables and no archive log setting

  • No logging on all tables with archive logging

  • Logging and archive logging
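
For reference, the kinds of commands used to move between these logging configurations are sketched below; the table, index and connection details are illustrative, not the actual test schema:

# illustrative only: switch the test tables and indexes to NOLOGGING
sqlplus -s dss_admin/dss_admin <<'EOF'
alter table sales_fact nologging;
alter index sales_fact_pk nologging;
exit
EOF

# toggle archive log mode; the database must be mounted but not open
# (use "alter database noarchivelog" to turn it back off)
sqlplus -s "/ as sysdba" <<'EOF'
shutdown immediate
startup mount
alter database archivelog;
alter database open;
exit
EOF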

Looking at Figure 4.1, notice that the SSD runs showed very consistent times across all of the various configurations once query 8 had completed in the base load run (run 1). This profile recurred after a shutdown and startup of the instance, as can be seen in the graph in Figure 4.1. The entire set of 7 runs required only 3 days to process, and the runs were not executed back-to-back but as time allowed. Figure 4.1 compares the various SSD configurations.

Figure 4.1 also shows that query 1, the poorest performing query, required only just over 600 seconds to complete even in its worst run (run 7). The complete query timings are shown in Table 4.1. Run 6, with archive logging turned on, showed query timings nearly identical to the previous runs until the three queries just before the last one (queries 16, 17 and 18).

SSD (elapsed time per query, in seconds)

query     Run1     Run2     Run3     Run4     Run5     Run6     Run7
1       589.94   501.53   502.47   500.54   501.06   507.31   615.21
2        42.24    30.83    30.76    30.44    30.37    30.55    43.18
3        41.55    34.72    35.54    34.83    35.49    35.35    40.86
4       249.30   139.02   139.30   139.42    42.04   136.11   267.70
5       292.28   161.48   162.43   160.54   161.17   158.40   317.08
6       205.12   107.87   109.20   108.88   108.78   107.37   230.25
7       314.09   161.59   162.83   162.01   162.14   156.74   334.97
7a      263.43   144.45   145.12   145.42   144.85   139.88   297.13
8       340.17   272.98   274.20   273.95   273.32   267.27   379.45
9       137.69   137.40   138.78   137.62   138.40   134.25   154.53
10       35.20    35.46    35.72    35.51    35.49    35.28    36.19
11      157.74   159.13   161.64   161.27   160.43   154.48   160.75
12      224.77   215.71   217.33   214.72   215.66   214.58   225.20
13      113.01   113.16   113.79   113.76   113.93   111.85   114.70
13a     246.45   248.79   250.66   249.36   250.39   238.84   249.17
14       57.98    44.32    42.82    41.95    42.04    42.19    54.56
14a      42.49    40.66    40.91    40.50    40.34    50.65    39.86
15      309.07   306.37   308.89   306.73   306.27   305.89   307.92
16      138.37   138.70   140.02   138.65   139.12   565.86   138.11
17      133.08   133.53   134.41   133.71   133.36   355.08   132.71
18      149.93   150.41   150.93   150.86   150.56   319.14   149.90
19       43.58    44.03    44.44    43.41    43.19    47.45    43.35
Total  4127.48  3322.14  3342.19  3324.08  3228.40  4114.52  4332.78

Table 4.1: Raw data from the SSD Query Runs

After a restart, run 7 showed performance similar to run 1, even with archive logging turned on, and the three queries that performed poorly in run 6, relative to the other SSD runs, returned to “normal” performance levels.

When using the SSD array, the use of logging and archive logging had little or no effect on performance.

Summary of Findings

SSD performance was consistent, and the SSD array performed well throughout the tests. After some initial adjustments to the undo tablespace and temporary tablespace sizes, which were subsequently carried over into the SCSI and ATA testing, all SSD test runs completed without notable incident or issue.
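
The sizing changes were of the ordinary kind; a minimal sketch follows, with purely illustrative file names and sizes, since the actual values are not given in this excerpt:

# illustrative only: enlarge the undo and temporary tablespaces before the runs
sqlplus -s "/ as sysdba" <<'EOF'
alter database datafile '/u02/oradata/dss/undotbs01.dbf' resize 8g;
alter database tempfile '/u02/oradata/dss/temp01.dbf' resize 8g;
exit
EOF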

The initial runs after a restart of the instance showed poorer performance than subsequent runs. This is expected, since the data must first be loaded into the Oracle SGA buffers. Changes to the archive logging status of the database, or to table- and index-level logging, had no measurable effect on SSD performance when the logs were written to the SSD array.

The next section examines SCSI and ATA performance.


The above book excerpt is from:

Oracle Solid State Disk Tuning

High Performance Oracle tuning with RAM disk

ISBN 0-9744486-5-6  

Donald K. Burleson & Mike Ault

http://www.rampant-books.com/book_2005_1_ssd.htm

  
 
