
PeopleSoft 8 Batch Performance On Oracle Database

Contains:
PeopleSoft Batch Performance Tips
Database Tuning Tips
SQL Query Tuning Tips
Use of Database Features
Capturing Traces

Prepared by: Jayagopal Theranikal



9/26/2002

Comments on this document can be submitted to redpaper@peoplesoft.com. We encourage you to provide feedback on this Red Paper and will ensure that it is updated based on the feedback received. When you send information to PeopleSoft, you grant PeopleSoft a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. This material has not been submitted to any formal PeopleSoft test and is published AS IS. It has not been the subject of rigorous review. PeopleSoft assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends upon the customer's ability to evaluate and integrate them into the customer's operational environment.

Table of Contents

Chapter 1 - Introduction
    Structure of this Red Paper
    Related Materials

Chapter 2 - PeopleSoft Batch Performance Tips
    Table and Index Statistics
        Gather Statistics
        Statistics at Runtime for Temporary Tables
    Dedicated Temporary Tables
        What Are Dedicated Temporary Tables?
        Sizing the Dedicated Temporary Tables
        Create Them as Oracle Global Temporary Tables (GTT) -- Not Advisable for Now
    Tablespace Selection
        Dictionary-Managed Tablespaces
        Locally Managed Tablespaces
        Temporary Tablespaces
    Index Validation
        Index Maintenance Tips
        Function-Based Indexes
        Key Compression
    Stored Outlines
        What Are Stored Outlines?
        When to Use Outlines
        Using Outlines to Swap Execution Plans
    Table/Index Partitioning
        What Is Partitioning?
        Partitioning Methods
        Partitioned Indexes
        Advantages of Partitioning
    Rollback Segments for Batch and Online
        Online
        Batch
    Parses vs. Executes
        Use of Bind Variables
    Histograms
        What Are Histograms?
        Use of Histograms for PeopleSoft Applications
        Creating Histograms
        Choosing the Number of Buckets for a Histogram
        Viewing Histograms
        Operational Guidelines for Maintaining Histograms in Oracle
        FAQ on Histograms
    Batch Server Selection
        Scenario 1: Process Scheduler and Application Server on one Box
        Scenario 2: Process Scheduler and Database Server on one Box
        What Is the Recommended Scenario?

Chapter 3 - Capturing Traces
    Application Engine Trace
    Online Trace
    Oracle Trace
        Trace at Instance Level
        Trace at Session Level
        Trace for a Different Session
    TKPROF
    STATSPACK
        Installing and Using STATSPACK

Chapter 4 - Database Tuning and INIT.ORA Parameters
    Recommendations
        Block Size
        Shared Pool Area
        Data Dictionary Hit Ratio
        Buffer Busy Waits
        LRU Latch
        Log Buffer
        Tablespace I/O
        Full Table Scans
        Checkpoints
        Dynamic Allocation of Extents
        PCTFREE/PCTUSED
        Rebuilding Indexes
        Sorting

Appendix A - Special Notices

Appendix B - Validation and Feedback
    Customer Validation
    Field Validation

Appendix C - References

Appendix D - Revision History
    Authors
    Reviewers
    Revision History


Chapter 1 - Introduction
This Red Paper is a practical guide for technical users, database administrators, and programmers who implement, maintain, or develop applications for a PeopleSoft system. In it, we discuss guidelines for improving the performance of PeopleSoft 8 batch processes in the Oracle8i environment, with additional notes for the Oracle9i environment where relevant. Much of the information contained in this document originated within the PeopleSoft Benchmarks and Global Support Center and is therefore based on real-life problems encountered in the field. The issues that appear in this document are the problems that have proved to be the most common or troublesome.

STRUCTURE OF THIS RED PAPER


This Red Paper provides guidance to get the best performance of PeopleSoft batch processes in the Oracle database environment. Keep in mind that PeopleSoft updates this document as needed so that it reflects the most current feedback we receive from the field. Therefore, the structure, headings, content, and length of this document are likely to vary with each posted version. To see if the document has been updated since you last downloaded it, compare the date of your version to the date of the version posted on Customer Connection.

RELATED MATERIALS
This paper is not a general introduction to environment tuning; we assume that our readers are experienced IT professionals with a good understanding of PeopleSoft's Internet architecture and the Oracle database. To take full advantage of the information covered in this document, we recommend a basic understanding of system administration, basic Internet architecture, relational database concepts/SQL, and how to use PeopleSoft applications. This document is not intended to replace the documentation delivered with the PeopleTools 8 or 8.4 PeopleBooks. We recommend that before you read this document, you read the PeopleSoft application-related information in PeopleBooks to ensure that you have a well-rounded understanding of PeopleSoft batch process technology. Note: Much of the information in this document eventually gets incorporated into subsequent versions of PeopleBooks. Fundamental performance-tuning concepts are covered in the Oracle Tuning chapter of the PeopleSoft Installation Guide. Additionally, we recommend that you read the Oracle8i database administration guide.

Copyright PeopleSoft Corporation 2001. All rights reserved.


Chapter 2 - PeopleSoft Batch Performance Tips

TABLE AND INDEX STATISTICS


The performance of a query under Oracle's CBO (Cost-Based Optimizer) depends on appropriate table and index statistics. Keeping the statistics up to date is crucial for optimum performance. Maintain a set of scripts to update the statistics, and run them weekly, monthly, or quarterly depending on data growth.

Gather Statistics
Oracle8i introduced a new package, DBMS_STATS, to gather statistics. The DBMS_STATS package can generate statistics in parallel by specifying a degree of parallelism, which significantly reduces the time needed to refresh object statistics. Create SQL scripts to gather table-level or schema-level statistics and run them periodically.

Sample DBMS_STATS Command:


SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_CUSTOMER', PARTNAME => NULL, ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME => 'SYSADM', ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

SQL> EXECUTE DBMS_STATS.GATHER_DATABASE_STATS (ESTIMATE_PERCENT => 20, DEGREE => 5, CASCADE => TRUE);

With CASCADE => TRUE, the associated indexes are also analyzed; the default setting for CASCADE is FALSE. Prefer DBMS_STATS over the ANALYZE command for faster table statistics.

Note: Specifying DEGREE only allows table statistics (partitioned or non-partitioned) to be gathered in parallel. Index statistics cannot make use of this flag and do not run in parallel.


Statistics at Runtime for Temporary Tables


PeopleSoft batch processes use shared temporary tables or dedicated temporary tables. These tables hold few or no rows at the beginning of a process and again few or no rows at the end: they are populated during the process and deleted from or truncated at the end or beginning of the process. Keeping statistics up to date for these tables is therefore challenging. Beginning with PeopleSoft 8, if a process is written in AE (Application Engine), the %UpdateStats meta-SQL can be used in the program after the rows are populated. This ensures the statistics are updated before rows are selected from that table. Note: A commit is required prior to executing this statement, so make sure the preceding step commits. If not committed as specified, the statement is skipped by Application Engine and not executed.

Example
Command in a SQL step of an Application Engine program: %UpdateStats(INTFC_BI_HTMP). This meta-SQL issues the command "ANALYZE TABLE PS_INTFC_BI_HTMP ESTIMATE STATISTICS" to the database at runtime. Note: PeopleSoft stores the default syntax for the ANALYZE command in the table PSDDLMODEL. Use the supplied script (DDLORA.DMS) to change the default setting or to add a required SAMPLE ROWS/PERCENT to the ESTIMATE clause. Make sure temporary-table statistics are handled as shown above. If you find a temporary table whose statistics were not updated at runtime, plan to update its statistics manually.

Turn off %UpdateStats


Having update statistics run at runtime incurs some overhead. In fact, it is not necessary to gather statistics for every run. If the volumes are similar from run to run, the statistics can be maintained for the temporary tables instead of analyzing the tables on each run:

1. Run the Application Engine program with a representative volume.
2. Turn off the %UpdateStats command for subsequent runs, so they do not capture the statistics again.
3. If required, these statistics can be exported for future use with DBMS_STATS.EXPORT_TABLE_STATS.
4. Be sure to remove the temporary tables from the list of tables that get analyzed weekly or monthly; make the necessary changes to your script to handle this.
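The export step above can be sketched as follows. This is a hedged example: the statistics-table name PS_STATS_BACKUP is illustrative, and PS_INTFC_BI_HTMP is reused from the earlier example.

```sql
-- Create a statistics table once, then export the gathered stats for reuse.
SQL> EXECUTE DBMS_STATS.CREATE_STAT_TABLE (OWNNAME => 'SYSADM', STATTAB => 'PS_STATS_BACKUP');
SQL> EXECUTE DBMS_STATS.EXPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_STATS_BACKUP');

-- Later (for example, after GATHER_SCHEMA_STATS has overwritten them), restore:
SQL> EXECUTE DBMS_STATS.IMPORT_TABLE_STATS (OWNNAME => 'SYSADM', TABNAME => 'PS_INTFC_BI_HTMP', STATTAB => 'PS_STATS_BACKUP');
```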

Note: If schema-level statistics are gathered using DBMS_STATS.GATHER_SCHEMA_STATS, the previously captured statistics will be overwritten. In such cases, you may wish to turn %UpdateStats back on, or import the statistics for those tables from the previously saved statistics using DBMS_STATS.IMPORT_TABLE_STATS. Update statistics can be turned off in two ways.


1. Program level: Identify the steps that issue %UpdateStats and inactivate them. These steps can be identified with an AE trace. This is a program-specific setting.
2. Installation level: Once the batch-process runs are stabilized and the temporary-table statistics have been captured for all batch processes, an installation-level setting can be applied to turn off %UpdateStats. Set the following parameter in the Process Scheduler configuration (psprcs.cfg):

;------------------------------------------------------------------------
; DbFlags Bitfield
;
; Bit   Flag
; ---   ----
;   1 - Ignore metaSQL to update database statistics (shared with COBOL)
DbFlags=1

DEDICATED TEMPORARY TABLES


What Are Dedicated Temporary Tables?
Batch processes written in Application Engine use Dedicated Temporary Tables for better processing. This technique minimizes potential locking issues and improves processing time. These temporary tables are conceptually similar to Oracle's Global Temporary Tables, but they are of type permanent: they are regular Oracle tables flagged as temporary in the PeopleSoft dictionary tables. The required temporary tables are linked to each AE program, and the required number of instances is also specified for each AE program.


[Screenshot: Temporary Tables properties window for the AE program Bill Finalization (BIIF0001)]

The instance count specified in this window is the limit on the number of temporary-table instances that can be used when multiple instances of the program are run. If more instances of the program are run than the specified count (10 in this example), the additional processes are either abandoned or use the base temporary tables, depending on the Runtime radio-button selection in the window.

Sizing the Dedicated Temporary Tables


Proper sizing of these temporary tables helps improve processing time. A few tips to consider:

1. Create these temporary tables in a separate tablespace and spread the data files across multiple disks to minimize I/O. Hardware disk striping is another way to spread the I/O.
2. Plan on creating them in a locally managed tablespace with a fixed extent size (e.g., 1M or 2M).
3. In some cases, the TRUNCATE command issued from an AE program is converted into a DELETE statement. This happens when there is no commit before the truncate step. Identify such tables and manually truncate them at regular intervals to release the buffer blocks and maximize performance.
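The tips above can be sketched as follows. Tablespace, file, and table names are illustrative assumptions, not delivered PeopleSoft names:

```sql
-- Tips 1 and 2: a locally managed tablespace with fixed 2M extents, dedicated
-- to AE temporary tables, spread over two data files on separate disks.
CREATE TABLESPACE PSTEMP4AE
  DATAFILE '/disk1/ora/pstemp4ae_01.dbf' SIZE 500M,
           '/disk2/ora/pstemp4ae_02.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;

-- Tip 3: manually truncate a dedicated temporary table between runs.
TRUNCATE TABLE PS_INTFC_BI_HTMP;
```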

Create Them as Oracle Global Temporary Tables (GTT) -- Not Advisable for Now

What Are Global Temporary Tables?
Oracle8i introduced global temporary tables, which can be used as temporary processing tables for any batch process. Instances of a global temporary table are created at runtime in the user's temporary tablespace. These tables are session-specific: they are dropped once the session is closed. At creation time, you can choose whether rows are preserved or deleted after a commit.

Some advantages of using Oracle Global Temporary Tables in place of Dedicated Temporary Tables:

1. Reduction in redo.
2. Faster full scans -- the high-water mark is always reset to zero at the start of the process.
3. Faster truncates -- space management occurs inside the temporary segment.
4. Easier table management -- there is no need to create all the temporary-table instances up front; the base table definition is stored once.
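For reference, a global temporary table is declared as sketched below. The table name and columns are illustrative; ON COMMIT PRESERVE ROWS keeps rows across commits for the life of the session, while ON COMMIT DELETE ROWS would clear them at each commit.

```sql
-- Sketch: a session-private global temporary table whose rows survive commits
-- but disappear when the session ends.
CREATE GLOBAL TEMPORARY TABLE PS_EXAMPLE_TMP
  (PROCESS_INSTANCE NUMBER,
   BUSINESS_UNIT    VARCHAR2(5),
   INVOICE          VARCHAR2(22))
  ON COMMIT PRESERVE ROWS;
```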

Some disadvantages of Global Temporary Tables as of Oracle 8.1.7:

1. Table statistics gathered on these tables have no effect; the optimizer treats them as having no statistics. This affects access paths and execution times.
2. These tables are dynamically created in the user's temporary tablespace. Temporary-tablespace sizing should be done properly to avoid runtime errors due to lack of extents.


Can Global Temporary Tables Be Used in Place of Dedicated Temporary Tables?


As of now, PeopleSoft does not provide a script or utility to create Global Temporary Tables, and there is no direct method to designate the dedicated temporary tables as Global Temporary Tables. However, the indirect method explained below allows you to use Global Temporary Tables in place of Dedicated Temporary Tables.

Per our in-house tests, the use of Global Temporary Tables does not help performance beyond improved truncate time. This is mainly because Global Temporary Tables do not support table statistics; PeopleSoft application programs depend on table statistics for good performance, and the lack of statistics hurts execution times. So, until Oracle provides table statistics for Global Temporary Tables, we do not recommend using them in place of Dedicated Temporary Tables.

Another important caution when using Global Temporary Tables is Application Engine's ability to restart. Because Global Temporary Tables lose their data when the session ends, there is no way to restart the program.

If you want to experiment, the indirect method is:

1. In the AE program's properties window, click the Temporary Tables tab.
2. Set the Instance Count to 0.
3. Select the Continue radio button for the Run Time settings.
4. Generate the script to create the temporary tables.
5. Change the script to create the tables as Global Temporary Tables, making the necessary syntax changes.
6. Create the Global Temporary Tables with the modified script.
7. When multiple runs of the same program occur, AE looks for temporary-table instances and fails to find them because of the setting above; it then continues using the base temporary table, which is now a Global Temporary Table instance at runtime.
8. Run this experiment in a demo database before using it in a production environment.
9. Misusing Global Temporary Tables may cause loss of data; do this experiment with the help of experienced DBAs.
10. Use them with caution.


TABLESPACE SELECTION
As of Oracle8i there are various types of tablespaces to use. Tablespaces can be created in multiple ways, and each type is good for a specific purpose, which can make it confusing to choose the right type for each requirement. Though there are multiple options available when creating a tablespace, only certain combinations of those options are valid. The following illustration and table give the recommended use of the various combinations.


Tablespace Types

Datafile Based
    Permanent Type
        Dictionary Managed -- user-defined extent size
        Locally Managed -- Auto Allocate or Uniform Extent Size
    Temporary Type
        Dictionary Managed (TEMPORARY keyword)
Tempfile Based
    Temporary Type
        Locally Managed -- Uniform Extent Size


Sample Name        Tablespace Type                                          PeopleSoft Objects
-----------------  -------------------------------------------------------  -------------------------------------------
TS_PERM_DICT       Datafile based, regular tablespace, dictionary managed   SYSTEM tablespace in Oracle8i (from Oracle
                                                                            9.2 onwards the SYSTEM tablespace can be
                                                                            created as locally managed)
TS_PERM_LOC_AUTO   Datafile based, regular tablespace, locally managed,     All the data tables and indexes
                   auto allocate
TS_PERM_LOC_UNI    Datafile based, regular tablespace, locally managed,     Rollback tablespace, temporary tables,
                   uniform extent                                           data tables and indexes
TS_PERM_DICT_TEMP  Datafile based, regular tablespace, dictionary managed,  NOT RECOMMENDED TO USE
                   TEMPORARY type
TS_TEMP_LOC_UNI    Tempfile based, temporary tablespace, locally managed,   Users' default temporary tablespace
                   uniform extent

Read the Oracle documentation for a detailed understanding of each option.

Dictionary-Managed Tablespaces
These are the regular tablespaces and are datafile-based. Extent management is done at the dictionary level. User-defined extent management is allowed for each object created in such a tablespace. Sample syntax:

CREATE TABLESPACE TS_PERM_DICT
  DATAFILE '/perm/ora/ts_perm_dict.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0);

Locally Managed Tablespaces


These are newly introduced in Oracle8i. Extent management is done at the datafile/tempfile level using bitmaps. A storage clause is not required for objects in these tablespaces.


Advantages of Locally Managed


Reduced recursive space management.
Reduced contention on data dictionary tables.
No rollback generated.
No coalescing required.

Space Management
Free extents are recorded in a bitmap (so some part of the tablespace is set aside for the bitmap).
Each bit corresponds to a block or group of blocks.
The bit value indicates whether the blocks are free or used.
Common views used are DBA_EXTENTS and DBA_FREE_SPACE.
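The views above can be used to check free space before resizing or adding files. A minimal sketch of a per-tablespace free-space summary:

```sql
-- Summarize free space per tablespace from DBA_FREE_SPACE.
SELECT tablespace_name,
       COUNT(*)             AS free_extents,
       SUM(bytes)/1024/1024 AS free_mb,
       MAX(bytes)/1024/1024 AS largest_free_mb
  FROM dba_free_space
 GROUP BY tablespace_name
 ORDER BY tablespace_name;
```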

Locally Managed - AUTO ALLOCATE


With this option, the extent size is managed by the system depending on the table volume. This is a preferable method if the tablespace holds tables of widely varying sizes.

CREATE TABLESPACE TS_PERM_LOC_AUTO
  DATAFILE '/perm/ora/ts_perm_loc_auto.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

With this option, extent-size allocation is done by the system, so it is not possible to predict the extent size for each table, which makes capacity planning difficult. If you want predictable extent sizes, you should not use AUTOALLOCATE.

Locally Managed - UNIFORM EXTENT


With this option, the size of each extent is fixed to the specified size. Make sure to specify an appropriate size so tables are not created with an excessive number of extents.

CREATE TABLESPACE TS_PERM_LOC_UNI
  DATAFILE '/perm/ora/ts_perm_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;

Uniform extents give the best predictability and consistency. A consistent extent size eliminates tablespace wasted as "holes" and makes capacity planning easier for the DBA. Use this as the preferred method of extent management for all tablespaces. Proper planning should be done to determine the optimum extent size. Plan on creating tablespaces in different categories -- such as small, medium, and large -- with different uniform extent sizes, and place each table in the appropriate tablespace depending on its size.
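The small/medium/large approach can be sketched as below. All names, file sizes, and extent sizes are illustrative assumptions to be adjusted to your own volumes:

```sql
-- Category tablespaces with different uniform extent sizes; place each table
-- according to its expected volume.
CREATE TABLESPACE TS_SMALL
  DATAFILE '/perm/ora/ts_small.dbf' SIZE 200M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE TS_MEDIUM
  DATAFILE '/perm/ora/ts_medium.dbf' SIZE 1000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE TS_LARGE
  DATAFILE '/perm/ora/ts_large.dbf' SIZE 4000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;
```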


Temporary Tablespaces
Every database user should be assigned a default temporary tablespace to handle data sorts. Although in Oracle8i it is possible to assign a regular tablespace as a temporary tablespace, it is advisable to use one of the following types for better management of temporary segments. Starting with Oracle9i, a regular tablespace cannot be assigned as the temporary tablespace; an error is raised when the assigned tablespace is not a true Oracle temporary tablespace.

Datafile-Based
These are regular tablespaces with the additional keyword TEMPORARY at the end of the command. Such tablespaces can be used only for temporary segments, which also ensures that permanent objects are not created in them by accident.

CREATE TABLESPACE TS_PERM_DICT_TEMP
  DATAFILE '/perm/ora/ts_perm_dict_temp.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 250K NEXT 500K PCTINCREASE 0)
  TEMPORARY;

Tempfile-Based
Oracle introduced this new type, which uses a tempfile instead of a datafile. It should be the preferred method for any temporary tablespace, as it gives better extent and space management than the datafile-based type. In this type of tablespace, only locally managed UNIFORM extent management is allowed.

CREATE TEMPORARY TABLESPACE TS_TEMP_LOC_UNI
  TEMPFILE '/temp/ora/ts_temp_loc_uni.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 500K;

Use this as a preferred method for the temporary tablespace.

INDEX VALIDATION
PeopleSoft-supplied indexes are generic by nature; the need for indexes varies with each customer's business needs and data composition. The following tips will help the DBA manage indexes efficiently.

Index Maintenance Tips


1. Run the Oracle trace/TKPROF report for a process and check the access paths to determine the usage of indexes. Depending on the data distribution, you may want to reorder the columns of a composite index to make the high-selectivity column the leading column of the index. E.g.: change the index with the column order (BUSINESS_UNIT, INVOICE) to (INVOICE, BUSINESS_UNIT).

Caution: Sufficient research and testing by an experienced DBA is required prior to making any such changes in a production environment. A poor choice could be fatal to performance.

2. As of Oracle9i, the new INDEX SKIP SCAN access path will help to use the INVOICE column even when that column is second in the index order. It may not be necessary to flip the index order in such cases.

3. Consider adding additional indexes depending on your processing needs.

4. Review the index-recommendation document supplied by the product to see if any of the suggestions apply to your installation.

5. Examine the available indexes and remove any unused indexes to boost the performance of INSERT/UPDATE/DELETE operations. Sometimes an index that is unused in a batch process may be useful for an online page. Do a thorough analysis before deleting an index; it may impact another program.

6. Indexes tend to fragment more frequently than tables. Rebuild the indexes frequently to boost index performance.

Copyright PeopleSoft Corporation 2001. All rights reserved.
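One way to identify unused indexes before dropping them (step 5 above) is the index-monitoring feature introduced in Oracle9i. A sketch, assuming Oracle9i or later and the index PS0CUSTOMER from the next section:

```sql
-- Turn on usage monitoring for a candidate index (Oracle9i and later).
ALTER INDEX PS0CUSTOMER MONITORING USAGE;

-- ... run a representative batch and online workload ...

-- USED = 'NO' after a full workload cycle suggests a drop candidate.
SELECT INDEX_NAME, MONITORING, USED, START_MONITORING
FROM   V$OBJECT_USAGE
WHERE  INDEX_NAME = 'PS0CUSTOMER';

-- Turn monitoring back off.
ALTER INDEX PS0CUSTOMER NOMONITORING USAGE;
```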

Function-Based Indexes
A function-based index is an index on an expression, such as an arithmetic expression or an expression containing a package function.

Test case: Table PS_CUSTOMER has an index PS0CUSTOMER with NAME1 as the leading column.

SQL> SELECT SETID,CUST_ID,NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'Adventure%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

Uses index PS0CUSTOMER and returns the result quickly.

SQL> SELECT SETID,CUST_ID,NAME1 FROM PS_CUSTOMER WHERE NAME1 LIKE 'ADVENTURE%';

No rows selected

Also uses index PS0CUSTOMER and returns quickly, but gives no rows.

If data is stored in mixed case, as in the above example, the only way to get the result is to use the function UPPER:

SQL> SELECT SETID,CUST_ID,NAME1 FROM PS_CUSTOMER WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

This does not use the PS0CUSTOMER index and takes a longer time to return. In such cases, a function-based index is useful:

CREATE INDEX PSFCUSTOMER ON PS_CUSTOMER (UPPER(NAME1));

SQL> SELECT SETID,CUST_ID,NAME1 FROM PS_CUSTOMER WHERE UPPER(NAME1) LIKE 'ADVENTURE%';

SETID CUST_ID         NAME1
----- --------------- -----------------
SHARE 1008            Adventure 54

Uses the PSFCUSTOMER index and returns the result quickly.
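Note that only the cost-based optimizer considers function-based indexes, and in Oracle8i the session must also enable query rewrite. A sketch of the settings commonly required (verify against your Oracle release; PSFT is a hypothetical schema owner):

```sql
-- In Oracle8i the creating user needs the QUERY REWRITE privilege.
GRANT QUERY REWRITE TO PSFT;

-- Session settings typically required in Oracle8i for the optimizer
-- to use a function-based index.
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

-- Statistics are required, since only the cost-based optimizer
-- considers function-based indexes.
ANALYZE TABLE PS_CUSTOMER COMPUTE STATISTICS;
```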

Key Compression
Beginning with Oracle8i, the index COMPRESS option enables key compression, which eliminates repeated occurrences of key-column values and may substantially reduce storage. It is applicable to B-tree indexes and index-organized tables (IOTs).

How does key compression work?


Compression works by splitting the index key into two parts: a prefix and a suffix. Use an integer to specify the prefix length (the number of prefix columns to compress). If you specify the COMPRESS option without a number, Oracle takes all the columns minus the last one as the prefix. The maximum columns allowed: for a non-unique index, compression can be applied to all the columns; for a unique index, compression can be applied to all the columns minus one. The prefix part is the common part, whereas the suffix is the unique part of the key. Each prefix is then shared among all its suffixes. This allows more keys to be loaded into each block, which increases index-access performance by limiting the number of blocks accessed.

Negative Impact
Key compression is applied block by block, and only at the leaf level. Index-scan performance can decrease slightly because Oracle must reconstruct the full key from the corresponding <prefix, suffix> pair.

Copyright PeopleSoft Corporation 2001. All rights reserved.

17

Oracle can compress non-unique indexes (including single-column ones) and unique indexes of at least two columns; a single-column unique index cannot be compressed. You cannot specify COMPRESS for a bitmap index.

Example showing index space savings of about 18%:

The following example shows the basic steps to create a COMPRESS index and a comparison with a regular index. Savings depend on the data composition and vary for each installation.

SQL> select count(*) from ps_customer;

  COUNT(*)
----------
    300165

Elapsed: 00:00:00.53

===========================
CREATE REGULAR UNIQUE INDEX
===========================

SQL> drop index ps_customer;
Index dropped.
Elapsed: 00:00:00.61

SQL> create unique index ps_customer on ps_customer(setid, cust_id) tablespace psindex;
Index created.
Elapsed: 00:00:14.82

SQL> select index_name, uniqueness, compression from user_indexes
     where index_name = 'PS_CUSTOMER';

INDEX_NAME      UNIQUENESS COMPRESSION
--------------- ---------- -----------
PS_CUSTOMER     UNIQUE     DISABLED

Elapsed: 00:00:00.14

SQL> analyze index ps_customer validate structure;
Index analyzed.
Elapsed: 00:00:00.69

SQL> select name, used_space from index_stats where name like 'PS_CUSTOMER';

NAME            USED_SPACE
--------------- ----------
PS_CUSTOMER        9640794

Elapsed: 00:00:00.10

SQL> select cust_id, setid, name1 from ps_customer
     where setid='SHARE' and cust_id='GA_000000000002';

CUST_ID           SETID    NAME1
----------------- -------- ---------------
GA_000000000002   SHARE    GA Customer 1

Elapsed: 00:00:00.05

===================================
CREATE KEY COMPRESSION UNIQUE INDEX
===================================

SQL> drop index ps_customer;
Index dropped.
Elapsed: 00:00:00.56

SQL> create unique index ps_customer on ps_customer(setid, cust_id) compress 1 tablespace psindex;
Index created.
Elapsed: 00:00:12.81

SQL> select index_name, uniqueness, compression from user_indexes
     where index_name = 'PS_CUSTOMER';

INDEX_NAME      UNIQUENESS COMPRESSION
--------------- ---------- -----------
PS_CUSTOMER     UNIQUE     ENABLED

Elapsed: 00:00:00.14

SQL> analyze index ps_customer validate structure;
Index analyzed.
Elapsed: 00:00:00.80

SQL> select name, used_space from index_stats where name like 'PS_CUSTOMER';

NAME            USED_SPACE
--------------- ----------
PS_CUSTOMER        7833191

Elapsed: 00:00:00.09

SQL> select cust_id, setid, name1 from ps_customer
     where setid='SHARE' and cust_id='GA_000000000002';

CUST_ID           SETID    NAME1
----------------- -------- ---------------
GA_000000000002   SHARE    GA Customer 1

Elapsed: 00:00:00.05

STORED OUTLINES
What Are Stored Outlines?
Oracle introduced outlines in Oracle8i to allow you to have a pre-defined execution plan for a SQL statement. Plan consistency can then be provided without changing the actual SQL. An outline is nothing more than a stored execution plan that Oracle uses rather than computing a new plan based on current table statistics. Before you can use outlines, you must record some. You can record outlines for a single statement, for all statements issued by a single session, or for all statements issued to an instance.

When to Use Outlines


For scenarios where a poorly performing SQL statement cannot be tuned without adding a hint, and it is not possible to modify the query to add the hint (as with PSQUERY), outlines come in handy. Identify the problem query and tune the statement outside the application with the help of hints. Using outlines, the execution plan of the original query can then be replaced with that of the tuned query without modifying the query itself.

Using Outlines to Swap Execution Plans


The use of stored outlines within this document is to swap execution plans for SQL that a person cannot modify. For example, since we cannot add hints in PSQUERY, this document explains how we can give that query the execution plan of the hinted version. This is not supported by Oracle, but is documented and proven to work in Oracle Doc ID: Note:92202.1.

DBA Tasks
alter system set use_stored_outlines = true;
grant create any outline to <tuner userid>;
grant alter any outline to <tuner userid>;
grant drop any outline to <tuner userid>;
grant alter system to <tuner userid>;
grant select, update, delete on outln.ol$ to <tuner userid>;
grant select, update, delete on outln.ol$hints to <tuner userid>;


Tuner Tasks
A) View existing outlines

select ol_name from outln.ol$ order by timestamp;

1) Capture the outline of a SQL statement

alter system set create_stored_outlines = true;
-- run the SQL statement (e.g. PS Query with a 2-tier connection)
alter system set create_stored_outlines = false;

Note: The SQL statement does not have to run to completion before turning off the creation of stored outlines. Only the parsing of the statement must complete to get the outline. It is recommended to kill the SQL statement after parsing is complete, to avoid taxing the database and creating an unmanageable number of outlines for other SQL statements running in the system at the time.

2) Isolate the outline of the SQL statement from the outlines of other running statements

select ol_name, sql_text from outln.ol$ where ol_name like 'SYS%';
-- manually scan the rows of newly created outlines for the SQL statement
alter outline <system-generated outline name for the SQL statement> rename to <query name>_ORIG;
select 'drop outline ' || ol_name || ';' from outln.ol$ where ol_name like 'SYS%';
-- run the output of the statement above to drop the outlines of other SQL
-- statements that were running during the outline-creation phase

3) Manually create an outline for the tuned SQL statement

create outline <query name> on <tuned SQL statement>;

4) Swap the execution plan of the original outline with the tuned outline

select ol_name, hintcount from outln.ol$
where ol_name in ('<tuned outline name>', '<original outline name>');
update outln.ol$ set ol_name = 'TO_DEL',
  hintcount = <hintcount of original outline returned above>
where ol_name = '<tuned outline name>';
update outln.ol$hints set ol_name = 'TO_DEL'
where ol_name = '<original outline name>';
drop outline TO_DEL;
update outln.ol$ set ol_name = '<tuned outline name>',
  hintcount = <hintcount of tuned statement>
where ol_name = '<original outline name>';

5) Test the outline

alter system flush shared_pool;
select hash_value from outln.ol$ where ol_name = '<outline name>';
-- run the SQL statement (e.g. PS Query in 2-, 3-, or 4-tier)

select a.outline_category, a.hash_value, a.first_load_time, a.loads,
       a.executions, a.optimizer_cost, b.username
from v$sql a, all_users b
where a.parsing_user_id = b.user_id
and a.hash_value = <hash value returned above>
order by first_load_time desc;
-- (a non-NULL outline_category above means the outline is being used)

6) Copy the outline to another environment

set long 10000;
copy to <userid>/<password>@<dbname> insert outln.ol$ using
  select * from outln.ol$ where ol_name = '<outline name>';
copy to <userid>/<password>@<dbname> insert outln.ol$hints using
  select * from outln.ol$hints where ol_name = '<outline name>';

TABLE/INDEX PARTITIONING
What Is Partitioning?
Partitioning addresses the key problem of supporting very large tables and indexes by allowing you to decompose them into smaller and more manageable pieces called partitions. Once partitions are defined, SQL statements can access and manipulate the partitions rather than entire tables or indexes.

Partitioning Methods
There are three basic partitioning methods available:


Range Partitioning
Data can be divided on the basis of ranges of column values. E.g.: PS_LEDGER by FISCAL_YEAR; PS_GP_RSLT_ACUM by EMPLID.

CREATE TABLE PS_GP_RSLT_ACUM (EMPLID, CAL_RUN_ID, ...)
STORAGE (INITIAL 500M NEXT 500M)
PARTITION BY RANGE (EMPLID)
(PARTITION GPACUM1 VALUES LESS THAN ('GP0101') TABLESPACE PSTABLE,
 PARTITION GPACUM2 VALUES LESS THAN ('GP0201') TABLESPACE PSTABLE,
 ...
 PARTITION GPACUM8 VALUES LESS THAN ('GP0801') TABLESPACE PSTABLE)

Hash Partitioning
Data is distributed evenly through a hashing function. Hash partitioning is useful for tables where there is no appropriate range key.
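A sketch of hash partitioning, reusing the table and column names from the range example above (the partition count and tablespace are illustrative):

```sql
-- Distribute rows evenly across 8 partitions by hashing EMPLID.
-- A power-of-2 partition count helps the hash function spread rows evenly.
CREATE TABLE PS_GP_RSLT_ACUM
(EMPLID VARCHAR2(11), CAL_RUN_ID VARCHAR2(18) /* , ... */)
PARTITION BY HASH (EMPLID)
PARTITIONS 8
STORE IN (PSTABLE);
```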

Composite Partitioning
Composite partitioning is a combination of range and hash partitioning. It uses range partitioning to distribute the data into ranges and hash partitioning to divide each range into sub-partitions.
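A sketch of composite (range-hash) partitioning under the same illustrative table; the choice of CAL_RUN_ID as the sub-partitioning key is an assumption for the example:

```sql
-- Range partitions by EMPLID, each divided into 4 hash sub-partitions
-- on CAL_RUN_ID.
CREATE TABLE PS_GP_RSLT_ACUM
(EMPLID VARCHAR2(11), CAL_RUN_ID VARCHAR2(18) /* , ... */)
PARTITION BY RANGE (EMPLID)
SUBPARTITION BY HASH (CAL_RUN_ID) SUBPARTITIONS 4
STORE IN (PSTABLE)
(PARTITION GPACUM1 VALUES LESS THAN ('GP0101'),
 PARTITION GPACUM2 VALUES LESS THAN ('GP0201'));
```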

Partitioned Indexes
In addition to table partitioning, indexes on partitioned tables can also be partitioned. Oracle supports two types of index partitioning.

LOCAL Index
A local index is equipartitioned with its underlying table. That is, the index has the same number of partitions and partition keys as the base table. Eg:

CREATE UNIQUE INDEX PS_GP_RSLT_ACUM ON PS_GP_RSLT_ACUM (EMPLID, CAL_RUN_ID, ...)
STORAGE (INITIAL 500M NEXT 500M)
LOCAL (PARTITION GPACUM1 TABLESPACE PSINDEX,
       PARTITION GPACUM2 TABLESPACE PSINDEX,
       ...,
       PARTITION GPACUM8 TABLESPACE PSINDEX)

GLOBAL Index
A global index may or may not be partitioned. If it is partitioned, it is typically not equipartitioned with the base table.
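A sketch of a global partitioned index on the same illustrative table; the index name and partition bounds are hypothetical. Note that the highest partition of a global index must be bounded by MAXVALUE:

```sql
-- A global index range-partitioned by its own key, independent of
-- the table's partitioning scheme.
CREATE INDEX PS1GP_RSLT_ACUM ON PS_GP_RSLT_ACUM (CAL_RUN_ID)
GLOBAL PARTITION BY RANGE (CAL_RUN_ID)
(PARTITION CALRUN1 VALUES LESS THAN ('M')      TABLESPACE PSINDEX,
 PARTITION CALRUN2 VALUES LESS THAN (MAXVALUE) TABLESPACE PSINDEX);
```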

Advantages of Partitioning
Partitioning improves the availability and manageability of large tables and helps DBAs perform administrative tasks on one partition without affecting the other partitions. It also helps SQL statements deal with fewer rows scanned, improving performance. When running PeopleSoft batch processes in parallel, you can reduce I/O contention by isolating each job stream in its own partition on large, high-volume transaction tables and carefully managing the placement of the partitioned datafiles. You are also likely to see huge performance gains on queries that perform full table scans: when the table involved is properly partitioned, the query only needs to perform a full scan on a single partition rather than the entire table.

ROLLBACK SEGMENTS FOR BATCH AND ONLINE


Managing Rollback Segments is always challenging. Due to the varying size requirements for online and batch operations, it is necessary to manage two sets of rollback segments.

Rule of thumb:

Online - Have many small rollback segments
Batch  - Have few large rollback segments

The preceding rule, while valid, may not be practical for the DBA to implement in an environment where online and batch activity happen at the same time. One may create many small rollback segments and a few large rollback segments in the database, and a specific large rollback segment can be allocated to a batch process using "SET TRANSACTION USE ROLLBACK SEGMENT RBSLARGE". The practical problem is truly dedicating the large rollback segment to the batch process only, since other online transactions may also use the large segment. The only way to dedicate the large segments to the batch process is to run the process when no online transactions are running. So, a DBA should make a fair assessment of the requirement to run batch and online processes simultaneously and size the rollback segments accordingly. The following are a few generic guidelines:

Online
If the batch processes are not run when the online transactions are running, then the following setup may be useful. Example:
RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RBL1 - Offline
RBL2 - Offline

RB01 - RB06 are smaller rollback segments. RBL1 - RBL2 are larger rollback segments.

If the online transactions run along with batch processes, then the following setup may be useful. Example:

RB01 - Online
RB02 - Online
RB03 - Online
RB04 - Online
RB05 - Online
RB06 - Online
RB07 - Online
RB08 - Online

RB01 - RB08 are medium-sized rollback segments that support both online and batch processes.

Batch
If the batch process can be run when no online transactions are running, then dedicating the large rollback segment to the process will help; but this may not be practical when multiple jobs of the same process are run. The better option in such cases is to bring the required large rollback segments online and take the small rollback segments offline before running the batch processes. The following examples give some guidelines for assigning a large rollback segment to a process.
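Creating the large batch segment and switching segments around a batch window can be sketched as follows (the tablespace name RBSBIG and all sizes are illustrative):

```sql
-- Create one large rollback segment for batch work.
CREATE ROLLBACK SEGMENT RBLARGE
TABLESPACE RBSBIG
STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 2 OPTIMAL 200M);

-- Before the batch window: bring the large segment online and
-- take the small online segments offline.
ALTER ROLLBACK SEGMENT RBLARGE ONLINE;
ALTER ROLLBACK SEGMENT RB01 OFFLINE;
ALTER ROLLBACK SEGMENT RB02 OFFLINE;

-- After the batch window: reverse the switch.
ALTER ROLLBACK SEGMENT RBLARGE OFFLINE;
ALTER ROLLBACK SEGMENT RB01 ONLINE;
ALTER ROLLBACK SEGMENT RB02 ONLINE;
```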

SQR/COBOL
If the batch process is an SQR or COBOL program, the program can be changed to issue the following command at the beginning of the process:

SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;

Example: The following code bit should be called at the beginning of an SQR program or after a transaction COMMIT or ROLLBACK.

! --------------------
! - BEGIN CODE BIT
! --------------------
begin-procedure get-large-rollback
begin-sql
SET TRANSACTION USE ROLLBACK SEGMENT RBS_LARGE
end-sql
end-procedure get-large-rollback
! --------------------
! - END CODE BIT
! --------------------

Application Engine
If the batch program is written in Application Engine, then a specific rollback segment can be allocated by adding a step at the beginning of the process with a PeopleCode action. Specify the following code line to achieve that:

%SQLEXEC("SET TRANSACTION USE ROLLBACK SEGMENT RBLARGE;");

PARSES VS. EXECUTES


When a SQL statement is issued that does not exist in the shared pool, it has to be parsed fully: Oracle must allocate memory for the statement from the shared pool, check the statement syntactically and semantically, and so on. This is referred to as a hard parse and is very expensive, both in terms of CPU used and in the number of latch gets performed. A hard parse happens when the Oracle server parses a query and cannot find an exact match for it in the library cache. This occurs due to inefficient sharing of SQL statements and can be improved by using bind variables instead of literals in queries. Sometimes hard parsing causes excessive CPU usage.

The number of hard parses can be identified in a PeopleSoft Application Engine trace (128). In the Oracle trace output, such statements are shown as individual statements, each parsed once. It is somewhat difficult to identify the SQL statements that are hard parsed due to literals instead of bind variables.

Use of Bind Variables


The number of hard parses can be reduced to one per many executes of the same SQL statement by sending the statement with bind variables instead of literals.

Most of the PeopleSoft programs written in Application Engine, SQR, and COBOL have been written to address this issue. In some situations, though, there are steps in an AE process that do not use bind variables; this happens when certain kinds of statements cannot handle bind variables on some platforms. As Oracle deals with bind variables efficiently, such statements can typically be made to use bind variables. The following sections give some guidelines for using bind variables.

Application Engine -- Reuse Flag


PeopleSoft Application Engine programs use bind variables in their SQL statements, but these variables are PeopleSoft-specific: when the statement is passed to the database it is sent with literal values. The only way to tell the Application Engine program to send true bind variables is to set the ReUse flag on the statement that needs them.

Example: Statement in PC_PRICING.BL6100.10000001

UPDATE PS_PC_RATE_RUN_TAO
SET RESOURCE_ID = %Sql(PC_COM_LIT_CHAR,%NEXT(LAST_RESOURCE_ID),1,20,20)
WHERE PROCESS_INSTANCE = %ProcessInstance
AND BUSINESS_UNIT = %Bind(BUSINESS_UNIT)
AND PROJECT_ID = %Bind(PROJECT_ID)
AND ACTIVITY_ID = %Bind(ACTIVITY_ID)
AND RESOURCE_ID = %Bind(RESOURCE_ID)
AND LINE_NO = %Bind(LINE_NO)

Statement without ReUse flag:

AE Trace --

16.46.00 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000498
WHERE PROCESS_INSTANCE = 419
AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1'
AND ACTIVITY_ID = 'ACTIVITYA1'
AND RESOURCE_ID = 'VUS004VA10114050'
AND LINE_NO = 1
/
-- Row(s) affected: 1

SQL Statement: BL6100.10000001.S
  Compile: Count 252, Time 0.6
  Execute: Count 252, Time 1.5
  Fetch:   Count 0,   Time 0.0
  Total Time: 2.1

Oracle Trace Output

********************************************************************************
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000561
WHERE PROCESS_INSTANCE = 419
AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1021'
AND ACTIVITY_ID = 'ACTIVITYA2042'
AND RESOURCE_ID = 'VUS004VA10210124050'
AND LINE_NO = 1

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.01       0.01          0          2          5          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total        2      0.01       0.01          0          2          5          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE PS_PC_RATE_RUN_TAO
      2   INDEX RANGE SCAN (object id 16735)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      1   UPDATE OF 'PS_PC_RATE_RUN_TAO'
      2    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************

Statement with ReUse flag:

AE Trace --

16.57.57 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420
AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3
AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5
AND LINE_NO = :6
/
-- Bind variables:
-- 1) 10000751
-- 2) US004
-- 3) PRICINGA1
-- 4) ACTIVITYA1
-- 5) VUS004VA10114050
-- 6) 1
-- Row(s) affected: 1

SQL Statement: BL6100.10000001.S
  Compile: Count 1,   Time 0.0
  Execute: Count 252, Time 0.4
  Fetch:   Count 0,   Time 0.0
  Total Time: 0.4

Oracle Trace Output

********************************************************************************
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420
AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3
AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5
AND LINE_NO = :6

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute    252      0.22       0.22          0        509       1284        252
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total      253      0.22       0.22          0        509       1284        252

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 21 (PROJ84)

Rows     Row Source Operation
-------  ---------------------------------------------------
    252  UPDATE PS_PC_RATE_RUN_TAO
    504   INDEX RANGE SCAN (object id 16735)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
    252   UPDATE OF 'PS_PC_RATE_RUN_TAO'
    504    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PS_PC_RATE_RUN_TAO' (UNIQUE)
********************************************************************************

With the ReUse flag, the compile count dropped from 252 to 1 and the total AE time for the step from 2.1 to 0.4 seconds.


SQR/COBOL -- CURSOR_SHARING
Most of the SQR and COBOL programs are written to use bind variables. If you find programs that are not using bind variables and you are not able to modify the code, then the CURSOR_SHARING option is the way to go.

Oracle introduced the CURSOR_SHARING parameter in Oracle8i. By default its value is EXACT, meaning the database looks for an exact match of the SQL statement while parsing. The other value that can be set is FORCE: the database looks for a similar statement, ignoring the literal values passed to the SQL statement. It replaces the literals with system-generated bind variables, treats the statements as a single statement, and parses once.

How to Set the CURSOR_SHARING Value

The parameter can be set at the instance level or at the session level.

Instance level: set the following parameter in the init<dbname>.ora file and restart the database.

CURSOR_SHARING = FORCE

Session level: the following syntax sets the value at the session level.

ALTER SESSION SET CURSOR_SHARING = 'FORCE';

Setting the value at the instance level forces the use of bind variables for every statement that runs in the database instance. It may give an improvement due to reduced parsing, but may not be required if the application programs already handle bind values. At the same time, it can hurt the performance of other programs, because column histograms are no longer useful once literals are replaced. Setting the value at the session level is more appropriate. If you identify a program (SQR/COBOL) that is not using bind variables and need to force binds at the database level, then adding the ALTER SESSION command at the beginning of the program is the better option. If you are not willing to change the application program, implementing the session-level command through a trigger gives you more flexibility.

Session level (using a trigger): the following sample trigger code can be used to implement the session-level option.

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7 AND OLD.RUNSTATUS != 7
      AND NEW.PRCSTYPE = 'SQR REPORT' AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURSOR_SHARING=FORCE';
END;
/

Note: Make sure to grant the ALTER SESSION privilege to MYDB to make this trigger work.

Example: SQL statement issued from an SQR/COBOL program:

SELECT ... FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE ...
NOT EXISTS (SELECT 'X' FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = 'US008'
            AND PZI.INV_ITEM_ID = 'PI000021'
            AND ...)
ORDER BY ...

The above statement uses literal values in the WHERE clause, causing a hard parse for each execute. Every hard parse has some performance overhead; minimizing them boosts performance. This statement is executed for every combination of BUSINESS_UNIT and INV_ITEM_ID. Per the data composition used in this benchmark, there were about 13,035 unique combinations of BUSINESS_UNIT and INV_ITEM_ID and about 19,580 total executes.

Oracle TKPROF Output with CURSOR_SHARING=FORCE

SELECT ... FROM PS_PHYSICAL_INV PI, PS_STOR_LOC_INV SLI
WHERE ...
NOT EXISTS (SELECT :SYS_B_09 FROM PS_PICKZON_INV_VW PZI
            WHERE PZI.BUSINESS_UNIT = :SYS_B_10
            AND PZI.INV_ITEM_ID = :SYS_B_11
            AND ...)
ORDER BY ...

Pros and Cons of CURSOR_SHARING

By setting the above parameter at the database level, the overall processing time was reduced significantly.

Overall statistics with no bind variables:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current       rows
------- -------  -------- ---------- ---------- ---------- ----------  ---------
Parse     26389     98.27      99.54          0       1074          0          0
Execute  404647     51.09      50.11       1757     242929     371000      78376
Fetch    517618     47.85      47.43       3027    1455101     235446     189454
------- -------  -------- ---------- ---------- ---------- ----------  ---------
total    948654    197.21     197.08       4784    1699104     606446     267830

Misses in library cache during parse: 13190
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current       rows
------- -------  -------- ---------- ---------- ---------- ----------  ---------
Parse     27118      5.35       5.06          0         49          1          0
Execute   33788      2.42       2.22          0       5577        235        229
Fetch     54988      2.44       2.57          1      97241          0      47621
------- -------  -------- ---------- ---------- ---------- ----------  ---------
total    115894     10.21       9.85          1     102867        236      47850

Misses in library cache during parse: 65

Overall statistics with bind variables:

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current       rows
------- -------  -------- ---------- ---------- ---------- ----------  ---------
Parse     26389     15.44      15.69          0          0          0          0
Execute  404647     44.02      43.51        173     231362     333538      78376
Fetch    517618     45.47      43.02       2784    1439571     235104     189454
------- -------  -------- ---------- ---------- ---------- ----------  ---------
total    948654    104.93     102.22       2957    1670933     568642     267830

Misses in library cache during parse: 64
Misses in library cache during execute: 1

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call      count       cpu    elapsed       disk      query    current       rows
------- -------  -------- ---------- ---------- ---------- ----------  ---------
Parse       356      0.08       0.10          0          0          0          0
Execute     357      0.47       0.48          0       5568        228        228
Fetch       667      0.00       0.01          0       1333          0        552
------- -------  -------- ---------- ---------- ---------- ----------  ---------
total      1380      0.55       0.59          0       6901        228        780

Misses in library cache during parse: 1

From the above trace statistics, it can be seen that the number of library cache misses decreased with the use of bind variables.

Original timing: 197 sec
Time with CURSOR_SHARING option: 102 sec
%Gain: 48%

Parameter: SESSION_CACHED_CURSORS
For processes that use bind variables, but do so with a cursor open/close or a (soft) parse per SQL statement, the Oracle parameter SESSION_CACHED_CURSORS will give some scalability improvements. This is mainly useful for repeating statements issued through PeopleCode using the SQLExec command. SESSION_CACHED_CURSORS is a numeric parameter, which can be set at the instance level or at the session level using the command:

ALTER SESSION SET SESSION_CACHED_CURSORS = NN;

The value NN determines how many cached cursors there can be in your session. To be placed in the session cache, the same statement has to be parsed three times within the same cursor; a pointer to the shared cursor is then added to your session cache. If all session-cache cursors are in use, the least recently used entry is discarded. Depending on the available memory, a value between 10 and 50 can show some performance gains.
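A sketch of enabling the cache and gauging its effect from instance-wide statistics (the value 50 is illustrative):

```sql
-- Enable a per-session cursor cache of 50 entries.
ALTER SESSION SET SESSION_CACHED_CURSORS = 50;

-- Compare cache hits against total parse calls: a high ratio of
-- 'session cursor cache hits' to 'parse count (total)' suggests
-- the cache is absorbing repeated soft parses.
SELECT N.NAME, S.VALUE
FROM   V$SYSSTAT S, V$STATNAME N
WHERE  S.STATISTIC# = N.STATISTIC#
AND    N.NAME IN ('session cursor cache hits',
                  'session cursor cache count',
                  'parse count (total)');
```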

HISTOGRAMS
What Are Histograms?
Cost-based optimization uses data-value histograms to get accurate estimates of the distribution of column data. A histogram partitions the values in the column into bands, so that all column values in a band fall within the same range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans with non-uniform data distributions.

Oracle uses height-balanced histograms (as opposed to width-balanced). Width-balanced histograms divide the data into a fixed number of equal-width ranges and then count the number of values falling into each range. Height-balanced histograms place approximately the same number of values into each range, so that the endpoints of the ranges are determined by how many values fall in each range.

Use of Histograms for PeopleSoft Applications


Histograms can affect performance and should be used only when they substantially improve query plans. In general, you should create histograms on columns that are frequently used in WHERE clauses of queries and have a highly skewed data distribution. For many applications, it is appropriate to create histograms for all indexed columns, because indexed columns typically are the columns most often used in WHERE clauses.

Histograms are persistent objects, so there is a maintenance and space cost for using them. You should compute histograms only for columns that you know have a highly skewed data distribution. For uniformly distributed data, cost-based optimization can make fairly accurate guesses about the cost of executing a particular statement without the use of histograms.

Histograms, like all other optimizer statistics, are static. They are useful only when they reflect the current data distribution of a given column. (The data in the column can change as long as the distribution remains constant.) If the data distribution of a column changes frequently, you must recompute its histogram frequently.

Histograms are not useful for columns with the following characteristics:

- All predicates on the column use bind variables.
- The column data is uniformly distributed.
- The column is not used in WHERE clauses of queries.
- The column is unique and is used only with equality predicates.

Columns such as PROCESS_INSTANCE and ORD_STATUS are likely candidates to benefit from histograms.
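A quick way to confirm this kind of skew on your own system (a sketch, using the PS_BI_LINE table discussed below) is to count rows per value:

```sql
-- Check the data distribution of PROCESS_INSTANCE in PS_BI_LINE.
-- One value (typically zero) owning almost all rows indicates skew
-- that a histogram can describe to the cost-based optimizer.
SELECT PROCESS_INSTANCE, COUNT(*) AS row_count
  FROM PS_BI_LINE
 GROUP BY PROCESS_INSTANCE
 ORDER BY row_count DESC;
```

If one value holds the bulk of the rows, that is exactly the distribution the uniform-statistics assumption gets wrong.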

Sample: A query whose performance was improved by histogram statistics

Problem Statement:
We observed that the trace files showed full table scans for most of the queries involving the tables PS_BI_HDR, PS_BI_LINE, and PS_BI_LINE_DST. A full table scan on a big table is almost always costly. The following is a sample SQL statement that we found to be inefficient due to a full table scan on PS_BI_LINE, a large-volume key table.
********************************************************************************

UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
      (SELECT DISTINCT INVOICE FROM PS_BI_CURRCONV_TMP
        WHERE PROCESS_INSTANCE = 3698
          AND INVOICE = PS_BI_LINE.INVOICE
          AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
          AND PROCESS_FLG = 'S')
  AND BUSINESS_UNIT = 'FCUSA'
  AND PROCESS_INSTANCE = 3698

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1    303.75     667.66     739444    1630166     340095     300000
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2    303.76     667.67     739444    1630166     340095     300000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows     Execution Plan
-------  --------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      0   UPDATE OF 'PS_BI_LINE'
 300000    FILTER
6000000     TABLE ACCESS GOAL: ANALYZED (FULL) OF 'PS_BI_LINE'
 300000     TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF 'PS_BI_CURRCONV_TMP'
 300000      INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)

********************************************************************************

Copyright PeopleSoft Corporation 2001. All rights reserved.

Recommendation:
This particular SQL statement had to process 100,000 invoices. There were 600,000 rows that qualified to be updated in the PS_BI_LINE table. Using an index to access the table would definitely help the performance of the SQL statement. The existing index PSDBI_LINE is a good candidate; its columns are PROCESS_INSTANCE, BUSINESS_UNIT, and INVOICE. Since the index has PROCESS_INSTANCE as its leading column, it is safe to assume that the index was created for batch performance.

Under Oracle rule-based optimization, the index would be favored to access the table. Unfortunately, that is not readily the case with cost-based optimization: the cost-based optimizer favors the full table scan, which, in this case, is not intended. A full table scan is still chosen by the optimizer even if the usual ANALYZE is run against the index. This is because the ANALYZE command assumes that the distinct values in PROCESS_INSTANCE have equal statistical weights. For example, if no BICURCNV process is executing, the value of PROCESS_INSTANCE in every row of the PS_BI_LINE table is zero. If a BICURCNV process is run, there will be two distinct values in the PROCESS_INSTANCE column: zero, for the majority of the rows in the table, and an assigned process instance number for those rows that will be processed by BICURCNV. If the usual ANALYZE command is then run, the database assumes that 50 percent of the rows in the table contain zero and the other 50 percent contain the assigned process instance number. This is a gross assumption, and because of it the cost-based optimizer favors the full table scan instead of an index scan on PSDBI_LINE.

To correct this discrepancy, we added the FOR COLUMNS option to the ANALYZE command. In effect, we built data distribution information (a histogram) for the PROCESS_INSTANCE column. As a result, the cost-based optimizer was able to make a more informed decision to use the PSDBI_LINE index.
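The commands used take the form shown below, sketched here for PS_BI_LINE (note the ordering: table statistics first, then the column histogram):

```sql
-- Gather table statistics, then build the histogram on the skewed
-- column. Running ANALYZE on the table alone discards histograms,
-- so the FOR COLUMNS step must come second.
ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS;
ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE;
```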


To take advantage of these histograms, be sure to create them on the PROCESS_INSTANCE column of all high-volume tables. The following execution plan shows the improved access path and timings.
********************************************************************************

UPDATE PS_BI_LINE SET CURRENCY_CD_XEU = 'EUR', ...
WHERE INVOICE IN
      (SELECT INVOICE FROM PS_BI_CURRCONV_TMP
        WHERE PROCESS_INSTANCE = 3694
          AND INVOICE = PS_BI_LINE.INVOICE
          AND BUSINESS_UNIT = PS_BI_LINE.BUSINESS_UNIT
          AND PROCESS_FLG = 'S')
  AND BUSINESS_UNIT = 'FCUSA'
  AND PROCESS_INSTANCE = 3694

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.02       0.02          0          0          0          0
Execute      1    121.28     238.28      42701     203395     340093     300000
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2    121.30     238.30      42701     203395     340093     300000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 18 (FSTNAL)

Rows     Execution Plan
-------  --------------------------------------------------
      0  UPDATE STATEMENT GOAL: CHOOSE
      0   UPDATE OF 'PS_BI_LINE'
 300001    INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PSDBI_LINE' (NON-UNIQUE)
 100000     INDEX (RANGE SCAN) OF 'PSABI_CURRCONV_TMP' (UNIQUE)

********************************************************************************

Please note that the access path shown above was the result of incorporating the literal value of PROCESS_INSTANCE. If ReUse is checked, the value of %Bind(PROCESS_INSTANCE) is passed as a bind variable, and a bind variable for the PROCESS_INSTANCE column will not produce the execution plan that favors the PSDBI_LINE index. To make the AE program pass a resolved literal value for the PROCESS_INSTANCE column even when the ReUse flag is checked, write the predicate as follows:

WHERE PROCESS_INSTANCE = %ProcessInstance
or
WHERE PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE, STATIC)

The additional parameter STATIC resolves the literal value of PROCESS_INSTANCE before the query is sent to the database. The same result can be achieved with %ProcessInstance. For additional information on these parameters, refer to the PeopleTools documentation on %Bind and %ProcessInstance.
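Put together, an AE step using this technique might look like the following sketch (statement abridged from the example above; the STATIC keyword is the only addition):

```sql
-- AE step SQL: %Bind(PROCESS_INSTANCE, STATIC) is resolved to a
-- literal before the statement is sent to the database, even when
-- the step's ReUse flag is checked, so column-level histograms on
-- PROCESS_INSTANCE can influence the plan.
UPDATE PS_BI_LINE
   SET CURRENCY_CD_XEU = 'EUR'
 WHERE BUSINESS_UNIT = 'FCUSA'
   AND PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE, STATIC)
```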

Result:
By creating the histogram on PROCESS_INSTANCE for the PS_BI_LINE table, the SQL statement showed a good performance improvement.

Without histogram (seconds)    With histogram (seconds)    % Gain
667                            238                         64%

Creating Histograms
Create histograms on columns that are frequently used in WHERE clauses of queries and that have highly skewed data distributions. To do this, use the GATHER_TABLE_STATS procedure of the DBMS_STATS package. For example, to create a 10-bucket histogram on the SAL column of the EMP table, issue this statement:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS
  ('scott', 'emp', METHOD_OPT => 'FOR COLUMNS SIZE 10 sal');

The SIZE keyword declares the maximum number of buckets for the histogram. You would create a histogram on the SAL column if there were an unusually high number of employees with the same salary and few employees with other salaries. You can also collect histograms for a single partition of a table.

Column statistics appear in the data dictionary views USER_TAB_COLUMNS, ALL_TAB_COLUMNS, and DBA_TAB_COLUMNS. Histograms appear in the data dictionary views USER_HISTOGRAMS, DBA_HISTOGRAMS, and ALL_HISTOGRAMS.

Choosing the Number of Buckets for a Histogram


The default number of buckets for a histogram is 75. This value provides an appropriate level of detail for most data distributions. However, since the number of buckets in the histogram, also referred to as the sampling rate, and the data distribution all affect a histogram's usefulness, you may need to experiment with different numbers of buckets to obtain optimal results. If the number of frequently occurring distinct values in a column is relatively small, set the number of buckets to be greater than the number of frequently occurring distinct values.
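For a column like PROCESS_INSTANCE, where the number of distinct values at any one time is usually small, one approach (a sketch; the SYSADM schema owner is an assumption) is to request the maximum bucket count and let Oracle use only as many buckets as there are distinct values:

```sql
-- SIZE 254 is the maximum bucket count in Oracle 8i/9i. When the
-- column has fewer distinct values than buckets, each frequently
-- occurring value gets its own bucket, per the guideline above.
EXECUTE DBMS_STATS.GATHER_TABLE_STATS
  ('SYSADM', 'PS_BI_LINE',
   METHOD_OPT => 'FOR COLUMNS SIZE 254 PROCESS_INSTANCE');
```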

Viewing Histograms
You can find information about existing histograms in the database using these data dictionary views:

USER_HISTOGRAMS
ALL_HISTOGRAMS
DBA_HISTOGRAMS

Find the number of buckets in each column's histogram in:

USER_TAB_COLUMNS
ALL_TAB_COLUMNS
DBA_TAB_COLUMNS
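For example, to confirm that a PROCESS_INSTANCE histogram exists on PS_BI_LINE and see how many buckets it received (a sketch against the schema owner's own views):

```sql
-- Bucket endpoints of the histogram
SELECT column_name, endpoint_number, endpoint_value
  FROM user_histograms
 WHERE table_name = 'PS_BI_LINE'
   AND column_name = 'PROCESS_INSTANCE';

-- Bucket count as recorded in the column statistics
SELECT column_name, num_distinct, num_buckets
  FROM user_tab_columns
 WHERE table_name = 'PS_BI_LINE'
   AND column_name = 'PROCESS_INSTANCE';
```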


Operational Guidelines for Maintaining Histograms in Oracle


Create the histograms with the following command:

ANALYZE TABLE <Table Name> ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE

To maintain the histogram, be sure to create histograms immediately after analyzing the table.

Caution: When the ANALYZE command is performed on the table, the histogram information is lost. Therefore, ANALYZE <table> must be immediately followed by ANALYZE ... FOR COLUMNS PROCESS_INSTANCE.

The following FAQ can be used as a reference for maintaining histograms:

FAQ on Histograms
1. What are the steps necessary to create the histogram for the PROCESS_INSTANCE column of the PS_BI_LINE table?
   Run the ANALYZE commands in the following order:
   ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS
   ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE

2. How should I create histograms if the table statistics already exist?
   Run the ANALYZE command as follows:
   ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS FOR COLUMNS PROCESS_INSTANCE

3. Can histograms exist without table statistics?
   Yes, but they will not be effective without statistics on the underlying table.

4. How do I delete histograms and keep the table statistics in place?
   Run the ANALYZE command as follows:
   ANALYZE TABLE PS_BI_LINE ESTIMATE STATISTICS

5. How do I delete the statistics on an entire table, including histograms?
   Unless you have compelling reasons to delete the statistics, do not run the command below.
   ANALYZE TABLE PS_BI_LINE DELETE STATISTICS

6. What happens if the table statistics are run after creating histograms?
   Analyzing the table after creating histograms erases all previously created histograms and creates only the table statistics.

7. How often should I run the histogram?


   To maintain histogram information on a specific column like PROCESS_INSTANCE, the ANALYZE ... FOR COLUMNS command must be run as often as ANALYZE <table> is run. See FAQ #1 for details.

8. What is the overhead of running histograms?
   The overhead incurred when creating a histogram is much the same as the overhead of running the typical ANALYZE command for a table. As a rule of thumb, any ANALYZE command should be run during the maintenance window.

9. What is a good source to learn more about Oracle histograms?
   The Oracle Tuning Manual provides details on histograms.

BATCH SERVER SELECTION


The Process Scheduler executes PeopleSoft batch processes. For installations where the Application Server and the Database Server are on different boxes, you must choose the server on which the Process Scheduler runs. Per the PeopleSoft architecture, the Process Scheduler (batch server) can run on either the Application Server or the Database Server.

Scenario 1: Process Scheduler and Application Server on one BOX

[Diagram: Scenario 1. The PeopleSoft Application Server and the Process Scheduler both run on SERVER1; each connects over TCP/IP to the Oracle database on SERVER2, the Database Server.]

Running the Process Scheduler on the Application Server uses a TCP/IP connection to the database. Because a batch process may involve extensive SQL processing, this TCP/IP link can add significant overhead and lengthen processing times. The impact is most evident in processes that do excessive row-by-row processing. For processes where the majority of SQL statements are set-based, the TCP/IP overhead may not be as significant. Have a dedicated network connection between the batch server and the database server to minimize the overhead.

Scenario 2: Process Scheduler and Database Server on one BOX


[Diagram: Scenario 2. The PeopleSoft Application Server runs on SERVER1 and connects over TCP/IP to SERVER2, the Database Server. The Process Scheduler runs on SERVER2 alongside the Oracle database and uses a local connection.]

Running the Process Scheduler on the database server eliminates the TCP/IP overhead and improves processing time. At the same time, it consumes additional memory on the database server.

Set the following value in the Process Scheduler configuration file psprcs.cfg to use a direct connection instead of TCP/IP:

UseLocalOracleDB=1

This kind of setup is useful for programs that do excessive row-by-row processing.

What is the Recommended Scenario?


Considering the performance impact of TCP/IP on row-by-row processing, Scenario 2 is recommended because the connection overhead is eliminated. At the same time, it may not be possible to run extensive batch processes on the database server because of limited server resources. Make a fair judgment based on your environment and usage. A balanced approach may be to set up both scenarios and choose between them depending on the time of the run and the complexity of the process; for example, all nightly jobs can be run using Scenario 2.


Chapter 3 - Capturing Traces


The following are recommendations for capturing traces to identify problems. Be sure to set the values back to zero after capturing the trace.

Note: Running a production environment with these settings will cause performance issues due to the overhead introduced by tracing.

APPLICATION ENGINE TRACE


psprcs.cfg

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace STEP execution sequence to AET file
; 2         - Trace Application SQL statements to AET file
; 4         - Trace Dedicated Temp Table Allocation to AET file
; 8         - not yet allocated
; 16        - not yet allocated
; 32        - not yet allocated
; 64        - not yet allocated
; 128       - Timings Report to AET file
; 256       - Method/BuiltIn detail instead of summary in AET Timings Report
; 512       - not yet allocated
; 1024      - Timings Report to tables
; 2048      - DB optimizer trace to file
; 4096      - DB optimizer trace to tables
;TraceAE=(1+2+128+2048)
TraceAE=2179
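The TraceAE value is simply the sum of the bit values you want enabled; the commented line above shows the arithmetic behind 2179:

```sql
-- 1 (steps) + 2 (SQL) + 128 (timings to file) + 2048 (optimizer trace)
SELECT 1 + 2 + 128 + 2048 AS trace_ae FROM dual;   -- 2179
```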

ONLINE TRACE
psappsrv.cfg

;-------------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - SQL statements
; 2         - SQL statement variables
; 4         - SQL connect, disconnect, commit and rollback
; 8         - Row Fetch (indicates that it occurred, not data)
; 16        - All other API calls except ssb

; 32        - Set Select Buffers (identifies the attributes of columns
;             to be selected)
; 64        - Database API specific calls
; 128       - COBOL statement timings
; 256       - Sybase Bind information
; 512       - Sybase Fetch information
; 4096      - Manager information
; 8192      - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSql=0
TraceSqlMask=12319

;-------------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit       Type of tracing
; ---       ---------------
; 1         - Trace entire program
; 2         - List the program
; 4         - Show assignments to variables
; 8         - Show fetched values
; 16        - Show stack
; 64        - Trace start of programs
; 128       - Trace external function calls
; 256       - Trace internal function calls
; 512       - Show parameter values
; 1024      - Show function return value
; 2048      - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=0
TracePCMask=0

ORACLE TRACE
The following are the settings for capturing an Oracle trace.

Trace at Instance Level:


init<database_name>.ora:

SQL_TRACE = TRUE
TIMED_STATISTICS = TRUE

Trace at Session Level:


ALTER SESSION SET SQL_TRACE = TRUE;

TIMED_STATISTICS = TRUE is required in addition to the trace setting above. If TIMED_STATISTICS is not set at the instance level in the init.ora parameter file, it must also be set for each session along with SQL_TRACE:

ALTER SESSION SET TIMED_STATISTICS = TRUE;
Session Level (using a trigger):

CREATE OR REPLACE TRIGGER MYDB.SET_TRACE_INS6000
BEFORE UPDATE OF RUNSTATUS ON MYDB.PSPRCSRQST
FOR EACH ROW
WHEN (NEW.RUNSTATUS = 7 AND OLD.RUNSTATUS != 7
      AND NEW.PRCSTYPE = 'SQR REPORT' AND NEW.PRCSNAME = 'INS6000')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE=TRUE';
END;
/

Trace for a different session:


In most cases, you may need to trace a program that is already executing. In such cases, the following procedure can be executed from the SQL prompt, passing the SID and serial number of the session you want to trace.

To get the SID and serial number:

select sid, serial#, username from v$session;

To turn on the trace:

exec sys.dbms_system.set_sql_trace_in_session( sid, serial#, TRUE );

To turn off the trace:

exec sys.dbms_system.set_sql_trace_in_session( sid, serial#, FALSE );

Make sure to run "GRANT EXECUTE ON DBMS_SYSTEM TO <user/role>;" before running these commands.
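To narrow v$session down to the batch session you care about, you can filter by the connected user and client program. The filters below are assumptions (your access ID and program name will differ), sketched only to illustrate the approach:

```sql
-- Find the SID and serial# of a suspected batch session.
-- 'SYSADM' and the '%PSAE%' program pattern are illustrative only;
-- substitute your own access ID and the program name you observe.
SELECT sid, serial#, username, program, status
  FROM v$session
 WHERE username = 'SYSADM'
   AND program LIKE '%PSAE%';
```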

TKPROF
Capture the Oracle trace and run TKPROF with the following sort options.

tkprof <trace_input> <trace_output> sys=no explain=<user_id>/<password> sort=exeela,fchela,prscpu,execpu,fchcpu

STATSPACK
What Is STATSPACK?
Tuning a database is not easy; it can take multiple iterations to reach a stable environment. Oracle provides a tool called STATSPACK to gather database information for a given period and report on database health. STATSPACK is a useful tool for reactive tuning.

STATSPACK differs fundamentally from the well-known BSTAT/ESTAT tuning scripts in that it collects more information and stores the performance-statistics data permanently in Oracle tables, which can be used for later reporting and analysis. STATSPACK is a set of SQL scripts and PL/SQL stored procedures and packages for collecting performance statistics. It is available starting with Oracle 8.1.6. It provides more information than the UTLBSTAT/UTLESTAT utilities and automates some operations.

Installing and Using STATSPACK


Installation
1. Check whether you have a TOOLS tablespace on your database; otherwise, create it (minimum size is 35M).

2. Run SQL*Plus and connect as SYSDBA:
   connect / as sysdba

3. To install STATSPACK, run the following script:
   On 8.1.6 on Unix: @?/rdbms/admin/statscre
   On 8.1.6 on NT:   @%ORACLE_HOME%\rdbms\admin\statscre
   On 8.1.7 and 9i on Unix: @?/rdbms/admin/spcreate
   On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\spcreate

Collect statistics
1. Run SQL*Plus and connect as perfstat (default password is perfstat):
   connect perfstat/perfstat

2. To collect statistics, run the following command:
   execute statspack.snap;

Each time the above command is issued, the database information is recorded along with the time. Therefore, the command must be issued twice, before the start of the process and after its completion, to capture the information between the two snapshots.

Generate Report
1. Run SQL*Plus and connect as perfstat (default password is perfstat):
   connect perfstat/perfstat

2. To generate a report, run the following script:
   On 8.1.6 on Unix: @?/rdbms/admin/statsrep
   On 8.1.6 on NT:   @%ORACLE_HOME%\rdbms\admin\statsrep
   On 8.1.7 and 9i on Unix: @?/rdbms/admin/spreport
   On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\spreport

You need to specify the start and end snap IDs to get the report.
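To pick the snap ID pair that brackets your batch run, you can list the available snapshots and their times (a sketch against the default PERFSTAT schema):

```sql
-- Snapshots recorded by statspack.snap, oldest first
SELECT snap_id, snap_time
  FROM perfstat.stats$snapshot
 ORDER BY snap_id;
```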

Uninstall
1. Run SQL*Plus and connect as SYSDBA:
   connect / as sysdba

2. To uninstall STATSPACK, run the following script:
   On 8.1.6 on Unix: @?/rdbms/admin/statsdrp
   On 8.1.6 on NT:   @%ORACLE_HOME%\rdbms\admin\statsdrp
   On 8.1.7 and 9i on Unix: @?/rdbms/admin/spdrop
   On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\spdrop

Clean old statistics


This works only on 8.1.7 and 9i.

1. Run SQL*Plus and connect as perfstat (default password is perfstat):
   connect perfstat/perfstat

2. To clean old statistics, run the following script:
   On 8.1.7 and 9i on Unix: @?/rdbms/admin/sppurge
   On 8.1.7 and 9i on NT:   @%ORACLE_HOME%\rdbms\admin\sppurge


Chapter 4 - Database Tuning and init.ora Parameters

RECOMMENDATIONS
Block Size
Thorough analysis should be done before choosing an appropriate block size at database creation time; the size selected can have a significant performance impact. After creating an Oracle database, you can go back and change just about any parameter EXCEPT DB_BLOCK_SIZE. The only way to change it is to delete everything and start over. Because of the importance of this parameter, choose the value that best suits your needs before you start.

Size Considerations
Small Block Size (2K to 8K)

Pros:
1) Reduces block contention.
2) Good for small numbers of rows.
3) Good for random access.

Cons:
1) Relatively large overhead.
2) Small number of rows per block.
3) Can cause more index blocks to be read.
Larger Block Size (16K)

Pros:
1) Less overhead.
2) Good for sequential access.
3) Good for very large rows.
4) Better performance for index reads.

Cons:
1) Increases block contention.
2) Uses more space in the buffer cache.

Recommended Block Size


The general recommendation for PeopleSoft applications is not less than 8K: 8K for online (OLTP) use and 16K for batch-only situations (DSS). If you run online and batch workloads on the same database, set the value to 8K. Do not set the value below 8K.
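Since the block size cannot be changed without recreating the database, it is worth verifying what an existing database uses before planning batch work:

```sql
-- Current database block size, in bytes (e.g. 8192 for 8K)
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```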

Shared Pool Area


Check GETHITRATIO in V$LIBRARYCACHE (it should be in the high 90s):

select gethitratio from v$librarycache where namespace = 'SQL AREA';

Find out which statements users are running:

select sql_text, users_executing, executions, loads from v$sqlarea;

select * from v$sqltext
 where sql_text like 'select * from scott.s_dept where id = %';

Consider increasing SHARED_POOL_SIZE to improve the ratio.

Data Dictionary Hit Ratio


Keep the ratio of the sum of GETMISSES to the sum of GETS below 15%:

select parameter, gets, getmisses from v$rowcache;

select 1 - (sum(getmisses)/sum(gets)) from v$rowcache;

Consider increasing SHARED_POOL_SIZE to improve the ratio.

Buffer Busy Waits


select name, value from v$sysstat where name = 'free buffer inspected';

This statistic is the number of buffers skipped to find a free buffer. Consider increasing DB_BLOCK_BUFFERS if it shows high or increasing values.

select event, total_waits from v$system_event
 where event in ('free buffer waits', 'buffer busy waits');

Buffer busy waits means that a process has been waiting for a buffer to become available.

Free buffer waits occur after a server process cannot find a free buffer or when the dirty queue is full. Keep in mind that these statistics and events could also indicate that the DBWn process needs tuning.

LRU Latch
Determine the get percentage for the LRU latch:

select name, sleeps/gets "LRU Hit%"
  from v$latch
 where name = 'cache buffers lru chain';

If the hit percentage for the LRU latch is less than 99%, increase the number of LRU latches by setting the parameter DB_BLOCK_LRU_LATCHES. Remember, the maximum number of latches is the lower of (number of CPUs x 2 x 3) and (number of buffers / 50).

Log Buffer
There should be no log buffer space waits:

select sid, event, seconds_in_wait, state
  from v$session_wait
 where event = 'log buffer space';

If some time was spent waiting for space in the redo log buffer, consider increasing LOG_BUFFER or moving the log files to faster disks, such as striped disks.

The redo buffer allocation retries value should be near 0; the number should be less than 1% of redo entries:

select name, value from v$sysstat
 where name in ('redo buffer allocation retries', 'redo entries');

If necessary, increase LOG_BUFFER (until the ratio is stable) or improve the checkpointing or archiving process. Keep in mind that a modest increase can significantly enhance throughput, and that the LOG_BUFFER size must be a multiple of the operating system block size.

Tablespace I/O
Reserve the SYSTEM tablespace for data dictionary objects.
Create locally managed tablespaces to avoid space management issues.
Split tables and indexes into separate tablespaces.
Create separate rollback tablespaces.
Store very large database objects in their own tablespace.
Create one or more temporary tablespaces.

Full Table Scans


Investigate the need for full table scans.
Specify DB_FILE_MULTIBLOCK_READ_COUNT (8 is the default). Monitor long-running full table scans with the v$session_longops view:

select sid, serial#, opname,
       to_char(start_time, 'HH24:MI:SS') as "START",
       (sofar/totalwork)*100 as percent_complete
  from v$session_longops;

select name, value from v$sysstat where name like '%table scans%';

Checkpoints
Size the online redo log files to cut down the number of checkpoints. Add online redo log groups to increase the time before LGWR starts to overwrite. Regulate checkpoints with these initialization parameters:

FAST_START_IO_TARGET
LOG_CHECKPOINT_INTERVAL
LOG_CHECKPOINT_TIMEOUT
DB_BLOCK_MAX_DIRTY_TARGET
LOG_CHECKPOINTS_TO_ALERT
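As an illustration only (the values are assumptions to be sized for your own redo volume, not recommendations), an init.ora fragment regulating checkpoints might look like:

```
# Interval larger than any redo log => checkpoint only at log switch
LOG_CHECKPOINT_INTERVAL = 999999999
# No time-based checkpoints (0 disables the timeout trigger)
LOG_CHECKPOINT_TIMEOUT = 0
# Record checkpoint begin/end in the alert log for monitoring
LOG_CHECKPOINTS_TO_ALERT = TRUE
```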

Dynamic Allocation of Extents


To display segments with less than 10% free blocks:

select owner, table_name, blocks, empty_blocks
  from dba_tables
 where empty_blocks / (blocks + empty_blocks) < .1;

To avoid dynamic allocation:

alter table hr.emp allocate extent;

Create locally managed tablespaces!

PCTFREE/PCTUSED
PCTFREE:
1) Default 10.
2) Zero if no update activity.
3) PCTFREE = 100 x upd / (upd + ins)

PCTUSED:
1) Default 40.
2) Set if rows are deleted.
3) PCTUSED = 100 - PCTFREE - 100 x rows x (ins + upd) / blocksize

Note: upd is the average amount added by updates, in bytes; ins is the average initial row length at insert; rows is the number of rows to be deleted before free list maintenance occurs.

Watch out for migration and chaining!

analyze table sales.order_hist compute statistics;

select num_rows, chain_cnt from dba_tables
 where table_name = 'ORDER_HIST';

analyze table sales.order_hist list chained rows;
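Plugging hypothetical numbers into the PCTFREE/PCTUSED formulas above (ins = 100 bytes initial row length, upd = 25 bytes added by updates, 10 rows deleted before free list maintenance, 8192-byte block; all values are illustrative assumptions):

```sql
-- PCTFREE = 100 x 25 / (25 + 100) = 20
SELECT ROUND(100 * 25 / (25 + 100)) AS pctfree_calc FROM dual;

-- PCTUSED = 100 - 20 - 100 x 10 x (100 + 25) / 8192, roughly 65
SELECT ROUND(100 - 20 - 100 * 10 * (100 + 25) / 8192) AS pctused_calc FROM dual;
```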

select owner_name, table_name, head_rowid
  from chained_rows
 where table_name = 'ORDER_HIST';

(For Oracle 8i, use ALTER TABLE ... MOVE instead of the technique using the previous two commands.)

Rebuilding Indexes
analyze index acct_no_idx validate structure;

select (del_lf_rows_len/lf_rows_len) * 100 as index_usage
  from index_stats;

(index_usage represents the percentage of rows deleted. If it is greater than 10%, consider rebuilding.)

alter index acct_no_idx rebuild;

Sorting
Set SORT_AREA_SIZE and SORT_MULTIBLOCK_READ_COUNT (which forces the sort to read a larger section of each run into memory during a merge pass) appropriately. Two to three megabytes for SORT_AREA_SIZE is not implausible for a data warehouse. Avoid sort operations whenever possible. Reduce swapping and paging by ensuring that sorting is done in memory where possible. Reduce space allocation calls by allocating temporary space appropriately.

select disk.value "Disk", mem.value "Mem",
       (disk.value/mem.value) * 100 "Ratio"
  from v$sysstat mem, v$sysstat disk
 where mem.name = 'sorts (memory)'
   and disk.name = 'sorts (disk)';

The ratio of disk sorts to memory sorts should be less than 5%. Adjust SORT_AREA_SIZE if necessary.

select tablespace_name, current_users, total_extents, used_extents,
       extent_hits, max_used_blocks, max_sort_blocks
  from v$sort_segment;


Appendix A - Special Notices


All material contained in this documentation is proprietary and confidential to PeopleSoft, Inc., is protected by copyright laws, and subject to the nondisclosure provisions of the applicable PeopleSoft agreement. No part of this documentation may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including, but not limited to, electronic, graphic, mechanical, photocopying, recording, or otherwise without the prior written permission of PeopleSoft, Inc. This documentation is subject to change without notice, and PeopleSoft, Inc. does not warrant that the material contained in this documentation is free of errors. Any errors found in this document should be reported to PeopleSoft, Inc. in writing. The copyrighted software that accompanies this documentation is licensed for use only in strict accordance with the applicable license agreement, which should be read carefully as it governs the terms of use of the software and this documentation, including the disclosure thereof. See Customer Connection or PeopleBooks for more information about what publications are considered to be product documentation. PeopleSoft, the PeopleSoft logo, PeopleTools, PS/nVision, PeopleCode, PeopleBooks, and Vantive are registered trademarks, and PeopleTalk and "People power the internet." are trademarks of PeopleSoft, Inc. All other company and product names may be trademarks of their respective owners. The information contained herein is subject to change without notice. Information in this book was developed in conjunction with use of the product specified, and is limited in application to those specific hardware and software products and levels. PeopleSoft may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. The information contained in this document has not been submitted to any formal PeopleSoft test and is distributed AS IS. 
The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While PeopleSoft may have reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.


Appendix B - Validation and Feedback


This section documents the real-world validation that this Red Paper has received.

CUSTOMER VALIDATION
PeopleSoft is working with PeopleSoft customers to get feedback and validation on this document. Lessons learned from these customer experiences will be posted here.

FIELD VALIDATION
PeopleSoft Consulting has provided feedback and validation on this document. Additional lessons learned from field experience will be posted here.


Appendix C - References
1. PeopleSoft Installation Guide - Oracle Tuning chapter
2. http://technet.Oracle.com
3. http://www.Oracle.com/oramag/
4. http://metalink.Oracle.com
5. http://www.ixora.com.au
6. http://www.dbasupport.com
7. http://www.dba-village.com
8. http://www.lazydba.com
9. http://www.orafaq.com
10. http://www.Oracletuning.com


Appendix D - Revision History


Authors
Jayagopal Theranikal, Performance Engineer - Has more than 10 years of Oracle database experience and more than 2 years of PeopleSoft application-tuning experience. Worked on SCM application tuning and benchmarks in the Performance & Benchmarks group.

Contributors:
Naveen Athulutu - PSC
Arvin Kan - PSC
Puspal Hore - PSC

Reviewers
The following people reviewed this Red Paper:
Jerry Zarate - PeopleTools
John Whitehead - Performance & Benchmarks
Vadali Subrahmanyeswar - Performance & Benchmarks
Vishnu Badikol - Performance & Benchmarks

Revision History
1. 07/17/02: Created document.
