Technical Best Practices
  Conventions used in this whitepaper
Introduction
  Background of Oracle Utilities Application Framework
Installation Best Practices
  Read the Installation Guide
  Ensure the prerequisites are installed
  Environment Practices
  Using multiple administrators
  Checking Java Installation
  Checking COBOL Installation
  Additional Oracle WebLogic Installation settings
  COBOL License Errors in Batch
  Location of Installation Logs
  XML Parser Errors in installation
  AppViewer cannot Co-Exist in Archive Mode
  Implementing Secure Protocols (https/t3s)
General Best Practices
  Limiting production Access
  Regular Collection and Reporting of Performance Metrics
  Respecting Record Ownership
  Backup of Logs
  Post Process Logs
  Check Logs For Errors
  Optimize Operating System Settings
  Optimize connection pools
  Read the available manuals
  Technical Documentation Set Whitepapers available
  Implementing Industry Processes
  Using Automated Test Tools
  Custom Environment Variables or JAR files
  Help and AppViewer can be used standalone
  Re-Register only when necessary
  Secure default userids
  Consider different userids for different modes of access
  Don't double audit
  Use Identical Machines
  Regularly restart machines
  Avoid using direct SQL to manipulate data
  Minimize Portal Zones not used
  Routine Tasks for Operations
  Typical Business Day
  Login Id versus Userid
  Hardware Architecture Best Practices
  Failover Best Practices
  Online and Batch tracing and Support Utilities
  General Troubleshooting Techniques
Data Management Best Practices
  Respecting Data Diversity
  Archiving
  Data Retention Guidelines
  Removal of Staging Records
  Partitioning
  Compression
  Database Clustering
  Backup and Recovery
  Writing Files Greater than 4GB
Client Computer Best Practices
  Make sure the machine meets at least the minimum specification
  Internet Explorer Caching Settings
  Clearing Internet Explorer Cache
  Optimal Network Card Settings
Network Best Practices
  Network bandwidth
  Ensure legitimate Network Traffic
  Regularly check network latency
Web Application Server Best Practices
  Make sure that the access.log is being created
  Examine Memory Footprint
  Optimize Garbage Collection
  Turn off Debug
  Load balancers
  Preload or Not?
  Native or Product provided utilities?
  Hardware or software proxy
  What is the number of Web Application instances do I need?
  Configuring the Client Thread Pool Size
  Defining external LDAP to the Web Application Server
  Synchronizing LDAP for security
  Appropriate use of AppViewer
  Fine Grained JVM Options
  Customizing the server context
  Clustering or Managed?
  Allocate port numbers appropriately
  Monitoring and Managing the Web Application Server using JMX
  Enabling autodeployment for Oracle WebLogic console
  Password Management solution for Oracle WebLogic
  Error configuring Oracle WebLogic credentials
  Corrupted SPLApp.war
  Web Application Server Logs
  IBM WebSphere Specific Advice
Business Application Server Best Practices
  Distributed or local installation
  Number of Child JVMs
  COBOL Memory management
  Cache Management
  Monitoring and Managing the Business Application Server using JMX
  Database Connection Management
  XPath Memory Management
Database Best Practices
  Regularly Calculate Database Statistics
  Ensure I/O is spread evenly across available devices
  Use the Correct NLS settings (Oracle)
  Monitoring database connections
  Consider changing Bit Map Tree parameter
  OraGenSec command line Parameters
  SetEnvId command line Parameters
  Building the Data Model
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.1 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V2.2 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.0 based products and above.
Advice or instructions marked with this icon apply to Oracle Utilities Application Framework V4.1 based products and above.
Introduction
Implementation of the product at any site introduces new practices into the IT group to maintain the health of the system and provide the service levels demanded by the business. While configuration of the product is important to the success of the implementation (and subsequent maintenance), adopting new practices can help ensure that the system operates within acceptable tolerances and supports the business goals. This whitepaper outlines some common practices that have been implemented at sites around the globe and have proven beneficial to those sites. They are documented here so that other sites may consider adopting similar practices and potentially deriving benefit from them as well. The recommendations in this document are based upon experiences from various sites and internal studies.
UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages, and for ensuring the pages are in the appropriate format for the locale.
Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details.
Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products.
Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration.
The figure below summarizes some of the facilities that the Oracle Utilities Application Framework provides:
[Figure: facilities provided by the Oracle Utilities Application Framework, by layer:
Meta Data - Layout, Personalization, Scripting, Roles, Rules, Language, Localization
UI Management - Zones, Portal, Language, Locale, BPA Scripting, UI Maps
Integration - XAI, Web Services, Staging
Tools - Scheduler, Dictionary, Conversion, To Do, Security, Auditing, Algorithm, Scripting
Technology - Multi-DB, XML, Services, J2EE, AJAX, SOA
Data - Business Services, Business Objects, Maintenance Objects, DB Structure]
There are a number of products from the Tax and Utilities Global Business Unit, as well as from the Financial Services Global Business Unit, that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first, with the product itself then installed onto the framework to complete the installation process. The Oracle Utilities Application Framework provides a number of key benefits to these products:
Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities, so that products can concentrate on the unique aspects of their markets rather than making technical decisions.
Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products.
Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products.
Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.
Multi-lingual and multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximum flexibility.
Cross product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement.
Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product specific technologies or facilities to satisfy particular markets. The framework minimizes the need for such additions and assists in the quick integration of a new product specific piece of technology (if necessary).
Note: For customers who are upgrading, the installation of the product and its related third party software is designed so that more than one version of the product can co-exist.
Environment Practices
Note: There is a more detailed discussion of effective environment management in the Environment Management document of the Software Configuration Management series of whitepapers. Refer to that document for further advice.
When installing the product at a site, each copy of the product is regarded as an environment used to perform a particular task or group of tasks. Without planning, this can lead to a larger than anticipated number of environments, which has a negative flow-on effect: increased maintenance effort and increased resource usage (hardware and people), which may in turn cause delays in implementations. To minimize the impact of environments on their implementations, customers have used the following advice:
At the start of the implementation, decide the number of environments to use. Keep this to a minimum and consider sharing environments between tasks. A related technique is to specify an end date for each environment - the date the environment can be removed from the implementation. This can force a rethink of the number of environments used at an implementation and may encourage sharing.
For each environment, consider the impact on the hardware and maintenance effort, including the following:
The time and resources it takes to install the environment.
The time and resources it takes to keep the environment up to date, including application of single fixes, rollups/service packs and upgrades. Do not forget application and management of customization builds.
The time and resources to maintain the ConfigLab and Archiving facilities for multiple environments, if used at an implementation. This includes the setup and the regular migrations that will be performed. Note: ConfigLab and Archiving only apply to certain Oracle Utilities Application Framework products.
The time and resources it takes to backup and restore environments on a regular basis. In some implementations, having different backup schemes for environments based upon tasks and update frequency (i.e. more frequently updated = more frequent backups) may provide some savings.
The time and resources to manage the disk space for each environment, including regular cleanups.
Environments may be set up so that the database can be reduced to a single database instance, with each environment having a different schema/owner. This will reduce the memory footprint of the DBMS on the machine but may reduce availability: if the database instance is shut down, all environments are affected. For non-production, most Oracle sites create a database instance for each environment, and most DB2/UDB sites create one database subsystem for each environment.
#> $JAVA_HOME/bin/java -version
java version "1.6.0_18"
Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)
AIX:
#> $JAVA_HOME/bin/java -version
java version "1.6.0"
Java(TM) SE Runtime Environment (build pap6460sr7ifix20100220_01(SR7+IZ70326))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr7-20100219_54049 (JIT enabled, AOT enabled)
J9VM - 20100219_054049
JIT - r9_20091123_13891
GC - 20100216_AA)
JCL - 20091202_01
Windows:
C:\> %JAVA_HOME%\bin\java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)
HP-UX:
#> $JAVA_HOME/bin/java -version
java version "1.6.0.10"
Java(TM) SE Runtime Environment (build 1.6.0.10jinteg_11_mar_2011_09_19-b00)
Java HotSpot(TM) Server VM (build 19.1-b02-jinteg:2011mar1107:33, mixed mode)
Note: Verify the Java version number and operating mode (32/64 bit) against the Quick Installation Guide provided with the product.
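The version and bitness check can be scripted for use across machines. A minimal sketch follows; for self-containment it parses the sample Linux banner shown above (in practice, capture the banner from the real JVM instead), and the grep patterns are assumptions about the JVM's banner format:

```shell
#!/bin/sh
# Sample banner taken from the output above; replace with:
#   banner=$("$JAVA_HOME/bin/java" -version 2>&1)
# Note that `java -version` writes its banner to stderr, hence the 2>&1.
banner='java version "1.6.0_18"
Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)'

# Extract the quoted version string (e.g. 1.6.0_18) from the first line
version=$(printf '%s\n' "$banner" | sed -n 's/.*version "\([^"]*\)".*/\1/p')

# 64-bit HotSpot JVMs advertise "64-Bit" in the VM line; 32-bit ones do not
case "$banner" in
  *64-Bit*) mode=64 ;;
  *)        mode=32 ;;
esac

echo "Java $version running in ${mode}-bit mode"
```

The detected values can then be compared against the versions listed in the Quick Installation Guide.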
cobsje -J $JAVA_HOME
Note: This command should be executed AFTER executing the splenviron[.sh] utility, which initializes the environment variables used by the utilities and places the COBOL runtime in the PATH. If the license is NOT installed, the response will be similar to the text below:
Error - No license key detected. Application Server requires a license key in order to execute. Please refer to your application supplier.
This message indicates that there is an issue with the license key on the server. If this message appears, it is recommended that the COBOL runtime be re-installed and the license key re-initialized using apptrack, as per the Installation Guide for the product. If the license key is installed correctly, the cobsje utility will return a message similar to the following:
#> cobsje -J $JAVA_HOME
Java version = 1.6.0_20
Java vendor = Sun Microsystems Inc.
Java OS name = SunOS
Java OS arch = sparcv9
Java OS version = 5.10
Additionally, the 64 bit version of COBOL is required on 64 bit platforms, as indicated in the Installation Guide for the product. To verify that the COBOL runtime is 64 bit, the following command can be used:

cob -V

This should return output similar to the following:
cobjrun64: com.splwg.base.api.batch.ThreadPoolWorker.main ended due to an exception
Exception in thread "main" com.splwg.shared.common.LoggedException: The following stacked messages were reported as the LoggedException was rethrown:
com.splwg.base.support.context.ContextFactory.createDefaultContext(ContextFactory.java:569): error initializing test context
To resolve this issue refer to the instructions in the Quick Installation Guide about installing the COBOL license.
install_<product>_<environment>.log

Where:
<product> - Product code of the product component you are installing. For example:
FW = Oracle Utilities Application Framework
installed already on the machine (as it contains the Oracle Client in the installation). The value is stored in the ENVIRON.INI file as the value of the ORACLE_CLIENT_HOME parameter. Note: For Windows Server environments, the 32 bit Oracle Client MUST be installed for use with the installation utilities, even if the 64 bit Oracle Database software is installed on the same machine. If the Oracle Client or ORACLE_HOME is invalid, the following error will be returned by the installation utilities (and other installs):
Can't locate XML/Parser.pm in @INC (@INC contains:
BEGIN failed--compilation aborted at data/bin/perllib/SPL/splXMLParser.pm line 3.
Compilation failed in require at data/bin/perllib/SPL/splExternal.pm line 10.
BEGIN failed--compilation aborted at data/bin/perllib/SPL/splExternal.pm line 10.
Compilation failed in require at install.plx line 25.
BEGIN failed--compilation aborted at install.plx line 25.
Error: install.plx didn't finish successfully. Exiting.
Ensure that the ORACLE_CLIENT_HOME includes the perl subdirectory to rectify this issue.
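This precondition can be checked before running the installer. A hedged sketch follows; the perl/lib layout shown is an assumption based on typical Oracle Client installs (adjust for your client version), and the demo scaffolding under /tmp exists only so the sketch is self-contained:

```shell
#!/bin/sh
# Sanity-check that ORACLE_CLIENT_HOME carries the Perl runtime the
# install utilities need. The perl/lib path below is an assumption;
# point ORACLE_CLIENT_HOME at the real client install in practice.
ORACLE_CLIENT_HOME="${ORACLE_CLIENT_HOME:-/tmp/demo_oracle_client}"

# Demo scaffolding only, so this sketch runs standalone.
mkdir -p "$ORACLE_CLIENT_HOME/perl/lib/XML"
touch "$ORACLE_CLIENT_HOME/perl/lib/XML/Parser.pm"

if [ -f "$ORACLE_CLIENT_HOME/perl/lib/XML/Parser.pm" ]; then
  result="XML::Parser found under $ORACLE_CLIENT_HOME/perl"
else
  result="XML::Parser missing - installer would fail with Can't locate XML/Parser.pm"
fi
echo "$result"
```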
Generally customers do not implement the AppViewer in production. Expanded mode is only available for Oracle WebLogic and Oracle Utilities Application Framework V4.0 and above.
WEB_ISEXPANDED VALUE | COMMENTS
true |
false |
The t3 protocol is only used for sites that have separated the Web Application and Business Application tiers using the Oracle WebLogic platform on selected versions of the Oracle Utilities Application Framework. The iiop protocol is used for the same scenario but for IBM WebSphere platforms.
Configure the J2EE Web Application Server SSL support to use the certificate, as outlined in the documentation sites below:
TABLE 2 J2EE SSL CONFIGURATION
WEB APPLICATION SERVER | REFERENCE
Oracle WebLogic 10.0 | http://download.oracle.com/docs/cd/E13222_01/wls/docs100/secmanage/ssl.html
Oracle WebLogic 10.3 | http://download.oracle.com/docs/cd/E12840_01/wls/docs103/secmanage/ssl.html
Enable the HTTPS port on your environment using the console provided with your J2EE Web Application Server. Remember to reference the certificate you processed in the previous step.
Note: For customers using Oracle WebLogic on Oracle Utilities Application Framework V4.1 and above, setting the WebLogic SSL Port Number will enable this facility without the need for the console.
Note: If changes are made in the console, then to retain the change across upgrades and service packs it is recommended to use custom templates or user exits. Refer to the Server Administration or Configuration and Operations Guide for more details on implementing custom templates. For Oracle WebLogic customers, the config.xml templates may require changes.
Examine the $SPLEBASE/etc/conf directory (or %SPLEBASE%\etc\conf on Windows), unless otherwise indicated, for configuration files that use the protocol:
TABLE 3 SSL CONFIGURATION FILES
CONFIGURATION FILE | CHANGES
spl.properties | Change references to the t3 protocol to t3s, if present. Change references to the http protocol to https, with the SSL port replacing the HTTP port.
 | Change references to the http protocol to https, with the SSL port replacing the HTTP port.
 | Change references to the http protocol to https, with the SSL port replacing the HTTP port.
 | Change references to the http protocol to https, with the SSL port replacing the HTTP port. This file is located under $SPLEBASE/splapp/businessapp/config/META-INF (or %SPLEBASE%\splapp\businessapp\config\META-INF on Windows).
Note: If these files are changed, they may revert to the product template versions across service packs and upgrades. To retain changes across service packs and upgrades, it is advised to use custom templates and/or user exits. Refer to the Server Administration or Configuration and Operations Guide for more details.
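Locating (and, if desired, rewriting) the remaining cleartext protocol references in those configuration files can be done with standard tools. A sketch follows; the demo file and property names are illustrative, not the product's real keys, and the port still has to be changed to the SSL port by hand:

```shell
#!/bin/sh
# Demo config file standing in for a file under $SPLEBASE/etc/conf;
# the property names below are hypothetical.
conf=$(mktemp)
cat > "$conf" <<'EOF'
example.serverUrl=http://myhost:6500/spl
example.jndiUrl=t3://myhost:6500
EOF

# Report any cleartext protocol references left in the file
grep -nE 'http://|t3://' "$conf"

# Rewrite http -> https and t3 -> t3s. The port number must still be
# updated to the SSL port manually (sed cannot know your allocations).
sed -i.bak -e 's|http://|https://|g' -e 's|t3://|t3s://|g' "$conf"
cat "$conf"
```

Run the grep across all the files in Table 3 to make sure no reference has been missed before restarting the server.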
For Oracle WebLogic customers, refer to the section Configuring Identity and Trust for the additional steps.
Shutdown the J2EE Web Application Server to prepare to reflect the changes.
Run the initialSetup[.sh] -w command to reflect the changes into the server files.
Restart the J2EE Web Application Server.
Ensure that any Feature Configuration options that use the HTTP protocol as part of their options are also converted to HTTPS and the appropriate port number. Use the Admin, Feature Configuration menu option in the product browser to check each of them. The Features will vary from product to product and version to version.
Ensure that any XAI JNDI Server provider URLs that use the http/t3 protocol as part of their options are also converted to https/t3s and the appropriate port number. Use the Admin, XAI JNDI Server menu option in the product browser to maintain the JNDI server.
Any customization that refers to the HTTP protocol, such as custom algorithms or service scripts, must also be converted from HTTP to HTTPS.
For customers using the Multi-Purpose Listener (MPL), to use the secure protocol, alter the $JAVA_HOME/jre/lib/security/java.security file (or %JAVA_HOME%\jre\lib\security\java.security on Windows) to enable SSL support, and modify the WLPORT entry in $SPLEBASE/splapp/mpl/MPLParamaterInfo.xml (or %SPLEBASE%\splapp\mpl\MPLParamaterInfo.xml on Windows) to use the SSL Port.
TABLE 4 JAVA SSL CONFIGURATION
VENDOR | CHANGES
Oracle WebLogic | Refer to http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html
IBM WebSphere | ssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl
 | ssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl
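The two WebSphere properties above can be applied to java.security with a small idempotent script. A sketch, using a temp file in place of the real $JAVA_HOME/jre/lib/security/java.security so it runs standalone:

```shell
#!/bin/sh
# Append the IBM JSSE provider settings from Table 4 to a java.security
# file, skipping any setting already present (safe to re-run).
sec=$(mktemp)                       # stands in for java.security
printf '# existing settings...\n' > "$sec"

for line in \
  'ssl.SocketFactory.provider=com.ibm.jsse2.SSLSocketFactoryImpl' \
  'ssl.ServerSocketFactory.provider=com.ibm.jsse2.SSLServerSocketFactoryImpl'
do
  # -qF: quiet, fixed-string match; only append when not already there
  grep -qF "$line" "$sec" || printf '%s\n' "$line" >> "$sec"
done

grep -c '^ssl\.' "$sec"
```

Because the loop checks before appending, running the script again after an upgrade will not duplicate the entries.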
have allowed access to production from inappropriate sources, which has had an adverse impact on performance. For example, it is not appropriate to allow people access to the production database through ad-hoc query tools (such as DB2 Control Center, SQL Developer, SQL*Plus etc.). The freestyle nature of these tools can allow a single user to wreak havoc on performance with a single inefficient SQL statement; the database is not optimized for such unexpected traffic. Removing this potentially inefficient access can typically improve performance.
Customer Modification - If the record is owned by Customer Modification then the implementation has added the record. The implementation can change and delete the record (if it is allowed by the business rules).
You can only delete records that are owned by Customer Modification. It is possible to alter or delete other records at the database level, if permitted by database permissions, but doing so will produce unexpected results, so respect the ownership of the records.
Backup of Logs
By default, the product removes existing log files from $SPLSYSTEMLOGS (or %SPLSYSTEMLOGS% on Windows platforms) upon restart. This is the default behavior of the product but may not be desirable for effective analysis, as the logs disappear. To override this behavior, the following needs to be done:
Create a directory to house the log files. Most sites create a common directory for all environments on a machine. The size allocation of that directory will depend on how long you wish to retain the log files. It is generally recommended that logs be retained for post analysis and then archived (according to site standards) after processing to keep this directory manageable. Typically, customers create a subdirectory under <SPLAPP> to hold the files.
Set the SPLBCKLOGDIR environment variable in the .profile (for all environments) or $SPLEBASE/scripts/cmenv.sh (for individual environments) to the location you specified in the first step. For Windows platforms, the variable can be set in your Windows profile or using %SPLEBASE%\scripts\cmenv.cmd.
Logs will be backed up at the location specified in the format
<datetime>.<environment>.<filename> where <datetime> is the date and time of the restart, <environment> is the id of the environment (taken from the SPLENVIRON environment variable) and <filename> is the original filename of the log.
Once the logs have been saved, you must use log retention principles to manage the logs under SPLBCKLOGDIR to meet your site's standards. Most sites archive the logs to tape or simply compress them after post processing the log files (see Post Process Logs for more details on post processing).
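The naming scheme above can be reproduced for ad-hoc archival and retention. A sketch follows; the datetime stamp format and the seven-day compression window are illustrative choices, not the product's documented behavior:

```shell
#!/bin/sh
# Back up a log file using the <datetime>.<environment>.<filename>
# scheme described above. SPLENVIRON and the stamp format are
# illustrative defaults for this sketch.
SPLENVIRON="${SPLENVIRON:-DEMO}"
SPLBCKLOGDIR="${SPLBCKLOGDIR:-$(mktemp -d)}"

# Demo log file standing in for a real $SPLSYSTEMLOGS log
logdir=$(mktemp -d)
logfile="$logdir/spl_web.log"
echo "sample log line" > "$logfile"

stamp=$(date +%Y%m%d%H%M%S)
backup="$SPLBCKLOGDIR/$stamp.$SPLENVIRON.$(basename "$logfile")"
cp "$logfile" "$backup"

# Simple retention: compress backups older than 7 days
find "$SPLBCKLOGDIR" -name "*.log" -mtime +7 -exec gzip {} \;
echo "backed up to $backup"
```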
Details of the logs written by the product are documented in the Performance Troubleshooting Guide. Use this guide to determine what data to extract from the logs for post processing.
Viewing the logs and checking them for errors on a regular basis can detect trends and common problems early, and quickly reduce the number of errors that occur. The Performance Troubleshooting Guide outlines the logs and the error conditions contained within those logs.
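A regular scan can be as simple as counting error markers per log. A sketch follows; the marker strings and log format below are assumptions for illustration (the Performance Troubleshooting Guide documents the real error conditions for each log):

```shell
#!/bin/sh
# Scan a log for common severity markers and summarize counts.
# Demo log content stands in for a real product log file.
log=$(mktemp)
cat > "$log" <<'EOF'
2011-03-11 09:00:01 INFO  startup complete
2011-03-11 09:05:12 ERROR connection refused
2011-03-11 09:05:13 WARN  retrying connection
2011-03-11 09:05:14 ERROR connection refused
EOF

errors=$(grep -c 'ERROR' "$log")
warnings=$(grep -c 'WARN' "$log")
echo "errors=$errors warnings=$warnings"
```

Feeding the counts into the regularly collected performance metrics makes trends visible over time rather than per incident.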
The value of an individual operating system setting is the maximum value required by any product on that machine. For example, if Oracle or DB2 is installed on a machine, the values for those products are typically used; settings determined in this way are usually sufficient for the other products on that machine. If the machine is dedicated to a particular product or tier, then refer to the documentation in the installation guide and the particular vendor's site for further advice on setting up the operating system in an optimal state.
The product has up to three connection pools to configure:
Client connections and Business Server connections - These are the number of active connections supported on the Web Application Server from the client machines. Remember that in an OLTP product (such as this product) the number of connections allocated is always less than the number of users on the system; it needs to be sufficient to cater for the number of actively running transactions at any given point in time. In Oracle Utilities Application Framework V2.2 and above, it is possible to separate the Web Application Server and Business Application Server. If this configuration is used, it is recommended that the Business Application Server connection pools be set to the same values as the Web Application Server connection pools. Refer to Configuring the Client Thread Pool Size for more information about pool sizing.
Note: The Client connections and Business Server connections are managed within the J2EE Web Application Server software.
Database connections - These are the number of pooled connections to the database. The Framework holds these connections open so that the overhead of opening and closing connections is minimized. For Version 2.x of the product, the number of connections allocated is dictated in each individual web application's hibernate.properties file using the c3p0 connection pool (in Oracle Utilities Application Framework V4.0 the connection pooling is handled by Universal Connection Pool (UCP)).
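For V2.x, the database pool is therefore tuned per application in hibernate.properties. An illustration of the c3p0 keys involved follows; the values shown are placeholders, not recommendations, and the supported settings for your product are given in the Server Administration Guide:

```properties
# c3p0 pool settings in hibernate.properties (placeholder values)
# Minimum connections held open even when idle
hibernate.c3p0.min_size=5
# Upper bound on pooled connections
hibernate.c3p0.max_size=20
# Seconds before an idle connection is retired
hibernate.c3p0.timeout=300
# Size of the prepared statement cache
hibernate.c3p0.max_statements=50
```

Since each web application carries its own copy of this file in V2.x, remember to apply pool changes to every application that connects to the database.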
The figure below illustrates the connection pools available for each version of the Oracle Utilities Application Framework:
[Figure: connection pools available for each version of the Oracle Utilities Application Framework]
Refer to the Server Administration Guide (also known as the Operations and Configuration Guide) provided with your product for advice on the configuration and monitoring of the connection pools.
Development Overview - An introduction to the development process and internals of the product.
Packaging Utilities - Documentation of the tools provided to package custom builds.
Key Generation - Overview of the routines and tables used for generation of random keys in the product.
Application Workbench Overview - An overview of the Application Workbench component of the SDK.
User Guide - A developer's cookbook and user's guide to the SDK.
Utilities Documentation - These are detailed guides to the various tools supplied with the product, including:
Background Processing - Details of all the background processes available with the product.
Reports - Details of the reporting interface available with the product, including installation of the algorithm and configuration of the reporting interface.
CTI/IVR Integration - An overview of the installation, capabilities and configuration of the CTI/IVR integration components delivered with the product.
Framework/System Wide Standards - An overview of the various UI standards employed by the product.
Application Security - An overview of the authorization security model used in the product, including guidelines for configuration.
User Interface Tools - An overview of the meta data tools available for the user interface, including menus, navigation keys etc.
Zone Configuration - An overview of how to configure the zones and portals supplied with the product.
Database Tools - An overview of the meta data tools available for maintenance object, table and field definition, including auditing.
Algorithms - An overview of all the algorithms supplied with the product.
Scripting - Details of the Business Process Scripting engine supplied with the product, including configuration.
Application Viewer - Overview of the maintenance and operation of the Data Dictionary and code view supplied with the product.
XAI - Detailed overview and configuration of the Web Services/XML Application Integration component of the product.
LDAP Import Detailed overview of the LDAP import function supported by the product to synchronize LDAP information with the authorization information stored in the product.
Batch Operations and Configuration Guide - Details of the configuration settings and common operations for the batch component of the product.
DOC ID - DOCUMENT TITLE - CONTENTS

559880.1 - A whitepaper outlining how to design, setup and monitor a ConfigLab solution for an implementation. This is a companion document to the Software Configuration Management Series.

560382.1 - A series of whitepapers outlining the tracking points available in the architecture for performance and a troubleshooting guide based upon common problems.

560401.1 - This series of documents outlines a set of generic processes (that can be used as part of the site processes) for managing code and data changes. The series includes documents that cover concepts, change management, defect management, release management, version management, distribution of code and data, management of environments and auditing configuration.

773473.1 - A whitepaper outlining the security facilities in the Oracle Utilities Application Framework.

774783.1 - A whitepaper outlining the common process for integrating an external LDAP based security repository with the framework.

789060.1 - A whitepaper outlining all the various common integration techniques used with the product (with case studies).

799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products - A whitepaper outlining a generic process for integrating an SSO product with the Oracle Utilities Application Framework.

807068.1 - A whitepaper outlining the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.

836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products - A whitepaper outlining the common and best practices implemented by sites all over the world relating to batch.

856854.1 - Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products, containing only V1.x specific advice.

942074.1 - XAI Best Practices - This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

970785.1 - Oracle Identity Manager Integration Overview - This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information.

1068958.1 - Production Environment Configuration Guidelines - This whitepaper outlines common production level settings for Oracle Utilities Application Framework products.

1177265.1 - What's New in Oracle Utilities Application Framework V4? - This whitepaper outlines the changes since the V2.2 release of Oracle Utilities Application Framework.

1290700.1 - Database Vault Integration - This whitepaper outlines the Database Vault integration available with Oracle Utilities Application Framework V4.1 and above.

1299732.1 - BI Publisher Integration Guidelines - This whitepaper outlines some guidelines for integration available with Oracle BI Publisher for reporting.

1308161.1 - Oracle SOA Suite Integration - This whitepaper outlines the integration between Oracle SOA Suite and the Oracle Utilities Application Framework.

1308165.1 - MPL Best Practices - Addendum to the XAI Best Practices focusing on the Multipurpose Listener.

1308181.1 - Oracle WebLogic JMS Integration - This whitepaper outlines the integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework for Oracle Utilities Application Framework V4.1 and above. These features are also available for Oracle Utilities Application Framework V2.2 via patches.
This documentation is updated regularly with each release of the product with new and improved information and advice. Announcements of updates to whitepapers can be tracked via http://blogs.oracle.com/theshortenspot or http://www.twitter.com/theshortenspot.
IT Infrastructure Library (ITIL) is a set of consistent and comprehensive documentation of best practice for IT Service Management. Used by many hundreds of organizations around the world, a whole ITIL philosophy has grown up around the guidance contained within the ITIL books and the supporting professional qualification scheme. ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT. It was developed in recognition of organizations' growing dependency on IT to satisfy their corporate aims and meet their business needs, a dependency that leads to an increased requirement for high quality IT services. ITIL provides the foundation for quality IT Service Management. The widespread adoption of the ITIL guidance has encouraged organizations worldwide, both commercial and non-proprietary, to develop supporting products as part of a shared "ITIL Philosophy". For more information about ITIL refer to http://www.oracle.com/itil
The following products have been used with the product at customer sites:
- Oracle Application Test Suite (http://www.oracle.com/enterprise_manager/application-quality-solutions.html)
- Borland Silk Performer (http://www.borland.com/us/products/silk/silkperformer/index.html)
- Mercury Load Runner (http://www.mercury.com/us/products/performance-center/loadrunner/)
export CLASSPATH=/axis/lib/axis.jar:$CLASSPATH

or, on Windows:

set CLASSPATH=c:\axis\lib\axis.jar;%CLASSPATH%

Note that UNIX uses ':' as the CLASSPATH separator and Windows uses ';'. When the splenviron.sh script (or splenviron.cmd on Windows) runs, it looks in the scripts directory for a cmenv.sh script (or cmenv.cmd on Windows) and, if present, executes it.
In addition, it is possible to do this WITHOUT adding the cmenv.sh script (or cmenv.cmd on Windows). Set the CMENV environment variable to the location of a script containing the above commands BEFORE running any command or the splenviron.sh script (or splenviron.cmd on Windows). The CMENV facility is for global changes, as it applies across all environments, while the cmenv.sh/cmenv.cmd solution is per environment. You can use both, as CMENV is run first, followed by cmenv.sh/cmenv.cmd. Note: It is possible, using this technique, to manipulate any environment variable used by the product, but this is not recommended.
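As a minimal sketch, a cmenv.sh script might look like the following (the Axis install location and variable names are assumptions for illustration, not product requirements):

```shell
# Hypothetical cmenv.sh -- executed by splenviron.sh if present in the
# scripts directory. The Axis location below is an assumption; adjust it
# for your site, and use ';' as the separator on Windows (cmenv.cmd).
AXIS_HOME=/opt/axis
CLASSPATH=$AXIS_HOME/lib/axis.jar:$CLASSPATH
export AXIS_HOME CLASSPATH
```

Because splenviron.sh executes this script for every session, the change takes effect for all product utilities started afterwards without editing any product-delivered scripts.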
Note: The AppViewer and Help applications are only supported in the browsers supported by the product.
SPLADM - Owner of the database. Database Administrators are the only valid users of this account. This account is created during the database creation process.

SPLREAD - This account is used by Archiving, ConfigLab and Reporting. Only available on Oracle database installations. This account is created during the database creation process.

splsys

SPLUSER - This account is used for all application database access. This account is created during the database creation process.

SYSUSER - This user needs to be available to add other users. This needs to be defined to the Web Application Server on install. The password will reside in the repository defined in the Web Application Server (usually LDAP).

system

WEB - Web Self Service Default User.

XAI - Default XAI userid (some versions).
Note: There are other userids supplied by or used with the product; refer to the documentation for those products for details of these users.
- Create a different userid for XAI transactions. This allows tracking of XAI within the architecture. It is also possible to assign each transaction in XAI a different userid, as it is passed as part of the transaction, but most customers consider this overkill.
- Create a different userid for each background interface. This allows security and traceability to be tracked at a lower level.
- Create a generic userid for mainstream background processes. This allows tracking of online versus batch initiation of processes (especially To Do, Case and Customer Contact processing).
Note: Remember that any product user must be defined to the product as well as the authentication repository.
Most hardware vendors have recommendations on optimal time intervals to restart machines. Some vendors even "force" the issue for maintenance reasons. Check with your vendor for specifics for your platform.
Using incorrect SQL may violate any of the validations and even make the system unusable. If you have to manipulate data within the product, use one or more of the following provided methods:
- The browser user interface.
- XML Application Integration.
- Conversion Toolkit.
- Software Development Kit.
In the Oracle Utilities Application Framework, portals were introduced to allow sites to decide which zones should appear, and in what sequence, for different user groups. For performance reasons, it is recommended that you configure portal preferences to collapse zones that are not needed every time a portal is displayed. The system does not perform the processing necessary to build a collapsed zone until a user expands it, so configuring zones as initially collapsed improves response times. This is especially relevant for the To Do zones, which may take a while to build if the number of To Do records is excessive.
Perform Backups
Perform the backup of the database and file system using the site procedures and the tools designated for your site.
Check the log files for any error conditions that may need to be addressed. Refer to Post Process Logs and Check Logs For Errors for more details.
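Much of this check can be automated. The sketch below (the directory layout and the match patterns are assumptions; adapt them to your site's log locations and error markers) counts distinct error lines across the product logs so that recurring errors stand out:

```shell
# Sweep all product logs in a directory for error indicators and report
# distinct matching lines, most frequent first. The ERROR/FATAL/Exception
# patterns are an assumption; extend them with site-specific markers.
scan_logs() {
  grep -h -E 'ERROR|FATAL|Exception' "$1"/*.log 2>/dev/null | sort | uniq -c | sort -rn
}
```

For example, running `scan_logs /spl/PROD/logs` (path illustrative) after the nightly batch highlights which errors recur and deserve investigation first.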
Collate and process the day's performance data to assess it against any Service Level targets. Identify any badly performing transactions.
Execute the batch schedule agreed for your site. This will include overnight, daily, hourly and ad hoc background processes.
Rebuild Statistics
DB2 and Oracle require the database statistics for the product schemas to be rebuilt on a regular basis so that SQL access paths remain optimized. At DB2 sites, a rebind is also required to reflect the changes in the execution plans/packages.
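As an illustration for Oracle sites, a scheduled statistics job often amounts to little more than the following (the SPLADM schema owner is an assumption; follow your DBA standards and the product's DBA guide for the real schema name and credentials handling):

```shell
# Build the statistics-gathering call for the product schema owner.
# SPLADM is an assumed schema name; DBMS_STATS is the standard Oracle
# package for gathering optimizer statistics.
SCHEMA=SPLADM
STATS_SQL="EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '${SCHEMA}', cascade => TRUE);"
# A site scheduler would typically pipe this into sqlplus, e.g.:
#   echo "$STATS_SQL" | sqlplus -s "$DB_CONNECT_STRING"
echo "$STATS_SQL"
```

The `cascade => TRUE` option gathers index statistics along with table statistics in the same pass, which suits a single nightly job.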
File Cleanup
On a regular basis, the output files from the background processes and logs will need to be archived and removed to minimize disk space usage.
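A minimal sketch of such a cleanup (the directory, file pattern and 30 day retention are assumptions; archive files before deleting them per your site's backup procedures):

```shell
# List batch output/log files older than a retention period so they can
# be archived and then removed. The '*.log' pattern and the retention
# period passed in are assumptions to adapt for your site.
list_expired() {
  dir=$1
  days=$2
  find "$dir" -type f -name '*.log' -mtime +"$days"
}
```

For example, `list_expired /spl/PROD/batch/logs 30 | xargs gzip` (paths illustrative) would compress the candidates, with a later pass moving the compressed files to archive storage and deleting them.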
The Oracle Utilities Application Framework features an inbuilt archiving facility that can transfer transaction data no longer required for online processing to another environment or a file, or simply delete it. Refer to Archiving for more details.
There are a number of background processes that remove staging records that have been already successfully processed. Refer to "Removal of Staging Records" for more details.
Note: The tasks listed above do not constitute a comprehensive list of what needs to be performed. During the implementation you will decide what additionally needs to be done for your site.
[Figure: Typical business day showing Peak and Off Peak periods (hour markers 8, 12, 16 and 20)]
Note: The above diagram is for illustrative purposes only and could vary for your site. Typically a business day contains the following elements:
- There is a peak online period where the majority of call center business is performed. Typically this falls within business hours, varying according to local custom.
- There is a call center off peak period where the volume of call center traffic is greatly reduced compared to the peak period. Typically in call centers which operate 24x7, this represents overnight and weekends. At this time the call center is reduced in size (usually a skeleton shift). Some sites do not operate in non-peak periods and rely on automated technology (e.g. IVR) to process transactions such as payments.
- Backups are performed either at the start or at the end of the peak period. The decision is based upon the risk of failure of the background processing and its impact on online processing. The product specific background processes can be run anytime, but avoiding them during peak time will maximize the computing resources available for the successful processing of call center transactions. A backup at the end of the peak period is the most common pattern amongst product customers.
- Background processes are run at both peak and off peak times. The majority of the background processing is performed at off-peak times to maximize the computing resources available for its successful completion. The background processing that is run at off peak times is usually to check ongoing call center transactions for adherence to business rules and to process interface transactions ready for overnight processing.
- Monitoring is performed throughout both peak and off peak times. The monitoring regime may use manual as well as automated tools and utilities to monitor compliance against agreed service levels. Any non-compliance is tracked and resolved.
The definition of the business day for your site is crucial to schedule background processing and to set monitoring regimes appropriate for the traffic levels expected.
In Oracle Utilities Application Framework V4 and above the concept of a Login Id is supported. This attribute is used by the framework to authenticate the user. For backward compatibility, the 8 character userid field is still used internally for auditing purposes. Therefore both the Userid and Login Id should be populated; they can be the same or different values. The Login Id can be set manually, via Oracle Identity Manager, or auto generated in a class extension.
Figure 6 Login Id
If the site is cost sensitive and/or the availability requirements allow it, then having all the architecture on a single machine is appropriate. This is known as the single server architecture. This configuration is popular with some sites as:
- The cost of the hardware can be minimal (or at least very cost effective).
- Maintenance costs can be minimized with the minimal hardware.
- Virtualization software (typically part of the operating system or third party virtualization software) can be used to partition the machine into virtual machines.
The one issue that makes this solution less than ideal is the risk of unavailability due to hardware failure. Customers that choose this solution typically address this shortcoming by buying a second
machine of similar size and using it for failover and disaster recovery as well as non-production. In essence, if the primary hardware fails, the backup machine assumes responsibility for production until the hardware fault is resolved. In this case, additional effort is required to keep the secondary machine in synchronization with the primary. The diagram below illustrates the single server architecture:
[Figure: Single Server Architecture (Browser Client, single server)]
One of the variations on the single server architecture is the "simple multi-tier architecture". In this hardware architecture, the database server and Web Application Server/Business Application Server are separated on different machines. For product V1.x customers, you can also separate the Web Application and Business Application Servers. This is chosen by customers who want to optimize the hardware for the particular tier (settings and size of machine) and therefore separate the maintenance efforts for each server. For example, Database Administrators need only access the Database Server to perform their duties and set the operating system parameters optimized for the database. Unfortunately the solution can have a higher cost than the single server solution and still does not address the unavailability of any machine in the architecture. Customers that have used this model adopt a similar solution to the single server architecture (duplicate secondary machines at a secondary site) but also have the option of having both machines in the architecture being the same size and shifting the roles when availability is compromised. For example, if the database server fails, the Web Application Server can be configured to act as a combination of the Database Server and Web Application Server. The figure below illustrates the Simple Multi-Tier Architecture:
[Figure: Simple Multi-Tier Architecture (Browser Client, application server, Database Server)]
Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.
Multiple Web Application Servers
To support higher availability for the product, some sites consider having multiple Web Application Servers. This allows online users to be spread across machines and, in the case of a failure, be diverted to the machine that is available. To achieve this, the site must use a load balancer (see the "Load balancers" discussion later in this document). At the time of failover, the load balancer will redirect traffic to the available server. This is made possible as the product is stateless. The Web Application Servers are either clustered or managed; refer to the discussion in the Clustering or Managed? section of this document for advice. This architecture is quite common as it offers flexibility: one of the Web Application Servers can be dedicated to batch processing in non-business hours, making the architecture more cost effective. Typically, the Web Application Server software is shut down to allow batch processing to use the full resources of the machine while allowing users (usually a small subset) to process online transactions. The only drawbacks with this solution are a potentially higher cost than a multi-tier solution and the potential impact of database unavailability. Customers that use this architecture overcome the potential unavailability of the database by either using a secondary site to act as the failover or using one of the Web Application Servers in a failover database server role. The latter is less common, as most customers find it more complex to configure, but is possible with this architecture. The figure below illustrates the Multi-Tier Architecture:
[Figure: Multi-Tier Architecture (Browser Client, Load Balancer, Web Application Servers, Database Server)]
Machines in this architecture can be the same size or different sizes depending on the cost/benefits of the various variations. Typically customers use a smaller machine for the Web Application Server as compared with the database server.
High Availability Architecture
The most hardware intense solution is where all the tiers in the architecture have multiple machines for high availability and distribution of traffic. The solution can vary (number of machines etc) but has the following common attributes:
- There is no single point of failure. There is redundancy at all levels of the architecture. This excludes redundancy in the network itself, though this is typically out of scope for most implementations.
- The number of servers will depend on segmentation of the traffic between call centers, non-call centers, interfaces and batch processing. It is possible to reuse existing servers or set up dedicated servers for different types of traffic.
- Availability can be managed with either hardware based solutions, software based solutions or a valid combination of both.
- The number of users will dictate the number of machines to some extent. Experience has shown that a large number of users tend to be better served, from a performance and availability point of view, by multiple machines. Refer to "What is the number of Web Application instances do I need?" for a discussion on this topic.
- The Web Application Servers are either clustered or managed. Refer to the discussion in the Clustering or Managed? section of this document for advice.
- Database clustering is typically handled by the clustering or grid support supplied with the database management system.
This solution represents the highest cost from both a hardware and a maintenance perspective. Historically, customers with large volumes of data or specific high availability requirements have used this solution successfully. The figure below illustrates the High Availability Architecture:
[Figure: High Availability Architecture (Browser Client, Load Balancer, multiple servers, Database Server/Cluster)]
of availability. This routing can be done automatically through the use of high availability software/hardware or manually by operators. The Oracle Utilities product architecture supports failover at all tiers of the architecture, using either hardware or software based solutions. Failover solutions can be varied, but a few principles have been adopted successfully by existing customers:
- Failover solutions that are automated are preferable to manual intervention. Depending on the hardware architecture used, the failover capability can be automated.
- Availability goals play a big part in the extent of a failover solution. Sites with high availability targets tend to favor more expensive, comprehensive hardware and software solutions. Sites with lower availability targets (or no goals) tend to use manual processes to handle failures.
- Failover is built into the software used by the products (though it may entail an additional license from the relevant vendor). For example, Web Application Server vendors have inbuilt failover capabilities, including load balancing, which is popular with customers.
- Hardware vendors will have failover capabilities at the hardware or operating system level. In some cases, it is an option offered as part of the hardware. Sites use the hardware solution in combination with a software based solution to offer protection at the hardware level. In this case, the hardware solution will detect the failure of the hardware and work in conjunction with the software solution to route the traffic around the unavailable component.
- Failover is made easier to implement for the product as the Web Application is stateless. Users only need a connection to the server while they are actively sending or receiving data from it. While they are inputting data and talking on the phone they are not consuming resources on the machine. For each transaction, the infrastructure routes the calls across the active components of the architecture.
At the database level the common failover facility used is the one provided by the database vendor. For example, Oracle database customers typically implement RAC. Failover configuration at the database is the least used by existing sites, as the cost of having additional hardware is usually prohibitive (or at least not cost justifiable). Sites wanting both failover and disaster recovery, but unable to afford both, consider a solution which combines the two. In this case, the disaster recovery configuration is used as a failover for non-disasters. For any failover solution to be effective, the site typically analyses all the potential areas of failure in their architecture and configures the hardware and software to cover each eventuality. In some cases, sites have chosen NOT to cover eventualities of extremely low probability. Using hardware Mean Time Between Failure (MTBF) values from hardware vendors can assist in this decision.
When designing a failover solution, the following considerations are important:
- Determine what the availability goals are for your site.
- Determine the inbuilt failover capabilities of the hardware and software that your site is using. This may reduce the cost of implementing a failover solution if it is already in place.
- List all the components that need to be covered by a failover solution. Review the list to ensure all aspects of "what can fail?" are covered.
- Design your failover solution with all the above information in mind, automating (within reason) where possible for your site. Ensure the solution is simple and reuses already available infrastructure to save costs.

Commonly sites use the following failover techniques in the architecture:
TABLE 8 - COMMONLY USED FAILOVER TECHNIQUES

TIER - COMMON FAILOVER SOLUTION

Network - Load Balancer (hardware for large numbers of users; software based for others). Consider redundant load balancers for "no single point of failure" requirements.

Web Application Server - Use inbuilt clustering/failover facilities unless the load balancer is doing this. Consider hardware solutions for batch or interface servers.

Database Server - Use inbuilt failover facilities in the database unless a hardware solution is more cost effective.
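Whichever technique is used, failure detection underpins it. The sketch below shows the kind of probe a load balancer or operator script might run against each Web Application Server; the host:port argument and the 5 second timeout are assumptions, and real sites typically probe a designated product health page rather than the root URL:

```shell
# Hypothetical node probe for a failover/monitoring script. Succeeds
# (exit 0) only if the node answers HTTP within the timeout; -f makes
# curl treat HTTP error statuses as failures too.
check_node() {
  curl -sf -o /dev/null --max-time 5 "http://$1/"
}
```

A scheduler could call `check_node` against each Web Application Server in turn and flag (or remove from the pool) any node that stops responding.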
The theory is that the first place the error occurs is the most likely candidate tier.
LOG FILES - COMMENTS

spl_service.log - Business Application Server log. In some versions of the Oracle Utilities Application Framework this log does not exist, as it is included in spl_web.log. Errors in here can be service or database related.

spl_xai.log - Web Services Integration, also known as XML Application Integration (XAI), log. This log file is exclusively used for the XAI Servlet. More detail can exist in the xai.trc file if tracing is enabled.

spl_web.log - Web Application Server log. This is typically where errors from the browser interface are logged. If errors are repeated from the spl_service.log, then the issue is not in the Web Application Server software but in the Business Application Server or below.
Note: There are other logs related to the J2EE Web Application Server used; these exist in this directory or under the location specified in the J2EE Web Application Server.

First error message is usually the right one - When an error occurs in the product, it can cause other errors. The first occurrence of any error is usually the root cause. This is more apparent when a low level error occurs which ripples across other processes. For example, if the database credentials are incorrect, then the first error will be that the product cannot connect to the database, but other errors will appear as meta data cannot be loaded into various components. In this case, fixing the database error will correct the other errors as well.

Not all errors are in fact errors - The product will issue errors if components are missing but it is able to overcome the issue. For example, if meta-data is missing the system may resort to using default values. In most cases this means the product can operate without incident, but the cause should be resolved to ensure correct behavior. Note: In some versions, such errors are reported as a WARNING rather than an ERROR.

Tracing can help find the issue - The product includes trace facilities that can be enabled to help resolve the error. This information is logged to the logs above (and other server logs) and can be used for diagnosis as well as for support calls. Refer to Online and Batch tracing and Support Utilities for more information about these tools.

There are usually a common set of candidates - When an error occurs there are a number of typical candidates:
- Running out of resources - The product uses the resources allocated to it on the machine. If some capacity is reached, whether physical (memory or disk space are typical resource constraints) or logical, via configuration such as JVM memory allocations, then the product will report a resource issue. In some cases the product will report the problem directly in the logs, but in other cases it will be indirect. For example, if disk space is limited then a log may not be written, which can cause issues.
- Incorrect configuration - If the product configuration files or internal configuration are incorrect for any reason, they can cause errors. A common example of this is passwords which are either wrong or have expired. File paths are also typical settings to check.
- Missing metadata - The product is meta-data driven. If the metadata is incorrect or missing, then the behavior of the product may not be as expected. This can be hard to detect using the usual methods and typically requires functionality testing rather than technical detective work.
- Out of date software - All the software used in the solution, whether part of the product or infrastructure, has updates, patches and upgrades to contend with. Upgrading to the latest patch level typically addresses most issues.
Refer to the Performance Troubleshooting Guides for more techniques and additional advice.
DATA TYPE - TYPICAL COMPOSITION - TYPICAL MANAGEMENT

Configuration Data - Data driving the configuration of the product (e.g. menus, rates, security, reference data etc). - Maintained by a subset of individuals. Kept indefinitely and only represents a small part of any database.

Master Data - Data pertaining to customers/taxpayers such as personal records, addresses, account information, contracts etc. - Maintained by end users. Kept indefinitely but can be driven by government legislation such as privacy laws or industry rules.

Transactional Data - Day to day data relating to any interaction or activity against the Master Data.
The table above illustrates the various differences between the types of data and their usual data retention rules. During an implementation and post implementation, you must be aware of the data types and then plan the data retention rules accordingly.
Archiving
Note: The Archive Engine is only available for selected Oracle Utilities Application Framework based products. Refer to your product documentation to verify its availability.

One of the most used techniques for managing data is archiving. The idea is that you only keep data in your database that is actively needed; any additional data is either archived to another place or simply deleted. Processing is therefore optimized against the active data, without having to ignore records no longer needed for processing. Archiving is usually associated with transaction data, as it typically has a limited life. Data is kept according to business practices or government regulations (especially around taxation records retention). Most customers keep a number of years of active transaction data and archive any data past the activity date.

The key to archiving is to know what to archive and to ensure that archiving that data does not violate a business rule or compromise the integrity of the overall system. Therefore most of the activity in archive planning is identifying the data to archive (transactional data), the criteria by which the data becomes valid for archive, and what form the archive is going to take (another database, file or microfiche).

Determining the data to archive is an important first step. Typically transactional data is a candidate, but there may be circumstances where master data is also archived. For example, you might archive customer records if the customer becomes deceased. The following types of tables are ideal candidates for archiving:
- Transaction tables with large amounts of records - Archiving such tables can have double gains. First of all you are removing records that have to be ignored by the processing, and you may also be freeing up valuable space as you reduce the sizes of the tables.
- Transient data - Typically, data included in interfaces may be loaded into tables prior to loading into the main transaction tables. This is known as staging. It is a common technique, as validation can be executed against the staging area and only validated data passed to the transaction tables. This separates invalid data from valid data. Invalid records are kept in the staging area until they are resolved. The only issue then is that records that are valid must be removed from the staging area on a regular basis. A common principle in records retention is that if you can get a record from someplace else then you can remove one of the copies. For example, if you print off an email, you still have a record of it; therefore you can delete the electronic copy or destroy the physical copy, you do not need both. This principle applies to
the staging area: the valid records are already in the transaction tables, so they can be safely removed from the staging area. The only exception to this principle is where a business process or regulation requires you to keep both.
- Living data - Data pertaining to living customers needs to be retained for processing, but if you work in a deregulated market where you must surrender details of customers as part of the process of transferring them to a competitor (a.k.a. losing a customer), then they may become candidates for archiving. The validity of this case may vary according to business practices or regulations. The same principle can be applied to customers who become deceased. What data, if any, do you retain when a customer dies? Does the data become a candidate for archival?
Once the data to be archived is determined, the next step is to identify the criteria that will be used to decide when the data is valid to archive. Usually, archive criteria are time-based (e.g. older than x months) but can be quite sophisticated. The criteria will be set by business processes or government legislation, but there are a few additional criteria that also need to be considered:

Active data - If a record satisfies the time criterion but is somehow still active, then it is not eligible for archival. For example, if a payment is older than business rules recommend, but is in dispute for some reason, then it cannot be archived.

Integrity - When archiving data, no integrity rule (referential or otherwise) should be broken. You must guarantee that archiving a record will not adversely affect other records in the system or even prevent the system from operating. For each record deletion, any related tables should be examined to see if any condition prevents the deletion (or they should be covered in the archive as well; this is known as a cascade archive).

De-archive - One of the major misconceptions about archiving, from a data management aspect, is the ability to return data from an archive (a.k.a. de-archival). Not all archiving facilities support this, as the space-saving benefits of archiving are somewhat diluted by this ability, and the overhead of re-integrating the archived data into active data can be quite difficult and messy. The best advice is to avoid this situation altogether and ensure the criteria used cover only data that is not going to be de-archived.
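The eligibility rules above can be sketched in code. This is an illustrative sketch only; the record fields (txn_date, in_dispute) and the dependency check are hypothetical names, not part of the product.

```python
from datetime import date, timedelta

def eligible_for_archive(record, retention_days, has_dependents):
    """A record qualifies for archive only if it passes the time-based
    criterion, is not still active (e.g. in dispute), and removing it
    would not break referential integrity (cascade archive otherwise)."""
    cutoff = date.today() - timedelta(days=retention_days)
    if record["txn_date"] > cutoff:
        return False  # fails the time-based criterion
    if record["in_dispute"]:
        return False  # still active, cannot archive
    if has_dependents(record):
        return False  # would violate integrity without a cascade archive
    return True
```

Each rule is checked independently, so a record that satisfies the time criterion is still rejected if either the activity or integrity check fails.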
Note: The current version of the archive facility inbuilt in the Oracle Utilities Application Framework does not support de-archival.

Once the data to be archived has been identified and the criteria agreed and implemented, the format of the archive needs to be taken into account. There are a number of options that can be considered:

Using the Archive facility - If there is a requirement for the data to be made available online but not active in the system, then consider using the inbuilt archiving facility provided in the Oracle Utilities Application Framework.

File based - If there is NO requirement to view the data online, but it must be made available to offline viewers (such as data loaders or even microfiche), then archiving data to a sequential file for
reference is an alternative. This data can then be archived to a tape, or to a location from which it can be retrieved and viewed at a later stage. The format of the file is site-specific but can be as simple as a database export or as complex as formatted, fixed-format, multi-record data files. The archive facility provided with the Oracle Utilities Application Framework includes the ability to archive the data to a file.

Purge only - One of the most common archiving techniques is to simply delete the data from the database. This is what eventually happens with all techniques (you cannot keep the data forever). In this technique, the records identified for archiving (those passing the criteria) are simply deleted from the database to release space.
Archiving data on a regular basis removes inactive data from your database, which may improve performance and save disk space. Generally, customers run archiving processes at least once a week.
implementation, or soon after, the audit requirements should be clarified and factored into any data retention policy. It should be noted that the products themselves do not impose any particular data retention policy. Data retention tends to apply to specific data types only:

Transactional data is subject to data retention rules, as it is the data that grows over time.

Master data tends to remain in the database for the life of the system, even in a deregulated market, for fraud prevention purposes.

Meta-data is not covered by a data retention policy, as it needs to be present for the product to operate, so it is rarely archived or removed.

Configuration data will vary, as it is wide-ranging, but is generally also rarely archived or removed.
In terms of their platform, customers should monitor data growth to reach a decision about archiving, if they wish to do so, or simply removing the data. Typically, once the status of a record in the staging tables used for interfaces becomes Complete, it becomes redundant data. The data is reflected in the main product tables and is no longer required in the staging tables. Removal of completed records, on a regular basis, can have storage benefits as well as performance benefits.
[Figure 11 - Staging Process Overview: diagram of the staging flow between the Product and the Output Staging tables.]
It is assumed that completed staging records are no longer required after a period of time, as the data they contain has been reflected in the main tables. There is no business reason to keep staging records for long periods after they have been completed. Regular cleanups of the staging tables to remove completed records will have great performance benefits for interfaces. Successful sites run the provided purge jobs to improve performance and reduce disk space usage. To decide when to run these purge jobs, and what parameters to pass to them, the following is recommended:

Work out with the business at the site how long they wish to retain completed records. You can stress to them that NO important data is lost in purging completed records, as their data is reflected in the main tables. This value is used for the NO-OF-DAYS batch parameter passed to the job. The value is the number of days, not the number of business days (e.g. a value of 14 for NO-OF-DAYS means 2 weeks).

For the To Do Purge job, there are additional parameters to decide the specific To Do type to purge, or ALL (DEL-TD-TYPE-CD and DEL-ALL-TD-SW). Work with the business to decide if this job is to be run once (for all To Do types) or multiple times, once for each To Do type. Successful customers run it to delete all To Do types, to reduce the number of jobs to run.
Decide the frequency based upon the data growth of each table. Ideally these purge processes should be run each business day at the end of the nightly batch schedule to keep performance optimal, but they should be run at least once a week.
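The staging purge rule above can be sketched as follows. The row layout (status, completion_date) is hypothetical, and no_of_days stands in for the NO-OF-DAYS batch parameter; this is an illustration of the selection logic, not the product's actual purge job.

```python
from datetime import date, timedelta

def select_for_purge(staging_rows, no_of_days, today=None):
    """Return completed staging records older than NO-OF-DAYS days.
    Only records in Complete status are ever purged; their data is
    already reflected in the main product tables."""
    today = today or date.today()
    cutoff = today - timedelta(days=no_of_days)
    return [r for r in staging_rows
            if r["status"] == "Complete" and r["completion_date"] <= cutoff]
```

Note that error or pending records are never selected, matching the advice that only completed records are redundant.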
Partitioning
One of the most popular data management techniques is the use of partitioning on tables. Partitioning enables tables and indexes to be split into smaller, more manageable components. Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces; each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics, such as having table compression enabled or being stored in a different tablespace. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the product, a partitioned table is identical to a non-partitioned table; no modifications are necessary when accessing a partitioned table using SQL. Partitioning has known benefits:

Divide and conquer - With partitioning, maintenance operations can be focused on particular portions of tables. For example, a database administrator could back up a single partition of a table, rather than backing up the entire table. For maintenance operations across an entire database object, it is possible to perform them on a per-partition basis, thus dividing the maintenance process into more manageable chunks.

Parallel execution of SQL - Most databases will sense that the table is partitioned and run SQL statements (including SELECT and INSERT statements) in multiple threads. Each of the partitions can be thought of as an individual table, and the database takes advantage of this.

Pruning - Queries operating on one partition can run substantially faster due to the reduced size of the data to search.

Partition availability - Partitioned database objects provide partition independence. This characteristic of partition independence can be an important part of a high-availability strategy.
For example, if one partition of a partitioned table is unavailable, all of the other partitions of the table remain online and available; the product can continue to execute queries and transactions against this partitioned table, and these database operations will run successfully if they do not need to access the unavailable partition.
When using partitioning, you should ensure that major processes accessing the table do not cross partition boundaries. Crossing from one partition to another can cause slight delays, as physically the table has been separated into individual files per partition. This situation should be avoided when designing the partitioning regime for the table.
The key to success with partitioning is recognizing which tables are candidates for partitioning and what partitioning scheme to use. Partitioning must be planned and designed into a database to ensure that the partitioning regime is optimal for your products. The ideal candidates for partitioning are large tables with a small number of indexes. The benefits of partitioning are greatest for large tables, rather than applying the principle across all tables; the minimal number of indexes is a criterion to minimize the likelihood of SQL crossing partition boundaries.

Once the tables to be partitioned are chosen, the next step is to decide the number of partitions to implement. The rule of thumb is to choose the number of partitions so that any SQL that accesses the table using the indexes will minimize crossing partition boundaries. If your product is multi-threaded, then each thread of the process needs to remain within a partition. In this case the number of partitions should be equal to the number of threads (or a divisor of it). For example, if a major process runs in 10 threads, then the number of partitions could be 10, 5 or 2. Each of these numbers ensures that each thread stays within a partition.

Once the number of partitions is chosen, the next step is to decide which partitioning scheme to use. Database vendors have implemented numerous ways of dividing a table into partitions. Each of these schemes (and sometimes a combination of them) tells the database how to split the data into the various partitions as well as how to access the partitions. The most common partitioning scheme is known as range partitioning, where a range of values (index based) is used to designate the partition a record is placed within. Refer to the partitioning documentation provided by your database vendor for details of all the different schemes that can be used to partition your table data.
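The "threads stay within a partition" rule of thumb above can be expressed as a small helper: the valid partition counts for a table are the divisors of the batch thread count. This is a planning sketch, not a product utility.

```python
def candidate_partition_counts(thread_count):
    """Return partition counts that keep each batch thread inside a
    single partition, i.e. the divisors of the thread count."""
    return [p for p in range(1, thread_count + 1) if thread_count % p == 0]
```

For a 10-thread process this yields 1, 2, 5 and 10, matching the example in the text (a single partition is technically valid but offers no partitioning benefit).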
Table partitioning represents the easiest method of data management and is usually the first data management technique used before other techniques are considered.
Compression
Note: Database level compression varies from one database vendor to another. In some cases, it is included as an optional component of the database; in other cases, it is a separate piece of software that must be obtained from the database vendor (or an approved third party).

A technique that is starting to emerge from the database vendors is compression of data. This can be done at a database level (global) or a table level, and typically requires no changes to a product to implement. As the data is stored and retrieved, it is compressed and decompressed before being passed back to the product. As far as the product is concerned, it is unaware of whether the data is compressed or not. This appeals to database administrators, as they can experiment with compression without the need to involve the product developers. Historically, database systems have not heavily utilized compression techniques on data stored in tables. One reason is that the trade-off between time and space for compression is not always attractive for databases. A typical compression technique may offer space savings, but only at a cost of much
increased query time against the data. Furthermore, many of the standard techniques do not even guarantee that data size does not increase after compression. Over time, database vendors have addressed this trade-off by implementing unique compression techniques. It has come to a stage where there is virtually no negative impact on the performance of queries against compressed data; in fact, compression may have a significant positive impact on queries accessing large amounts of data, as well as on data management operations like backup and recovery. Each database vendor supplies guidelines on the effective use of compression, to minimize any overhead for all SQL statements (including INSERTs, UPDATEs etc.) and to identify which tables are the best candidates for compression.

Note: Not all tables in Oracle Utilities Application Framework based products will benefit from compression, as the database vendors have imposed efficiency rules that may preclude specific tables.
Database Clustering
One of the more advanced features that has emerged as a valid data management technique is the ability for databases to be clustered. This is a relatively new technique for data management, as most people associate clustering with availability rather than with managing data volumes. Database clustering provides the ability for a database to be spread across more than one machine while appearing to the product as a single database. The database management system manages all the synchronization and load balancing of transactions automatically. Clustering was designed to support the availability of the database in case of a hardware failure in one of the nodes of the cluster. Experience within the industry has shown that using the clustering capabilities can also improve performance when large amounts of data are involved. Logically, clustering gives the database access to more power by spreading the workload across machines. This technique is applicable where the volume of the data is impacting database performance. One of the major symptoms is that CPU usage on the database server is consistently high, no matter what tuning is performed at the database and product level. This implies that the database is CPU bound, and while there may be an option to add more CPUs to the server, clustering the database becomes a viable alternative. While implementing clustering has been made progressively easier with each release of the database management systems, it must be planned using the guidelines outlined by the database vendor. Refer to the documentation provided on clustering by your database vendor.
Typically, a site will have a preferred regime and set of tools used to back up and recover all systems at the site. When implementing the product, this regime and set of tools is typically reused to cater for the product's and the business's needs. When considering a backup regime for the product, the following should be considered:

There is nothing within the product technically that warrants a particular approach to backup and recovery. Most customers continue to use their existing approaches.

There is nothing within the product technically that warrants a particular backup and recovery tool. Most customers use the native tools provided with their platforms, for cost savings, but some customers have purchased additional infrastructure to take advantage of faster backups/recoveries or additional features provided by such tools.
If your site does not have a backup regime already, the following can be considered default industry practice:

Use hot incremental backups on production during the business week to reduce outage times.

Do a FULL backup (hot or cold) at least once a week to reduce recovery times.

Verify backups after they are taken to reduce the risk of delayed recoveries.

On non-production, consider either the same regime as production, or regular FULL backups at peak periods in an implementation.
[XFH-DEFAULT]
FILEMAXSIZE=8
IDXFORMAT=8

You then place this configuration file in a location that can be referred to by the runtime. You can either deposit the file in $SPLEBASE/scripts (or %SPLEBASE%\scripts) or in a site-specific central location. To enable support for larger formats, you initialize the EXTFH environment variable with the location of the configuration file. For example:
set EXTFH=D:\oracle\TUGBU\scripts\cmextfh.cfg (for Windows)
export EXTFH=/oracle/TUGBU/scripts/cmextfh.cfg (for Linux/UNIX)
This can be done in your .profile (for Linux/UNIX) or using the facilities outlined in Custom Environment Variables or JAR files. For additional details and parameters, refer to My Oracle Support Doc Id: 817617.1.
While all care is taken in specifying the hardware with cost in mind, experience has shown that customers need to review the specification in light of their internal standards.
JavaScript must be enabled. The product framework uses JavaScript to implement the browser user interface.
HTTP 1.1 support must be enabled. If you use a proxy to get to the server, then also check "Use HTTP 1.1 through proxy connections".
Network bandwidth
One of the most common questions asked about the product is the network footprint of an Oracle Utilities Application Framework based product. This question is difficult to answer precisely, for a number of reasons:

The amount of data sent up and down the network depends on how much change is made by an individual user at the front end of the product. Only the elements changed by the end user are transmitted back to the server; the more the user changes, the more data is transmitted. Given the numerous possible permutations and combinations of data changes at any given time, this can be hard to estimate.

The Oracle Utilities Application Framework supports partial object faulting. This means the framework only sends data to the client that is being displayed. On a screen with more than one tab, the framework only sends the data for the tabs that are accessed by the end user. This
means only part of the overall object required by the screen is transmitted. Most users tend to operate on a small number of tabs, but this can vary from transaction to transaction.

All transmissions between the client and server are compressed using the compression natively supported by HTTP 1.1. This can reduce the actual size of the data transmission considerably, depending on the content of the changes.

Screen data that can be reused is cached on the client machine. The product takes advantage of the caching facilities in the HTTP 1.1 protocol and the browser caching functionality. For example, screen definitions and graphics are stored on the client machine to reduce the network footprint. Upon every transmission of a screen element, the data in the cache is tagged with an expiry date to indicate the life of the element in the cache. Use of client-side caching can reduce network traffic considerably, with some customers reporting up to a 90% reduction in network traffic when this caching is enabled.
To provide an estimate of the network footprint, a range of 10-200k per transaction, on average, is quoted to adequately cover all the aspects outlined above. This value is based upon experience with customers. It is possible to track network bandwidth using a log analyzer against the W3C standard access.log produced by your Web Application Server. Refer to the Performance Troubleshooting Guides for more information about this log.
Ensuring that only legitimate traffic is on a network can provide greater bandwidth for all applications (including the product) and improve consistency.
In a network, latency, a synonym for delay, is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages, latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency. The greatest impact on performance is inconsistent latency. The latency assumption is that data should be transmitted instantly between one point and another (that is, with no delay at all). The contributors to network latency include:

Propagation - This is simply the time it takes for a packet to travel between one place and another at the speed of light.

Transmission - The medium itself (whether optical fiber, wireless, or some other) introduces some delay. The size of the packet introduces delay in a round trip, since a larger packet will take longer to receive and return than a short one.

Router and other processing - Each gateway node takes time to examine and possibly change the header in a packet (for example, changing the hop count in the time-to-live field). This is a common cause of network latency.

Other computer and storage delays - Within networks at each end of the journey, a packet may be subject to storage and hard disk access delays at intermediate devices such as switches and bridges.
Minimizing latency, or at least ensuring consistent latency, is the goal of most product sites. A discussion of latency, and how to measure it, is contained in the whitepaper Performance Troubleshooting Guide.
It is possible to track errors and trends from the log using log analyzers. It is possible to parse the log at a low level and determine the number of concurrent users and the users who have used the system (and, interestingly, conversely who has NOT used the system). It is possible to track the flows of individual sessions, known as click streaming, to track the screens and the data used on those screens. It is also possible to determine the criteria used by users for searches; this is useful for detecting wildcard searching.
This log is useful, but it is large, so it needs to be managed as suggested in Backup of Logs.
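The kind of low-level parsing described above can be sketched with a short script. This assumes the access.log is in the common/combined W3C-style layout; adjust the pattern to your Web Application Server's actual log format.

```python
import re
from collections import defaultdict

# Pattern for a common-format access log line:
# host ident authuser [timestamp] "request" status bytes
LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def summarize(log_lines):
    """Count hits and bytes per user, and count 4xx/5xx error responses."""
    users = defaultdict(lambda: {"hits": 0, "bytes": 0})
    errors = 0
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that do not match the assumed format
        stats = users[m.group("user")]
        stats["hits"] += 1
        if m.group("bytes") != "-":
            stats["bytes"] += int(m.group("bytes"))
        if m.group("status").startswith(("4", "5")):
            errors += 1
    return users, errors
```

The per-user hit counts give the set of users who did (and did not) use the system, and the byte totals feed directly into the bandwidth estimates discussed earlier.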
Customers implement the latter suggestion in the following ways:

Oracle WebLogic - A server entry for each new server is set up in the same WebLogic instance. The port number can be the same (if the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed servers). A proxy is required to provide a common connection point and to implement load balancing. The memory footprint will be the same size for each server.

IBM WebSphere - A new server is created within the WebSphere instance. The port number can be the same (if the server is housed on a separate machine, known as clustering) or a different port number (i.e. managed servers). A proxy is required to provide a common connection point and to implement load balancing. The memory footprint can be different for each server, as it is held against the server entry within WebSphere.
Refer to Production Environment Configuration Guidelines for more guidelines for production systems for JVM memory settings.
Load balancers
Oracle Utilities product customers who have more than one Application Server (physical or logical) must use a load balancer to route the traffic evenly across the available servers. This load balancer can be either software based (such as a web server with the appropriate plugin from the Application Server vendor) or hardware based (such as BigIP or other Layer 7 switches). Experience has shown that customers with a large number of users (typically greater than 1500) tend to use hardware load balancers, while smaller customers use software based load balancers. Using load balancers with the product does not guarantee that load is evenly distributed, as transactions do not have a consistent resource load factor. The resource load factor for any product depends on the transaction type and the data used in that transaction. For example, search transactions are different
from maintenance transactions, and the resource usage of any search depends on the criteria used; two executions of the same search will have different response and resource usage profiles. Factored on top of that is the fact that the load on a server is the summation of all the transactions sent to it, and that transactions vary from second to second, minute to minute, hour to hour etc. The best you can do is approximate. When installing a load balancer, there are a number of algorithms offered for load balancing:
TABLE 11 - EXAMPLE LOAD BALANCING ALGORITHMS

Round Robin - Traffic is routed to each available server in turn. Typically used by most product customers (see below).

Random - Traffic is routed to a randomly selected server, so load is even only on average.

Weighted Round Robin - A variation on Round Robin that allows support for clusters where all servers are not the same size. Not generally used by product customers.

Client IP based - Traffic is routed using the client IP address as the identifier, where servers are assigned IP address ranges. Has been used by customers, but has limitations if used with virtual servers such as Terminal Services or Citrix.

Load - Load factors of transactions are measured and used to determine which server is best suited. Not used with the product, as most load factors are inconsistent across transaction invocations.
Typically, most customers use Round Robin, as it is simple and, given that load is unpredictable, can yield the best results. Most customers understand that in some periods the load will not be balanced, but on average the load is relatively balanced. Remember that each transaction's time is a function of how much data is changed. If using load balancing, the following additional advice is applicable:

Ensure that the load balancer does not interfere with Internet Explorer caching. Interference may result in a low cache hit rate and increase the bandwidth used.

Ensure that the load balancer supports HTTP 1.1 headers, to support compression.

Ensure that the load balancer supports passive and active cookie persistence for session cookies. The Web Application Server uses session cookies for passing security credentials between the client and the server. The load balancer must not compromise this facility.

Ensure that the load balancer supports SSL persistence, if SSL is used, to ensure that encryption and decryption are not compromised.
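The contrast between the two simplest algorithms above can be sketched in a few lines. This is an illustration of the routing behaviour only, not any particular load balancer's implementation.

```python
import itertools
import random

def round_robin(servers):
    """Yield servers in a repeating cycle, so every N requests are
    spread exactly evenly across N servers."""
    return itertools.cycle(servers)

def random_pick(servers, rng=random):
    """Pick a server independently at random; the distribution is
    even only on average, over many requests."""
    return rng.choice(servers)
```

Round Robin's deterministic cycle is why it is the common choice when per-transaction load is unpredictable: it guarantees an even request count even when it cannot guarantee an even resource load.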
Preload or Not?
One of the startup features of the product, in V1.5 and above, is the preloading of pages to save time. This preloading process dynamically rebuilds the screen definitions from the XML meta data at startup. While this setting (by default) enables the startup to pre-build them (instead of building them on first invocation), the
59
startup of the Web Application Server is delayed while the preload process is executing. The startup of the server is delayed until the last of the screens is preloaded. While the preloading of an individual screen is very quick (measured in milliseconds), building all screens (1000+) can cause significant delays to initial availability AFTER a restart. It is possible to influence the amount of preloading using two parameters in the Web Application Descriptor:

preloadAllPages - This parameter affects how much preloading takes place, if preloading is enabled. A value of true preloads every screen in the product. A value of false preloads screens off the Main menu only (the screens the end users will be using).

disablePreload - This parameter controls whether preloading is performed at all. This parameter effectively overrides the preloadAllPages parameter.
preloadAllPages = true, disablePreload = true - Pages are not preloaded at all. The first invocation of a screen by the first user of that screen loads it for all users. This can cause a slight delay in the initial screen load for a single user, but application startup is quicker.

preloadAllPages = true, disablePreload = false - All pages are preloaded, including the administration and utilities menus. This setting is not recommended for production, as it delays Web Application Server startup unnecessarily.

preloadAllPages = false, disablePreload = true - Pages are not preloaded at all. The first invocation of a screen by the first user of that screen loads it for all users. This can cause a slight delay in the initial screen load for a single initial user, but application startup is quicker.

preloadAllPages = false, disablePreload = false - Default. Pages on the Main menu are preloaded. This delays the startup of each managed server, but ensures screens load quicker for ALL users.
Changing these parameters affects availability rather than performance, but should be considered if availability is critical or you are not using all the screens in the product. It is recommended that the following settings be implemented if you do not use the entire product, or you want startup to be quicker: preloadAllPages = false and disablePreload = true.
Note: This requires the Application Descriptors for all applications to be updated.
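As a rough illustration, the recommended settings might appear in a Web Application Descriptor as context parameters like the following. This fragment is hypothetical; confirm the exact descriptor location and element names for your product version before making changes.

```xml
<!-- Hypothetical web.xml fragment illustrating the recommended
     preload settings; verify against your product's descriptors. -->
<context-param>
    <param-name>preloadAllPages</param-name>
    <param-value>false</param-value>
</context-param>
<context-param>
    <param-name>disablePreload</param-name>
    <param-value>true</param-value>
</context-param>
```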
The reason that sites use the native utilities is that operations staff are more familiar with them; they offer more options and typically have a number of interfaces (not just the command line). The utilities provided with the Oracle Utilities Application Framework utilize the native utilities, but use only a subset of their options. If the native utilities are used, then the spl[.sh] utility should only be used to start and stop non-Web Application Server components.
Customers with multiple servers use either a hardware or a software proxy, with the larger scale customers favoring hardware based solutions. The only thing to remember with a proxy is to make sure the following are taken into account:

The proxy server must support the IE caching scheme and not disable it or adversely affect its operation. This will increase network throughput.

The proxy server must support session cookies. It must be configured to support the passing and processing of session cookies, as they are used for security tokens in the product. Failure on this point will result in the security dialog being displayed before EVERY screen.
Tests have shown that this number varies between 300-500 users on a single Web Application Server JVM instance. The number varies according to the JVM version used and the vendor that supplies the JVM. This number represents the maximum number of simultaneously active users hitting the Web Application Server at peak time. The easiest method for determining the number of instances is to divide the number of users expected on the system, at worst case, by 300 and then round up to the next integer. For example, to support 750 users you would specify 3 instances; to support 500, you would specify 2 instances, etc. This method assumes the worst case. Regular monitoring of the actual number of connections will reveal whether this needs to be altered.
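The sizing rule above is simple enough to capture directly. The 300-users-per-instance figure is the conservative end of the 300-500 range quoted in the text.

```python
import math

def web_instances(expected_users, users_per_instance=300):
    """Worst-case number of Web Application Server JVM instances:
    peak users divided by capacity per instance, rounded up."""
    return math.ceil(expected_users / users_per_instance)
```

This reproduces the examples in the text: 750 users yields 3 instances and 500 users yields 2.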
Each Web Application Server calls it by a different name:

Oracle WebLogic Server - Default Execute Queue/Threads

IBM WebSphere Server - Thread Pool
Note: For newer versions of Oracle WebLogic the thread pool is automatically managed by the Web Application Server itself so the settings explained in this section may not apply. If you choose to manually manage the connections in Oracle WebLogic then the advice does apply.
For the purposes of this article, we will call it the thread pool. The number of connections allocated in the pool is not the same as the number of users logged on. As the product is a stateless application, the thread pool represents the number of users actually hitting the web server, not idle users. Idle users in a stateless application consume little or no resources (in fact, the only resource an inactive user holds is an open socket to the web server). Therefore, the size of the thread pool at any time is the number of ACTIVE users using the product. For the product, the number of users for the Web Server is dictated by this formula:

Number of Active "Users" = Number of Active Users in the product + Number of Active Threads in XAI + Number of Threads in MPL

Note: Not all Oracle Utilities Application Framework based products use the MPL. XAI and MPL threads should be treated as users as well, because they typically share the same thread pool.
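The formula above translates directly into code. MPL threads default to zero here, since not all Framework-based products use the MPL.

```python
def active_users(online_users, xai_threads, mpl_threads=0):
    """Total demand on the web server thread pool, per the formula:
    active product users + active XAI threads + MPL threads."""
    return online_users + xai_threads + mpl_threads
```

For example, 100 active online users plus 10 XAI threads and 5 MPL threads means the pool must accommodate 115 "users".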
[Figure: connection overview - the Browser Client connects to the Web Server over HTTP/S; MPL/Fusion connects via HTTP/S, JMS, File, Database and Email Server interfaces.]
Thread pools are not static in size; they can grow and shrink depending on the traffic volumes experienced. For the product, thread pools have three attributes that need to be considered for sizing:

Minimum size - This is the size of the thread pool at Web Application Server startup time, and the absolute minimum if the pool is shrunk due to inactivity. For the product, this typically represents the typical load on the Web Application Server; in other words, the typical number of active users on the system at any time. Most customers use either the typical load for the day period or the typical load for after business hours. The latter is used where sites
want to minimize resource usage, as the pool size is directly related to the amount of memory used by the Web Application Server. The higher the minimum, the higher the memory usage for the server (even at rest).
Maximum Size - This is the maximum size the thread pool can grow to within the Web Application Server in response to the peak load of traffic. For the product, this typically represents the peak load expected: the largest amount of traffic expected at any point in time. If the maximum is set too low for the load, then end users will experience delays even getting a connection to the Web Application Server. Again, the value here is tied to memory usage: the higher the value, the higher the memory footprint at peak.
Inactivity Tolerance - This value (usually in seconds) is the amount of time a thread remains unallocated to a user before it is destroyed. It reduces the pool size, when the pool has grown above the minimum, by detecting when there is a drop in traffic. Each Web Application Server has its own default (and even a different name) for it. Typically customers leave the default, but it is worth noting in case it needs changing in the future.
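For sites that do manage the pool manually on older Oracle WebLogic releases, the three attributes above map onto the execute queue definition in config.xml. The element and attribute names below follow the legacy (pre self-tuning) configuration style and the values are illustrative only; verify both against the documentation for your WebLogic version:

```xml
<!-- Sketch only: legacy WebLogic execute queue sizing. Newer releases
     self-tune via Work Managers, so this element may not apply. -->
<ExecuteQueue Name="weblogic.kernel.Default"
              ThreadCount="30"
              ThreadsMinimum="30"
              ThreadsMaximum="90"
              ThreadsIncrease="5"/>
```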
How do you work out the pool sizes? There is no specific product recommendation, as it varies according to the volume of transactions, but the following has been observed at customer sites: For the minimum pool size, use the minimum number of active users for your site. This may be deduced from testing, but be aware that each transaction has a different duration depending on the transaction type (Maintenance, List and Search) and the actual data used in the transaction. Experience has shown that dividing the number of defined users by three (3) can be a good rule of thumb; several product customers have noticed that only about a third of their users are active at any time. This rule of thumb may not apply to your site, but at least it may be used as a guide. As for the maximum, the only advice that applies is that the value should NOT equal the number of users you have defined to the system. The value will vary according to the expected peak traffic experienced at the site. Customers have used between 33-70% of the number of defined users as the setting for the maximum pool size. To determine the optimum value for your site, it may be necessary to use trial and error.
Note: Setting the minimum and maximum higher than necessary may waste memory resources on the Web Application Server and may cause performance degradation. Once you have set these values in your configuration, you will need to monitor them to see whether the minimums and maximums need adjusting. Customers have determined their own rules of thumb and reached the sweet spot after a few weeks or months of testing or production.
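As an illustration of the rules of thumb above (the one-third minimum and the 33-70% maximum are observations from customer sites, not product mandates), a quick sizing calculation might look like this; the user count is hypothetical:

```java
public class PoolSizing {
    // Rule-of-thumb sizing from observed customer sites: roughly one third of
    // defined users are active at once (minimum pool size), and the maximum is
    // commonly set somewhere between 33% and 70% of defined users.
    static int minimumPoolSize(int definedUsers) {
        return definedUsers / 3;
    }

    static int maximumPoolSize(int definedUsers, double peakFraction) {
        return (int) Math.round(definedUsers * peakFraction);
    }

    public static void main(String[] args) {
        int definedUsers = 900; // hypothetical site
        System.out.println(minimumPoolSize(definedUsers));      // 300
        System.out.println(maximumPoolSize(definedUsers, 0.5)); // 450
    }
}
```

As the whitepaper notes, treat any such starting point as an input to monitoring and trial-and-error tuning, not a final answer.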
Note: A detailed discussion of LDAP integration is available in the Oracle Utilities Application Framework LDAP Integration whitepaper.
Lightweight Directory Access Protocol (LDAP) is promoted as a means to leverage an organizational directory as a principal registry for product user authentication. Therefore, as part of the security setup of the product, you may need to integrate to an onsite LDAP security repository. This is supported directly by the Web Application Server software and the product does not require additional configuration. Each of the Web Application Server vendors has specific instructions for integrating LDAP, but the same process is followed:
Determine LDAP Query - The LDAP query used to find the users must be determined. Even though LDAP is a standard protocol defined by the IETF, the repository structure itself will vary from vendor to vendor, and even the same vendor's repository structure will vary from customer to customer, as it can be altered to suit the business model. This is the hardest part of the process, as the query needs to be correct or it will not return the right records; it is akin to submitting the wrong SQL statement. There are tools, like ADFIND (for Microsoft ADS, for example), to help you with this process.
Define LDAP settings to the Web Application Server - Input the query and the credentials used to access the LDAP repository. This will vary between Web Application Servers, but basically you need to define the following:
The location (host) of the LDAP server(s)
The port numbers for the LDAP server(s) (usually 389)
The credentials used to read the LDAP server(s) (userid/password)
The LDAP query to get the users (and sometimes groups, for some Web Application Servers)
(Optional) Cache settings to save data retrieved from the LDAP server for performance reasons
Note: Ensure that the LDAP you have specified contains a definition of the administration account you use to start/stop/administer the product; otherwise, if you have made a mistake, it may not be possible to restart the Web Application Server. To reduce the risk of this happening, some sites define two repositories: one for the LDAP server and one for the default security repository provided by the Web Application Server vendor as a precaution. The latter is used to house the administration accounts you do not want to store in the company LDAP.
Restart to reflect changes - Restart the Web Application Server.
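To make the "Determine LDAP Query" and "Define LDAP settings" steps concrete, the sketch below builds the kind of JNDI environment and search filter a Web Application Server uses to read a directory. The host, port, bind DN, attribute name and filter structure are all hypothetical (an Active Directory style example); substitute the values for your own repository:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapQuerySketch {
    // Builds the JNDI environment used to read an LDAP repository.
    // All values here are placeholders for illustration only.
    static Hashtable<String, String> ldapEnv(String host, int port,
                                             String bindDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://" + host + ":" + port);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, bindDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    // A typical Active Directory style user-lookup filter. As the whitepaper
    // notes, the correct attribute and object class vary per site.
    static String userFilter(String loginAttribute, String login) {
        return "(&(objectClass=user)(" + loginAttribute + "=" + login + "))";
    }

    public static void main(String[] args) {
        System.out.println(ldapEnv("ldap.example.com", 389,
                "cn=reader,dc=example,dc=com", "secret").get(Context.PROVIDER_URL));
        System.out.println(userFilter("sAMAccountName", "jsmith"));
    }
}
```

Getting this filter wrong is the "wrong SQL statement" problem described above, which is why verifying it with a directory tool first is worthwhile.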
For more information see the following sites for your Web Application Server:
Look at both inputs and find the fields from the LDAP that you can map to the product schema. The mapping uses the cdxName attribute for product fields and ldapAttr for the LDAP field name. For example:
<LDAPCDXAttrMapping cdxName="Firstname" ldapAttr="cn"/>
The above entry maps the product field "First Name" to the "cn" field in the LDAP, so when the import is performed it will use the "cn" value for "First Name".
Any fields required by the product but not present in the LDAP use the "default" tag instead of the ldapAttr tag. Repeat the above tasks for "User Group" and "membership" of the groups.
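Putting the two tags together, a mapping fragment might look like the sketch below. The element name follows the example above, but the field names and default value are hypothetical; the Importing Users And Groups documentation describes the exact file format:

```xml
<!-- Illustrative only: field names and the default value are examples -->
<LDAPCDXAttrMapping cdxName="Firstname" ldapAttr="cn"/>
<LDAPCDXAttrMapping cdxName="Language" default="ENG"/>
```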
Store the mapping file - Create a mapping file in a location readable by the product administration account. The format of the file is documented in the Importing Users And Groups online documentation.
Include the mapping file - Include the mapping file in the XMLParameterInfo.xml configuration file as documented in the Importing Users And Groups documentation.
Define the LDAP server location - Configure the LDAP Server location and port number in the XAI JNDI Server dialog.
Run - Initiate the import as documented in the Importing Users And Groups online documentation.
The LDAP import service calls the LDAP Import Adapter, which performs the following:
It reads the configuration information provided as XAI parameters to the request. Parameters include the Java Naming and Directory Interface (JNDI) server, the user and password for the LDAP server, and the transaction type (i.e., import).
It connects to the LDAP store using a JNDI specification.
For each element (user or group) in the request, the LDAP is searched by applying the search filter (specified for the element) and the searchParm (specified in the request).
The adapter goes through each entry found in the search and verifies whether there is already an entry in the system and whether a user belongs to a user group. From this information, it automatically determines the action to be taken:
Add
Update
Link user to group
Unlink user from group (by setting the expiration date)
If the entry is a group, the adapter also imports all the users in LDAP that are linked to the group. If the entry is a user, the adapter imports the groups to which the user belongs in LDAP. For each imported entity, the adapter creates an appropriate XML request and adds it to the XAI upload staging table. For example, if the action is to add a user, it creates an XML request corresponding to the CDxXAIUserMaintenance service; if the action is to add a group, it creates an XML request corresponding to the CDxXAIUserGroupMaintenance service. The XAI upload staging receiver processes the upload records in sequential order (based on the upload staging ID). The MPL is used to complete the processing.
Note: If a user is imported because it belongs to an imported group, the adapter does not import all the other groups to which the user belongs. If a group is imported because the imported user belongs to it, the adapter does not import all the other users that belong to the group.
Note: Users and groups whose names exceed the length limit in the system are not synchronized.
F1-AVALG - Generate AppViewer XML file(s) for Algorithm data (includes javadocs). This is code generation as well.
Generate AppViewer XML file(s) for Batch Control. This is useful for run book information.
Generate AppViewer XML file(s) for Maintenance Object data.
Generate AppViewer XML file(s) for Table/Field data.
Generate AppViewer XML file(s) for To Do Type.
The introduction of these batch jobs means you can decide which information is important for your site to display in the AppViewer. For example, if you do not wish to have To Do Types documented, then you can omit that information by not running that job. If you wish to populate ALL the information, then you can use the genappvieweritems command (or genappvieweritems.sh for UNIX).
Consider populating the information only in design and development environments to save disk space. The AppViewer can extend to a number of gigabytes if fully loaded.
Additional java options for the ANT make tool.
Maximum memory size for the ANT make tool.
Minimum memory size for the ANT make tool.
Additional java options for Batch Threadpool workers.
Maximum memory for Batch Threadpool workers.
Maximum permanent generation size for Batch Threadpool workers.
Minimum memory for Batch Threadpool workers.
Additional java options for the J2EE Web Application Server.
Maximum memory for the J2EE Web Application Server.
Maximum permanent generation size for the J2EE Web Application Server.
WEB_MEMORY_OPT_MIN - Minimum memory for the J2EE Web Application Server (Web/Business).
The values for these settings will vary according to your site needs and the JVM vendor used at your site. The following guidelines should be considered when changing these values: The additional java options supported by each JVM vendor are slightly different, to take advantage of specific platform requirements. Refer to the JVM options documentation provided with your JVM; for Oracle/Sun based JVMs refer to http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp. Ensure any options specified are within the constraints and restrictions of the JVM, as setting invalid values may result in failure or unexpected behavior.
Do not specify the -Xms, -Xmx or -XX:PermSize parameters as additional options, as these already have dedicated settings. The following common settings have been used by customers:
TABLE 15 - COMMON JAVA ADDITIONAL OPTIONS
SETTING - USAGE
Use Parallel Garbage Collection.
Bump the number of file descriptors to the maximum (Solaris only).
Use a policy that limits the proportion of the VM's time spent in Garbage Collection before an OutOfMemory error is thrown.
-XX:+UseLargePages - Use large page memory. See Large Memory Pages for more details.
-XX:+HeapDumpOnOutOfMemoryError - Dump heap to file when java.lang.OutOfMemoryError is thrown. Commonly used by Oracle Support if necessary.
-XX:+PrintGC - Print messages at garbage collection.
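In practice such options are supplied together through the relevant additional-options setting for the component. The variable name and option mix below are illustrative only; your installation menu lists the exact setting names for your version:

```
WEB_ADDITIONAL_OPT="-XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGC"
```

Note that -Xms, -Xmx and -XX:PermSize are excluded here because, as stated above, they have dedicated settings.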
Note: The Production Environment Configuration Guidelines whitepaper contains advice for settings for all versions of the Oracle Utilities Application Framework based products.
<host> - The hostname for the Web Application Server.
<port> - The port number allocated to WL_PORT at installation time. To avoid having to specify the port number on the URL, a value of 80 may be used. This value can only be specified once per Web Application Server machine.
<server> - The server context, which can be set using WEB_CONTEXT_ROOT at installation time. This value must be valid for the J2EE Web Application Server and is restricted to a single text value without any embedded blanks or special characters.
Clustering or Managed?
One of the decisions that must be made when dealing with multiple Web Application Servers is whether the servers will be clustered or managed. The attributes of each style are outlined below:
Clustered - A cluster is a group of servers running a Web Application Server simultaneously, appearing to users as if it were a single server (usually managed by a separate administration server). The advantages of using a cluster are that you can manage the servers as a group and that the servers communicate with each other to monitor availability. Clusters can load balance within themselves as they are in constant communication with each other. The disadvantages are that there is an overhead in communication (usually each server uses multicast to communicate with the other servers in a cluster) and each server must use a different IP address and port number, which means clusters effectively operate one server per machine. The figure below summarizes a cluster:
[Figure: clustered Web Application Servers behind a load balancer, coordinated by an Administration Server.]
Managed - Managed servers are a set of Web Application Servers that are independent of each other. They can be housed on a single machine or multiple machines, and on machines of differing size. The advantages of managed servers are that each server can be targeted at specific user groups and managed independently. There is no additional communication between the servers. A separate administration server can manage the servers, but that role can be taken by one of the managed servers if desired. The disadvantages are that load balancing software/hardware housed between the users and the managed servers performs the load balancing, and that deployment must be performed individually. The figure below summarizes managed servers:
[Figure: managed Web Application Servers behind a load balancer, with an optional Administration Server.]
There are no clear winners between clustered and managed Web Application Servers, as the main factors in the decision are:
Amount of hardware - Clustering requires a hardware server per server. Sites where a small number of servers are deployed cannot use clustering.
Maintenance Effort - Clustering can reduce maintenance overhead if there are a large number of servers involved. Managed servers require individual maintenance.
Tolerance for multi-casting - Some sites ban multi-casting as it can be perceived as an unacceptable overhead on the network. Deploying a private network between the servers can minimize this, though it is more expensive.
Flexibility - Many sites use managed servers due to the flexibility of routing particular traffic to particular servers. For example, setting up specific servers for non-call center traffic (e.g. XAI, interfaces, depots).
Whether your site uses clustering or managed servers does not factor into high availability solutions as customers have deployed high availability solutions using either technique.
Clustering and Environmental configuration settings
The configuration files used by the Oracle Utilities Application Framework specify a number of environment-focused settings (e.g. hostnames, ports, file paths etc.). These are used by the runtime of the Oracle Utilities Application Framework to orientate itself to the correct environment. Given these environment settings are embedded in the configuration files, there may be an impact on sites using clustering. To support clustering with embedded environmental settings, the following guidelines are recommended:
Apply Patches - For Oracle Utilities Application Framework V2.2 sites, it is recommended to install Single Fix 8218568 to externalize some of the configuration files outside of the product. At the time of writing, this patch was only available for Oracle Utilities Application
Framework V2.2. Check "My Oracle Support" for the latest information for other versions of Oracle Utilities Application Framework. Sites using Framework V4 and above do not need to apply any additional patches at the time of writing.
Host Name settings - In a clustered environment the hostname used for any configuration setting should be the cluster host or the load balancing proxy used for the cluster. To access a cluster, the users (or servers) need to access a single URL; the host component of that URL should be used for any hostname configuration settings.
Custom Context - In Oracle Utilities Application Framework V4 and above, it is possible to specify a custom URL context for the product at installation time. In a clustered environment the context should be common, and therefore the setting of this value should be the same across all nodes of a cluster.
Port Numbers - As part of the URL used for the product, a port number can be explicitly used. At most sites, port 80 is used for production as it does not need to be specified on the URL by users. In a clustered environment this port should be common, and therefore the setting of this value should be the same across all nodes of a cluster. Most J2EE Application Server vendors insist that all nodes of a cluster have the same port number (but different hostnames).
File Locations - The product requires some knowledge of where environment-specific information is stored. This information is configured to inform the product where specific configuration files and important directories are located. Installing the software in a common location, or in the same location on each node, helps the file locations support clustering.
Note: There are environmental configuration settings in the J2EE Web Application Descriptor (web.xml) and XAI Options screen as well as configuration files covered by Single Fix 8218568.
(I) - Default JMX Port for monitoring the Batch threadpool
(I) - Default JMX port used for Business App Server Monitoring
(P) - Oracle Application Server Request Port for Business App
(P) - OC4J Instance RMI Port for Business App
(P) - OC4J Standalone ORMI Port
(I) - JVM Child process starting Port Number (COBOL products only)
BSN_WASBOOTSTRAPPORT (P) - Bootstrap port (WebSphere)
DBPORT (P) - Database Connection Port
MPLADMINPORT (I) - MPL Administration Port (if MPL available)
OSB_PORT_NUMBER (I) - Port allocated to the Oracle Service Bus interface (if available)
SOA_PORT_NUMBER (I) - Port allocated to Oracle SOA (if available)
WEB_JMX_RMI_PORT_PERFORMANCE (I) - Default JMX port used for Web App Server Monitoring
WEB_OASREQPORT (P) - Oracle Application Server Request Port for Web App
WEB_TCATSHUTPORT (I) - Tomcat Shutdown Port
WEB_WLPORT (P) - Web Server Port
WEB_WLSSLPORT (P) - Web Server Port using SSL (WebLogic)
Legend: P - Port allocated prior to installation of the product; I - Port allocated during installation of the product.
Prior to installation of the product, the database and Web Application Server need to be installed and the ports allocated to these components recorded and provided for the installation of the product (they are indicated with a "P" in the table). Each vendor stores the port definitions in different places; refer to the vendor documentation for more information. When allocating ports (indicated with an "I" in the table) during the installation, the following advice may be useful:
Pick the same port numbering scheme per environment to save time allocating ports. Some sites find using the same last digit for each type of port helpful. For example, ending BSN_RMIPORT in 4 (6504, 7914, 9724, 22034 etc.).
BSN_RMIPORT denotes a starting port; the number indicates the start of the port range, and the JVMCOUNT setting determines how many ports are allocated. Ensure that there are free ports in the range starting from that port number. Note: BSN_RMIPORT and JVMCOUNT only apply to products using COBOL support.
Document the ports used in your documentation or services file for future reference.
Do not allocate ports already in use, as there will be port conflicts and the applications may refuse to work.
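When allocating the "I" ports it can help to verify that a candidate port is actually free on the host before recording it. The sketch below (not part of the product tooling) simply attempts to bind each port; the port range shown is hypothetical:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if the given TCP port can currently be bound on this host.
    // A sketch only: a port that is free now may still be claimed by another
    // process before the product installation binds it.
    static boolean isPortFree(int port) {
        try (ServerSocket ignored = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        for (int port = 6500; port <= 6510; port++) { // example range only
            System.out.println(port + " free: " + isPortFree(port));
        }
    }
}
```

For starting-port settings such as BSN_RMIPORT, check the whole range implied by JVMCOUNT, not just the first port.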
JMX Enablement System Userid - Userid used for logging onto JMX Mbeans
JMX Enablement System Password - Password to be used for the JMX Enablement System Userid
RMI Port for JMX Web - Port number to allocate to JMX for the Web Application Server
This information is added to the spl.properties file in the etc/conf/root/WEB-INF/classes subdirectory for the environment, for the Web Application Server. An example of the applicable settings is shown below:
spl.runtime.management.rmi.port=..
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://hostname:../oracle/ouaf/webAppConnector
jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

The following settings are important to the JMX monitor:
The spl.runtime.management.connector.url.default setting is the JMX URL to be used in the JMX console or JMX browser.
The jmx.remote.x.password.file and jmx.remote.x.access.file settings are the default security setup for JMX. These provide a basic security setup; for more information about the files and alternative security setups, refer to http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html.
The ouaf.jmx.* settings enable individual beans at startup time. These may also be enabled at runtime.
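As an illustration, the connector URL pattern from the settings above can be built and parsed programmatically before handing it to a JMX console or a custom client. The hostname, port and credentials below are placeholders; substitute the values configured in spl.properties and the JMX security files:

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.remote.JMXServiceURL;

public class OuafJmxUrl {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port; use the values from spl.properties.
        String host = "ouafhost";
        int port = 1100;
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/oracle/ouaf/webAppConnector");

        // Credentials correspond to the entries in the files named by
        // jmx.remote.x.password.file / jmx.remote.x.access.file; a client
        // passes them in the connection environment map.
        Map<String, Object> env = new HashMap<>();
        env.put("jmx.remote.credentials", new String[] {"jmxuser", "jmxpassword"});

        System.out.println(url.getURLPath()); // /jndi/rmi://ouafhost:1100/oracle/ouaf/webAppConnector
        // To actually connect (requires the product's JMX port to be running):
        // JMXConnector c = JMXConnectorFactory.connect(url, env);
    }
}
```

A JSR160 console such as JConsole accepts the same URL string directly.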
Once the Web Application Server component is started, the JMX Mbeans defined in this configuration are started, and a JSR160 compliant JMX console or JMX browser can be used to connect to them, with the remote URL and credentials provided as configured above. Within the JMX console or JMX browser a number of specific facilities are available:
It is possible to manage the data within the Web Application Server cache from JMX. In past releases of Oracle Utilities Application Framework this was possible using utility URLs, which required the IT group to log on to the product to issue commands. This is still possible but can be replaced with JMX console commands. This is controlled by the FlushBean Mbean.
It is possible to get environmental information about the Web Application Server Java Virtual Machine (JVM) for support purposes. Again, in past releases this was possible using utility URLs; it is still possible but can be replaced with JMX console commands. This is controlled by the JVMInfo Mbean.
It is possible to get internal JVM information about the Web Application Server using the JVMSystem Mbean. This is an extension of the base Java MXBeans (http://java.sun.com/javase/6/docs/api/java/lang/management/package-summary.html). By default these are disabled; they can be enabled by executing the enableJVMSystemBeans operation from the BaseMasterBean. When enabled, the following additional areas can be monitored via JMX for the Web Application Server:
Class Loading statistics
Memory statistics
Operating System statistics (statistics vary by platform)
JVM Runtime information (additional to JVMInfo)
Thread statistics - statistics on individual java threads
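The areas listed above correspond to the standard java.lang.management platform MXBeans, so outside the product the same categories of statistics can be read directly (a sketch of what the JVMSystem data represents, not the product's own Mbean):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    // Reads the same categories of JVM statistics (class loading, memory,
    // threads) that the JVMSystem Mbean exposes, via the platform MXBeans.
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("Loaded classes : " + cl.getLoadedClassCount());
        System.out.println("Heap used (MB) : "
                + mem.getHeapMemoryUsage().getUsed() / (1024 * 1024));
        System.out.println("Live threads   : " + threads.getThreadCount());
    }
}
```

Operating System and Runtime information are available the same way via ManagementFactory.getOperatingSystemMXBean() and getRuntimeMXBean().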
Note: No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browser so care should be taken when issuing commands.
Run the initialSetup utility to reflect the change. This configuration will be added to the Oracle WebLogic configuration.
The issue then becomes whether the infrastructure provides such an interface for the product to hook into. There are a number of patterns in this area:
Customers implement an identity management solution to manage passwords, expiry and rules. In this case the implementation needs to interface to the identity management solution by calling its password facilities. The J2EE Web Application Server is then interfaced to the identity management solution, or its related security store, to provide the authentication mechanism.
Customers link the security store for authentication directly to the security configuration of the J2EE Web Application Server. In this case, the J2EE Web Application Server provides the interface to the password change facility.
In the latter case, if you are a customer using Oracle WebLogic, there is an example JSP available on Oracle TechNet (registration required) under Code Samples (project S20) that allows an application to change passwords, irrespective of the security store used. This example can be altered to suit your site's standards and linked to the product as a custom JSP via a navigation key to attach it to the appropriate menu.
<crit> Error occured while running java -Dweblogic.RootDirectory=/splapp weblogic.security.Encrypt : Output is
Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/security/Encrypt
Caused by: java.lang.ClassNotFoundException: weblogic.security.Encrypt
Could not find the main class: weblogic.security.Encrypt. Program will exit.
To fix this issue, set WEB_SERVER_HOME using the configureEnv[.sh] utility (or set WL_HOME) to access the appropriate security encryption classes.
WL_HOME is used by Oracle Utilities Application Framework V2.x. WEB_SERVER_HOME is used by Oracle Utilities Application Framework V4.x and above.
Corrupted SPLApp.war
By default, the product installer uses archive mode for the product deployment (this is true for both Oracle WebLogic and IBM WebSphere, though Oracle WebLogic also supports expanded mode). When using archive mode, the product utilities build the product into a set of J2EE WAR and EAR files prior to deployment. The WAR and EAR build is performed by the initialSetup[.sh] utility. Refer to the Server Administration Guides or Configuration and Operations Guides for the product for a detailed description of the options and operations supported by this utility. If, for any reason, the WAR or EAR files are not built completely, and are therefore corrupted, the product startup may abort. This can manifest in a number of error messages depending on the nature of the corruption:
<info> ERROR: /splapp/applications/SPLApp.war war file does not exist. Problem with the environment. Exiting.
or
weblogic.management.DeploymentException: Unexpected end of ZLIB input stream at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:189)

To resolve this issue, rerun the initialSetup[.sh] utility to recompile the WAR and EAR files.

Web Application Server Logs
The Server Administration Guide or Operations and Configuration Guide for your product outlines the product-specific logs, including their formats and locations. Given the product runs within a J2EE Web Application Server, that server also has a set of log files that can be used for diagnostic information. The table below outlines the default set of J2EE Web Application Server log files:
TABLE 18 WEB APPLICATION SERVER LOGS ORACLE WEBLOGIC ($SPLEBASE/LOGS/SYSTEM) IBM WEBSPHERE ($WAS_HOME/PROFILES/APPSVR01/LOGS/<SERVER>)
Refer to the J2EE Web Application Server documentation for details of the logs and their format.
By default, IBM WebSphere loads its own classes ahead of any classes used by products running within WebSphere. If there is a conflict or a different version of a class, the default behavior is that the IBM WebSphere versions are used, which may cause problems if the product uses a different version (such as a newer version of the class libraries). To avoid issues between the classes provided with IBM WebSphere and any Oracle Utilities Application Framework based product, it is highly recommended to set the class loading within IBM WebSphere to load parent (i.e. WebSphere) class libraries last.
Note: The Oracle Utilities Application Framework does not include its own class loader; it uses the class loading options in the J2EE Web Application Server.
To set this value, navigate to the Enterprise Applications > [Web Enterprise Application Name] > Manage Modules option within the IBM WebSphere console. Select Class Loader Order and then choose "Classes loaded with local class loader first (parent last)" to set the correct value. If this setting is not set, then startup or runtime errors may occur, similar to the one below:
[12/28/10 23:14:31:854 PST] 00000000 FfdcProvider W com.ibm.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on /opt/IBM/WebSphere7064/AppServer/profiles/AppSrv01/logs/ffdc/server8_35c035c_10.12.28_23.14.31.7522146543044581884850.txt com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated 1341
[12/28/10 23:14:31:896 PST] 00000000 webapp E com.ibm.ws.webcontainer.webapp.WebApp notifyServletContextCreated SRVE0283E: Exception caught while initializing context: {0} java.lang.NoSuchMethodError: com/ibm/icu/math/BigDecimal.<init>(Ljava/math/BigDecimal;)V
at com.splwg.base.support.sql.NumericSQLTypeHelper.getFromResultSet(NumericSQLTypeHelper.java:50)
JNDI Issues with EJB
Note: Oracle Utilities Application Framework V4.x and above uses Enterprise Java Beans (EJB) for the Business Application Server. This advice therefore only applies to those versions.
By default, the deployment process sets the product configuration settings within IBM WebSphere correctly. If there is an issue with the deployment, for any reason, the EJB definitions are the most likely to be set incorrectly. Typically an error similar to the one below is displayed:
12/28/10 23:14:40:039 PST] 00000000 WASSessionCor I SessionContextRegistry getSessionContext SESN0176I: Will create a new session context for application key default_host/ouaf/help [12/28/10 23:14:40:103 PST] 00000000 webcontainer I com.ibm.ws.wswebcontainer.VirtualHost addWebApplication SRVE0250I: Web Module null has been bound to default_host[*:9081,*:80,*:9444,*:5065,*:5064,*:443,*:9083]. [12/28/10 23:14:40:152 PST] 00000000 ApplicationMg A WSVR0221I: Application started: SPLWeb-server8 [12/28/10 23:14:40:176 PST] 00000000 CompositionUn A WSVR0191I: Composition unit WebSphere:cuname=SPLWeb-server8 in BLA WebSphere:blaname=SPLWeb-server8 started. [12/28/10 23:14:40:200 PST] 00000000 ContainerHelp E WSVR0501E: Error creating component com.ibm.ws.runtime.component.CompositionUnitMgrImpl@67a067a com.ibm.ws.exception.RuntimeWarning: javax.naming.NameAlreadyBoundException: The com.splwg.ejb.service.Service interface of the SPLServiceBean bean in the spl-servicebean-4.1.0.jar module of the SPLService-server8 application cannot be bound to the ouaf/servicebean name location. The com.splwg.ejb.liteservice.api.ServiceRemote interface of the TUGBULiteServiceBean bean in the spl-servicebean-4.1.0.jar module of the SPLService-server8 application has already been bound to the ouaf/servicebean name location.
To correct this, the WAR/EAR files can either be rebuilt and redeployed using the initialSetup[.sh] utility, or the target JNDI name definition for the default EJB module TUGBULiteServiceBean can be set correctly (to <Web Context Root>/TUGBULiteServiceBean, where <Web Context Root> is the context assigned for the environment URL [usually ouaf]).
CORBA Transient Security Errors
In IBM WebSphere, a number of users are set up during the initial installation process. These users are:
A user to administer the product via the IBM WebSphere console (by default, wasadmin).
A user for the Web Application Server to securely connect to the Enterprise Java Beans on the Business Application Server (by default, webjndi).
If these users are not set up correctly (directly or indirectly), then the product will experience an org.omg.CORBA.TRANSIENT error thrown by IBM WebSphere. To correct this, navigate to the Environment > Naming > CORBA Naming Service Users option and ensure both of the users above (in particular webjndi) have the following CORBA roles:
Cos Naming Read
Cos Naming Write
Cos Naming Create
Cos Naming Delete
The userid from the product is passed as part of the application context in each transaction between the browser client and the Web Application Server. If the security components are not configured correctly, then an error stating "No user profile found for user='' (though authenticated to web server as 'null')" can occur. For example:
0000001a SystemOut O - 006177-10-1 2011-05-03 11:39:03,681 [WebContainer : 1] WARN (web.services.InitializeUserTag) No user profile found for user='' (though authenticated to web server as 'null')
com.splwg.shared.common.ApplicationError: (Server Message)
Category: 11001
Number: 902
Call Sequence:
Program Name: InitializeUserService
Text: User does not have Display Profile.
Description: The current user does not have a valid Display Profile. Please refer to the Display Profile setting on the User record.
Table: null
Field: null
at com.splwg.base.domain.web.InitializeUserService.read(InitializeUserService.java:71)
at com.splwg.base.support.pagemaintenance.AbstractPageMaintenance.readItem(AbstractPageMaintenance.java:91)
To resolve this, ensure that the security configuration of IBM WebSphere is correct. At a minimum, the following should be enabled in the relevant security section of the IBM WebSphere console:
- Enable administrative security
- Enable application security
- Enable LTPA
The Oracle Utilities Application Framework Architecture Guidelines whitepaper discusses the various architectures available with their individual advantages and disadvantages.
By default, there are two (2) COBOL-based Child JVMs spawned by the product for each of the online, XAIApp and background processing components of the product. This is the minimum recommended for availability and performance of the product in normal conditions. It is worth considering more instances of the Child JVMs if either of the following situations occurs:
- The site has a large number of users (>800) who use a large proportion of the product over the business day. In this case there are many potential calls to COBOL modules by different users, and having more Child JVMs available helps avoid out-of-memory conditions. This situation can also be mitigated by the presence of more than one Web Application Server, as each Web Application Server has its own Child JVMs.
- The product functionality used at the site spans a majority of the product, so the number of unique COBOL modules that may be called can be higher than expected, and extra Child JVMs may be required to avoid out-of-memory situations.
The default number of Child JVMs is sufficient for most non-production situations. Refer to the Production Environment Configuration Guidelines for production-level settings.
Child JVMs reuse existing loaded modules as much as possible. An individual module that has been called is only attached once per Child JVM at any given time. An installation parameter in the Environment Configuration, Release Cobol Thread Memory, controls this behavior; this value should be set to true. This can be overridden for each mode of access (online, batch and XAI) by specifying the spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall parameter in the spl.properties file. Note: Refer to the Batch Best Practices for advice pertaining to the optimal setting of this parameter for background processes.
To reclaim memory of the COBOL objects, the Child JVM must be shunned (stopped and restarted) on a regular basis. This is known as brute force memory management. The Oracle Utilities Application Framework allows control of this in the relevant spl.properties file by setting the following parameters:
PARAMETER COMMENTS
spl.runtime.cobol.remote.jvmMaxLifetimeSecs
Maximum lifetime, in seconds, of a Child JVM before it is automatically shunned.
spl.runtime.cobol.remote.jvmMaxRequests
Maximum number of requests processed by a Child JVM before it is automatically shunned.
As soon as either tolerance is met, the Child JVM is shunned automatically. This does not necessarily occur straightaway, as the framework waits for any uncompleted outstanding work in the individual Child JVM to finish. Because the product uses more than one Child JVM at any time, availability is not compromised: at least one Child JVM is active at any time. The default values for these parameters are sufficient for most sites. Refer to the Batch Operations and Configuration Guide/Batch Server Administration Guide and Operations and Configuration Guide/Server Administration Guide for your product for the default values and additional advice on this facility. With the above facilities, the COBOL memory within the Child JVM can be managed by the Oracle Utilities Application Framework to help avoid memory issues.
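As a sketch, the shunning tolerances and the thread memory release override might appear together in spl.properties as follows. The values shown are illustrative only, not recommendations; consult the Server Administration Guide for your product for the shipped defaults.

```properties
# Illustrative values only -- verify the defaults for your product version.
# Shun (stop and restart) a Child JVM after this many seconds of life...
spl.runtime.cobol.remote.jvmMaxLifetimeSecs=14400
# ...or after this many serviced requests, whichever tolerance is met first
spl.runtime.cobol.remote.jvmMaxRequests=20000
# Release COBOL thread memory after each call for this access mode
spl.runtime.cobol.remote.releaseThreadMemoryAfterEachCall=true
```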
Cache Management
One of the features of the Oracle Utilities Application Framework is the implementation of a level 2 cache within the architecture to provide performance benefits for commonly used configuration information. Generally the cache is managed by the Oracle Utilities Application Framework automatically, with little or no interaction from operators. By default, the cache is reloaded as needed or every eight (8) hours, whichever occurs first. Some elements of the cache, such as security information, are refreshed on a more frequent basis (every 30 minutes).
There are a number of cache management utilities that manually cause all or parts of the cache to refresh. These utilities are documented in the Operations and Configuration Guide/Server Administration Guide for your product. While these utilities are rarely used in production, they can be used by appropriately authorized personnel to make sure the cache contains the correct information. Typically a manual refresh is required if configuration data is changed and needs to be reflected as soon as possible.
SETTING COMMENTS
JMX Enablement System Userid
Userid used for logging onto the JMX Mbeans
JMX Enablement System Password
Password to be used for the JMX Enablement System Userid
RMI Port for JMX Business
Port number to allocate to JMX for the Business Application Server
This information is added to the spl.properties file in the etc/conf/service subdirectory for the environment, for the Business Application Server. An example of the applicable settings is shown below:
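A sketch of the applicable settings follows. The property names and values shown here are illustrative and should be verified against the Server Administration Guide for your product version:

```properties
# Illustrative only -- verify property names and values for your version.
# RMI port allocated to JMX for the Business Application Server
spl.runtime.management.rmi.port=1100
# JSR160 remote connector URL used by JMX consoles/browsers
spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:1100/oracle/ouaf/ejbAppConnector
# Enable the PerformanceStatistics Mbean at startup
ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled
```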
The ouaf.jmx.* settings enable individual Mbeans at startup time; they may also be enabled at runtime.
Once the Business Application Server component is started, the JMX Mbeans defined in this configuration are started, and a JSR160-compliant JMX console or JMX browser can be used to connect to them using the remote URL and credentials configured above. The only Mbean available with the Business Application Server is the PerformanceStatistics Mbean, which collects object performance data for analysis. For customers familiar with the Oracle Tuxedo product, this facility is similar to the txrpt facility available for performance analysis. The statistics are collected by the Mbean from the time the Mbean is enabled until the statistics are reset. By default, the Mbean is enabled at startup time but may be disabled (or re-enabled) at any time using the disableMbean or enableMbean operations from the PerformanceMbeanController Mbean. When using this Mbean there are a few recommendations:
- The completeExecutionDump operation returns a CSV of the performance statistics of individual application services to the JMX console or JMX browser. This represents the current state of the statistics at that time.
- The reset operation resets the statistics within the Mbean to restart collection. This operation is handy for measuring performance over a selected period.
- There are other operations and attributes that return individual value information that may be of interest. Refer to the Server Administration Guide provided with your product for a detailed description of the available statistics.
Note: No confirmation (i.e. Are You Sure?) dialog is provided with most JMX consoles or JMX browsers, so care should be taken when issuing commands.
Replicating the txrpt statistics
One of the features customers of past V1.x releases of Oracle Utilities Customer Care And Billing used to gather performance data was the txrpt facility within Oracle Tuxedo. The utility would take performance data gathered from every service call and produce summary statistics per hour: the number of calls and the average response time for each defined service. The txrpt utility collected the statistics from log files enabled in the Oracle Tuxedo configuration. This information was useful in tracking the performance of individual services within the product against a site's SLA. With the advent of Oracle Utilities Application Framework V2.x, the removal of Oracle Tuxedo from the architecture meant that this information was no longer as easily available for collection. In Oracle Utilities Application Framework, the implementation of the PerformanceStatistics Mbean allows collection of performance information in a similar fashion to txrpt. To achieve the same results as txrpt, the following should be performed:
- On the hour boundary, execute the completeExecutionDump operation from your JMX console or JMX browser to extract and save the CSV information to a file. The file should be named with the date and time of the collection for reference purposes.
- After collection of the statistics has been completed, execute the reset operation from your JMX console or JMX browser.
- The information in the files can then be collated according to the analysis required by your site. The CSV can be loaded into a database for analysis, or into your site's preferred spreadsheet or analysis tool. Remember that the date and time of the collection is not recorded in the data, only the data itself.
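The collation step can be scripted. The sketch below (pure Python standard library) merges several saved CSV extracts into per-service call counts and call-weighted average response times. The column names service, calls and avg_ms are assumptions about the dump layout, not the actual header produced by the completeExecutionDump operation; adjust them to match your Mbean's output.

```python
import csv
from collections import defaultdict

def collate(files):
    """Merge PerformanceStatistics CSV extracts into per-service totals.

    Assumes each file has columns: service, calls, avg_ms (hypothetical
    names -- match them to your Mbean's actual CSV header).
    """
    totals = defaultdict(lambda: {"calls": 0, "total_ms": 0.0})
    for path in files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                n = int(row["calls"])
                totals[row["service"]]["calls"] += n
                # Weight each file's average by its call count
                totals[row["service"]]["total_ms"] += float(row["avg_ms"]) * n
    return {svc: {"calls": t["calls"], "avg_ms": t["total_ms"] / t["calls"]}
            for svc, t in totals.items() if t["calls"]}
```

Because the collection timestamp is not recorded in the data itself, encoding it in the file name (as suggested above) is what makes per-period analysis possible later.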
Note: While this process can be performed manually using a JMX console such as jconsole, it is recommended that the JMX console or JMX browser automate the collection in the background. Refer to the documentation of your JMX console or JMX browser to configure this. This facility is flexible for a number of reasons:
- The collection period is not limited to hourly, as txrpt was. It can be increased or decreased according to your site standards; for example, you might want to collect the data every 10 minutes.
- The statistics are live and can be queried regardless of the collection process.
- The level of information is higher than the original txrpt. The following additional information is collected and summarized:
  - The data is also summarized by the type of transaction performed. This allows the site to assess the performance of reads, updates, deletes, inserts etc. separately.
  - The last transaction recorded is detailed, including the user. This information is useful for checking against other statistics to assess performance at the present moment.
  - Statistics are already calculated by the utility prior to analysis. The txrpt utility only collected the average; this facility collects the average, minimum (best case) and maximum (worst case) performance statistics in the collection period.
Note: This is a rule of thumb and may NOT apply to the traffic patterns at your site. It is recommended to start with an agreed value and then monitor to optimize the values as necessary. Refer to the Batch Operations and Configuration Guide/Batch Server Administration Guide and Operations and Configuration Guide/Server Administration Guide for your product for additional advice on this facility.
To minimize this, the Oracle Utilities Application Framework has introduced two new settings in the spl.properties file for the Business Application Server that define the dimensions of the XPath statement cache. These settings allow the site to control the XPath cache, supporting caching of commonly used XPath statements while allowing optimal specification of the cache size (to help prevent memory issues). The settings are shown in the table below:
TABLE 20 XPATH CACHE SETTINGS
SETTING USAGE
com.oracle.XPath.LRUSize
Maximum number of XPath queries to hold in cache across all threads. A zero (0) value indicates no caching, minus one (-1) indicates unlimited, and other positive values indicate the number of queries stored in cache. The cache is managed on a Least Reused 7 basis. For memory requirements, assume approximately 7k per query. The default in the template is 2000 queries.
com.oracle.XPath.flushTimeout
The time, in seconds, after which the cache is automatically cleared. A zero (0) value indicates the cache is never auto-flushed; a positive value indicates the number of seconds. The default in the template is 86400 seconds (24 hours).
Note: The templates provided with the product have these settings commented out. To use the settings, uncomment the entries in the generated configuration files. In most cases the defaults are sufficient, but they can be altered using the following guidelines:
- If there are memory issues (e.g. out of memory), decreasing the LRUSize or decreasing the flushTimeout may reduce memory issues. LRUSize has a greater impact on memory than flushTimeout.
- If decreasing the value of LRUSize causes performance issues, consider changing only the flushTimeout initially and ascertain whether that works for your site.
There are no strict guidelines on the value for both parameters as cache performance is subject to the user traffic profile and the amount and types of XPath queries executed. Experimentation will assist in determining the right mix of both settings for your site.
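For example, once uncommented in spl.properties, the entries might read as follows (the values shown are the template defaults quoted above):

```properties
# Cache up to 2000 XPath queries across all threads (0 = no caching, -1 = unlimited)
com.oracle.XPath.LRUSize=2000
# Auto-flush the XPath cache every 24 hours (0 = never auto-flush)
com.oracle.XPath.flushTimeout=86400
```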
7 In layman's terms, older cached entries that are not reused are removed from the cache automatically to make room for more frequently used entries or new entries.
It is recommended to use the dbms_stats package for collecting statistics:
- An estimate percentage of 10 percent is generally sufficient.
- Set the degree parameter to a higher level to enable parallel collection of statistics.
- It is suggested to set the block_sample parameter to false.
- The method option while gathering statistics on tables should be set to FOR ALL COLUMNS SIZE AUTO. This ensures that Oracle automatically determines which columns require histograms and the number of buckets (size) of each histogram.
- Gathering statistics separately for indexes is generally faster than using the cascade=true option while gathering table statistics.
- It is recommended not to collect statistics on all the tables in a single batch run at a single point in time. Dividing the tables into multiple groups and executing statistics calculation for each group at different times will minimize any disruption due to statistics calculation.
- Depending on the stability of query performance, the statistics collection frequency can be altered to maintain query performance.
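As a sketch, the recommendations above translate to dbms_stats calls along these lines. The schema, table and index names below are placeholders; substitute objects appropriate to your site and schedule groups of tables separately as advised:

```sql
-- Gather table statistics: 10% sample, no block sampling, parallel degree 4,
-- automatic histogram selection, indexes gathered separately (cascade=>false)
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'CISADM',
    tabname          => 'CI_ACCT',   -- placeholder table name
    estimate_percent => 10,
    block_sample     => FALSE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => 4,
    cascade          => FALSE);
  -- Gather the index statistics in a separate call
  dbms_stats.gather_index_stats(
    ownname          => 'CISADM',
    indname          => 'XT010P0',   -- placeholder index name
    estimate_percent => 10,
    degree           => 4);
END;
/
```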
Note: Additionally, for UTF8 customers, ensure that the spl.runtime.cobol.encoding setting in the spl.properties file is set correctly to display the correct character set.
<Database Users>
-r <ReadRole,UserRole>
-l <logfile>
-h
This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility without any options starts interactive mode.
-r              Overwrite the existing environment identifiers
-q              Silent Installation mode (no confirmations)
-l <logfile>    Optional. Name of the log file.
-h              Help
This command line can be used in site specific DBA scripts or as a standalone command line. Executing the utility without any options starts interactive mode.
Note: The scripts in this section have been designed for the Oracle database only. Sites using DB2 or SQL Server should use the equivalents of these scripts for those databases. Typically, referential integrity in a database is managed by the database itself. In the product this is not so, as the Maintenance Objects contain ALL the business logic, including referential integrity. The reasons for this are varied:
- From a maintenance cost point of view, all the code is in one place, which reduces maintenance effort.
- Databases implement all-or-nothing referential integrity: it is checked whether the data has changed or not, which from a performance point of view potentially wastes time. The Maintenance Objects in the product decide when to enforce referential integrity rules.
- Most of the referential rules in the product are optional. If there is a value in the foreign key field it is checked; if there is no value (blanks, zero or nulls) then the referential integrity is not checked unless it is a mandatory column. This is not possible with database-imposed referential integrity.
- If the database controlled referential integrity, the application would have no control over when it is imposed in the course of a transaction. Maintenance Object controlled referential integrity allows finer control over when referential integrity is enforced in the transaction flow.
- Each database implements referential integrity in a slightly different way. To reduce maintenance costs, code differences are kept to a minimum.
- Maintenance Object enforced referential integrity is more efficient as far as the product is concerned, and translates to superior performance across many database types.
All is not lost though. The Oracle Utilities Application Framework maintains its own data dictionary in the form of meta-data, which is used by the Oracle Utilities Software Development Kit, ConfigLab and Archiving. If you want the data model in a tool (or adorning a large wall), the following process is recommended for generating the data model using the meta-data:
- Export the CISADM schema as a backup using the database export utility.
- Create constraints from the meta-data structure. The two Oracle PL/SQL scripts below can be used to achieve this. The names of the constraints are already documented in the meta-data as well.
- Run the utility and create the constraints in the database.
Function to join
create or replace function join
( p_cursor sys_refcursor,
  p_del    varchar2 := ',' )
return varchar2
is
  l_value  varchar2(32767);
  l_result varchar2(32767);
begin
  loop
    fetch p_cursor into l_value;
    exit when p_cursor%notfound;
    if l_result is not null then
      l_result := l_result || p_del;
    end if;
    l_result := l_result || l_value;
  end loop;
  return l_result;
end join;
/
show errors;
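As a quick sanity check, the join function can be exercised from SQL*Plus before running the constraint generation script. USER_TABLES is used here purely as a convenient source of rows:

```sql
-- Concatenate a few table names into one comma-separated string
SELECT join(CURSOR(SELECT table_name FROM user_tables WHERE rownum <= 3))
FROM dual;
```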
SET serverout ON size 1000000
SET echo OFF
SET feedback OFF
SET linesize 300
spool constraints.sql
DECLARE
  CURSOR c1 IS
    SELECT tbl_name, CONST_ID, REF_CONST_ID, table_name
      FROM ci_MD_CONST, user_indexes
     WHERE CONST_TYPE_FLG='FK'
       AND TRIM(index_name)=SUBSTR(REF_CONST_ID,5,7)
       AND TRIM(tbl_name) IN (SELECT TRIM(table_name) FROM user_tables)
     ORDER BY tbl_name, CONST_ID;
  stmt       VARCHAR2(400);
  field_list VARCHAR2(300);
BEGIN
  FOR r1 IN c1 LOOP
    stmt := 'alter table ' || trim(r1.tbl_name) || ' add constraint ' || trim(r1.const_id);
    dbms_output.put_line(stmt);
    SELECT JOIN(CURSOR(SELECT trim(fld_name) FROM ci_md_const_fld
                        WHERE const_id = r1.const_id ORDER BY seq_num))
      INTO field_list FROM dual;
    stmt := 'foreign key (' || field_list || ')';
    dbms_output.put_line(stmt);
    SELECT JOIN(CURSOR(SELECT trim(fld_name) FROM ci_md_const_fld
                        WHERE const_id = r1.ref_const_id ORDER BY seq_num))
      INTO field_list FROM dual;
    stmt := 'references ' || trim(r1.table_name) || ' (' || field_list || ');';
    dbms_output.put_line(stmt);
  END LOOP;
END;
/
spool OFF;
EXIT;
Empty a copy of the database (truncate the tables). None of the relationships are expressed as constraints in the physical database; this is because ALL the referential integrity (RI) and validation is done in the code-based Maintenance Objects. More importantly, most of the constraints are data-conditional (if there is data in the column, then RI applies; no value, no RI), so a loaded database might actually break "database strict" RI rules. Remove the data to prevent constraint violations using a valid method for the database (for example, TRUNCATE TABLE <tablename> REUSE STORAGE for Oracle et al).
Run the constraints.sql file created in the previous step to create the RI using the CISADM user. Load the data model with the constraints into the tool of your choice. This should build the data model. Note: This may take a while for the WHOLE data model. Then remove the newly created constraints to return the database to its original condition.
set serverout on size 1000000
set echo off
set feedback off
set linesize 300
spool drop_constraints.sql
select 'ALTER TABLE ' || tbl_name || ' drop constraint ' || CONST_ID || ';'
  from ci_MD_CONST
 where CONST_TYPE_FLG='FK'
 order by tbl_name, CONST_ID;
spool off;
@drop_constraints.sql
exit;
Reload the database. You then have the data model in your tool and the database returned to its original state.
Technical Best Practices
June 2011
Author: Anthony Shorten, Principal Product Manager
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com
Copyright 2007-2011, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 1010