
Using Performance Tools

PegaRULES Process Commander v 4.2


© Copyright 2006
Pegasystems Inc., Cambridge, MA
All rights reserved.

This document describes products and services of Pegasystems Inc. It may
contain trade secrets and proprietary information. The document and product are
protected by copyright and distributed under licenses restricting their use,
copying, distribution, or transmittal in any form without prior written
authorization of Pegasystems Inc.

This document is current as of the date of publication only. Changes in the
document may be made from time to time at the discretion of Pegasystems. This
document remains the property of Pegasystems and must be returned to it upon
request. This document does not imply any commitment to offer or deliver the
products or services described.

This document may include references to Pegasystems product features that have
not been licensed by your company. If you have questions about whether a
particular capability is included in your installation, please consult your
Pegasystems service consultant.

For Pegasystems trademarks and registered trademarks, all rights reserved. Other
brand or product names are trademarks of their respective holders.

This document is the property of:


Pegasystems Inc.
101 Main Street
Cambridge, MA 02142-1590

Phone: (617) 374-9600


Fax: (617) 374-9620
www.pega.com

PegaRULES Process Commander


Document: Performance Tools
Software Version 4.2 SP6
Updated: February, 2006
Contents

Performance Tools in the Process Commander Application ........................1


Overview..............................................................................................................1
PAL Usage ..........................................................................................................2
Strategies for Testing Performance....................................................................3
Development Strategy..................................................................................3
QA Strategy ..................................................................................................3
Performance & Scale Testing Strategy........................................................4
Production Strategy ......................................................................................4

Collecting Performance Information from your Process Commander Application ...........................................................................................5
Best Practice Protocol to Prepare for a PAL Review.........................................5
Determining when to use PAL .....................................................................5
Users...................................................................................................... 5

QA.......................................................................................................... 5

Taking PAL Readings...................................................................................6

Understanding PAL Performance Information from your Process Commander Application...................................................................................11
The Theory of PAL........................................................................................... 11
Introduction ................................................................................................ 11
Factors Affecting Performance.................................................................. 11
Required information before using PAL.................................................... 12
PAL Tools......................................................................................................... 14
Performance Tool ...................................................................................... 15
Interactions .......................................................................................... 15

Detail Display ............................................................................................. 18


PAL Features............................................................................................. 19
Add Reading........................................................................................ 19

Add Reading with Clipboard Size ....................................................... 19

Reset Data........................................................................................... 20

Save Data............................................................................................ 20

The PAL Detail Screen .................................................................................... 22


Overview .................................................................................................... 22
Using PAL to understand response time .................................................. 23
Troubleshooting process..................................................................... 23

Interpreting PAL Detail Readings.............................................................. 24


Total CPU Time................................................................................... 25

Total Elapsed Time ............................................................................. 29

Database Access Counts ................................................................... 32

Requestor Summary ........................................................................... 35

DB Trace .......................................................................................................... 37
Preparing a DBTrace................................................................................. 38
Running a DB Trace.................................................................................. 42
DB Trace Data..................................................................................... 44

Example: Using DBTrace .................................................................. 47

Other DBTrace Spreadsheets ............................................................ 49

Global Trace .............................................................................................. 55


GLOSSARY: PAL Readings in Detail ............................................................ 56
Performance Tools in the Process Commander Application

Overview
PegaRULES Process Commander is a very powerful enterprise software product which
can be the base for many different types of applications. These applications can be
designed to handle large volumes of work objects and store large quantities of data. Poor
performance, such as delays in processing, refreshing screens, submitting work objects,
or other application functions, can signal that the design of the application or the setup of
the server can be improved. It is important to be able to quickly pinpoint the source of
such problems.

PegaRULES captures information necessary to identify these inefficiencies or excessive
use of resources in your application. This data is stored in “PAL counters” or “PAL
readings.” PAL stands for Performance AnaLyzer, and is a collection of counters and
timer readings, stored in the requestor, that an application developer could use to analyze
performance issues in a system.

NOTE: The PAL data is generally tracked on the work that is being done by the
application. Depending upon the type of application, this work could involve “cases”,
insurance forms, credit card forms, or a myriad of other items. Thus, in this document, the
generic term “work object” will be used to describe the item being worked on.

This document is divided into two main sections:

• Collecting Performance Information – This section is written for all users, and
describes the method of collecting this information to give to your application
developer

• Understanding PAL Performance Information – This section is designed for
application developers, and describes the different performance tools in detail. It
looks at the specifics of what the PAL readings measure, as well as how to use
them to better understand and improve the performance of your application.

In the second section of this document, the following PegaRULES Process Commander
performance tools are described:

• PALAdvisor – a “wizard” which uses the PegaRULES engine functionality to
analyze the results of the PAL Detail screen and give guidance on improving the
system performance
• Performance (PAL) Detail Screen – the details of all the PAL counters being
tracked in the system, and how to interpret them
• DBTrace – helps developers understand the details of the database interactions.
(It is easy for a database to become a performance bottleneck – this tool raises
the visibility of the database processing.)
• Usage Tracking – tracks some performance counters system-wide

CONFIDENTIAL 1

NOTE: Another good source for system-wide performance information is the System
Console (Monitor Servlet), which is documented in the Administration and Security
Guide.

PAL Usage
PAL is a tool which should be used to gain insight into where the system is spending
resources; use PAL to determine whether there are resource issues that are impacting
performance, or that may begin to do so when more load is added to the system.

PALAdvisor, along with all the other PAL readings, is not meant to give developers a
definitive answer about performance problems. PAL readings highlight processes
which fall outside of the norm. Depending upon how the application is constructed,
there may be good reasons why a particular application has readings at a certain level;
something which in general might be considered too high a reading might be correct for
your application. PALAdvisor gives the developer the capability to label and explain these
readings, as well as investigate problem readings.

PAL is designed to be used both in test (development) and production environments.


• Use PAL readings in production to troubleshoot problems
• Use PAL readings in development to prevent introducing a problem into a
production system, and during the development cycle to catch performance
issues early on in the application’s construction

The primary user of PAL is an application developer using it (as described above) in a
development environment. However, other users might include:

Role              Use PAL for

QA Analysts       Testing – run through a script, and take one reading at the
                  end of the script process, which goes through the
                  PALAdvisor process to check performance efficiency

System            Troubleshooting production systems
Administrators

End users         These will be the first people to see problems in production.
                  They can use PAL to gather information about the problems
                  seen in production and forward the data for analysis to the
                  application developer, who may not have access to recreate
                  the problem in production.


Strategies for Testing Performance


PAL is designed to be used during all the major steps in the life cycle of an application. It
is designed to provide the developer with insight into the performance profile of the
application processes that they are building during the time that they are building them.
PAL is also designed to be used during the testing stages for QA, as well as for
performance and scale testing. Finally, it is available for use in production, to help identify
production issues.

NOTE: For most business processes, it may be necessary to run them through
once before taking PAL readings. The first time any process is run, Rules Assembly
of the Rules used in the process may be occurring, which will skew the performance
numbers. PAL readings should always be taken for processes where Rules Assembly
has already occurred (since it should only occur once in a properly-designed system). If,
after the process has been run once, Rules Assembly seems to keep occurring, that would
be an issue for further investigation.

Development Strategy
During development, PAL should be used in conjunction with other PegaRULES Process
Commander tools such as DBTrace and the Monitor Servlet to evaluate and project the
performance implications of the particular business process under construction. As a
developer begins iteratively creating their business process, they would collect PAL
information for every user screen in the process, so that as the process is changed, the
developer can see how the performance profile changes, and can therefore quickly
identify if a given change has created positive – or negative – performance ramifications.
Every time a change is made, a new set of PAL readings would be taken; each set would
be saved to compare with the prior and next set.

Developers would not necessarily use PALAdvisor here (as that is more of an overall
application view of performance); they would be using the Detail Screen to focus only on
the process they are currently developing.

QA Strategy
PAL, and particularly the PALAdvisor, are designed to be used in the QA cycle to try to
catch performance issues that can be identified during the normal functional testing. Most
QA is done by running “scripts” to test the various functional areas of an application; there
may be a number of different business processes that different types of users would
perform in production. For each of these processes, QA should run their test scripts until
no errors occur in the process. Once an error-free test is achieved, then the QA engineer
should take a PAL reading, perform the business process end-to-end, take another PAL
reading, and invoke PALAdvisor, to get a sense of the performance implications of that
particular business process from start to finish. This sequence should be followed for
each business process in the application, to give a perspective of the aggregate
performance of the application.


Performance & Scale Testing Strategy


Scale testing should be done to verify that as more users are added to the system, the
performance for a specific user does not degrade. This type of testing measures resource
contention.

Begin this testing by tracking a single isolated user on the target PegaRULES system (i.e.,
only this user should be using this system right now). As with the QA test strategy above,
follow the steps below for this isolated user:

1. Identify a specific business process to be tested in the application.
2. Before running the process, take a PAL detail reading.
3. Run the process end-to-end.
4. Take another PAL detail reading.

This will give a performance baseline for this single user in the target hardware
environment.

Repeat the above process as additional users are added to the system. For example, at
100 users, run the business process for one of these users and take the same PAL
readings; compare these to the baseline readings for the isolated user. For one user in a
perfect system, the PAL readings should not change (as the readings are based on one
requestor). If the system is well crafted, the number of users should not materially impact
the performance profile of the application on an average user basis.

At certain user levels, performance will be affected when resources are not available; at
this point, it may be necessary to set the JVM or other system resource levels higher.
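
The baseline-versus-loaded comparison described above can be sketched as a quick calculation; the reading names and values below are hypothetical, not actual PAL output:

```python
# Minimal sketch: compare a loaded-system PAL delta against the
# single-user baseline. Reading names and values are hypothetical.
baseline = {"Total CPU Time": 1.2, "Total Elapsed Time": 1.5}
at_100_users = {"Total CPU Time": 1.3, "Total Elapsed Time": 3.9}

for name in baseline:
    change = at_100_users[name] - baseline[name]
    pct = 100.0 * change / baseline[name]
    print(f"{name}: {pct:+.0f}% vs. baseline")
```

Elapsed time growing sharply while CPU time stays nearly flat is the classic signature of resource contention – requestors waiting rather than working.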

Production Strategy
The production model is similar to the QA model:

1. Identify a specific business process to be tested in the application.
2. Before running the process, take a PAL detail reading.
3. Run the process end-to-end.
4. Take another PAL detail reading.

Use PALAdvisor to do an analysis of these readings.

Collecting Performance Information from your Process Commander Application

Best Practice Protocol to Prepare for a PAL Review
This section details the process of gathering performance data for analysis. This data can
then be sent to the application developer for study.

NOTE: It is not necessary to be an application developer to collect this information;
Process Commander makes it easy for anyone to run the PAL collection process and
track performance.

It is necessary to have a portal that allows the user access to the Tools gadget, in order to
take readings; the user must also have the rights to run the workflow being tested.

Determining when to use PAL


Users
Users should follow this process for collecting PAL data when they determine there is a
performance issue. The system may be slow in responding, or system errors (“Out of
memory”) may be occurring.

NOTE: In order to complete this process, users require access to the PAL tool, which is
labeled Performance under the Tools gadget.

QA
QA should take PAL readings as an ongoing part of their testing process in the
development of a new application. A User Scenario (use case) should be set up, and
readings taken as described in the next section. (NOTE: This User Scenario must be
realistic – i.e., it must be something that the users of the system would do regularly – or
the PAL readings will be meaningless. An example of a common, realistic User Scenario
might be Open New Work Object.) QA may discover that, in order to completely test
performance for the application, more than one User Scenario must be created.


Taking PAL Readings


Important: Before beginning to take PAL readings, go through the scenario once with no
PAL or screen captures being taken. It is necessary to go through the scenario the first
time in order to verify that Rules Assembly has occurred on all the Rules being used for
the scenario. (Since the Rules need to be assembled the first time a scenario is used,
taking PAL readings on this process will give misleading readings on the system
efficiency.)

1. Take Baseline Reading

Once the first run-through of the scenario has been completed, open PAL to create the
first reading. This will give a baseline from which the actions in the scenario may be
measured.

From the Tools menu, click on Performance. This will open the Performance window,
with the PAL Summary statistics.

2. Start Process to be Measured

Begin the process being measured by initiating the first action. In this example, a General
Task was opened.

After the first step in the process:

a. take a PAL reading (click on the Add Reading link to add the DELTA)
b. take a screenshot of the first screen (make sure to include the entire browser) and
put it into a document.


3. Continue through every step of the process.

At every step, follow the above procedure:

a. complete the step
b. take a PAL reading
c. take a screenshot

Important: The reading must be taken after the step has fully completed, and all the
screens are fully generated.

These screenshots must be taken for every step, for every PAL review that is done. They
will be examined by the application developer for consistency between PAL tests, to make
sure that the procedures are exactly the same, so that the PAL numbers are valid when
compared.


4. Save the data

After all the readings have been made (and all the screen shots taken), click on Save
Data in PAL.

The Save Data link will create a .csv file. Give the file a meaningful name and save it.
The file may then be viewed in Excel.

Send this CSV file and the document with the screen shots to the person doing the PAL
analysis.
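
Because the exported file is plain CSV, it can also be loaded programmatically for analysis. The sketch below uses Python's csv module on a small inline sample; the column labels shown are hypothetical – use the headers actually present in your exported file:

```python
import csv, io

# A tiny inline sample in the general shape Save Data might produce;
# the real column labels come from your own exported file.
sample = io.StringIO(
    "Type,Interaction Count,Total CPU Time\n"
    "INIT,16,0.85\n"
    "DELTA,12,0.40\n"
    "FULL,28,1.25\n"
)
rows = list(csv.DictReader(sample))

# For example, pull out just the delta readings for run-to-run comparison:
deltas = [r for r in rows if r["Type"] == "DELTA"]
print(deltas[0]["Total CPU Time"])  # prints "0.40"
```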

Understanding PAL Performance Information from your Process Commander Application

The Theory of PAL

Introduction
Once the PAL data has been collected for an application, it is necessary to analyze the
information. Process Commander gives the application developer a number of
performance tools for this analysis.

• PALAdvisor
• the PAL Detail Screen
• DBTrace
• Usage Statistics

This section begins with an overview of what sorts of factors might affect an application’s
performance, and then describes each of the above tools in detail.

Factors Affecting Performance


There are a number of factors which may affect an application’s performance. The
availability of resources is central to performance issues – either consumption of
resources (resources which may run out, such as memory space), or contention for
resources (too many applications trying to do the same thing, such as accessing a database).

There are four resources that can be consumed by an application:


• memory
• CPU
• disk I/O
• network

Resources can contend due to:


• single-threaded resources
• saturation of resources


In addition, other factors may impact performance:


• Garbage collection – please refer to the JVM vendor’s documentation for details
on the most efficient garbage collection setup
• Agents – please reference the Agents tech note

Required information before using PAL


There are several facts that a developer must know about their application and their
system in order for the PAL readings to be meaningful.

1. The number of processors on the server, and the speed of each processor.

Many of the PAL readings measure CPU time, usually reported in seconds. CPU time is
used in PALAdvisor to help normalize readings across server machines (to make sure that
comparisons are accurate). A processor that runs at (for example) 1GHz would process
half the work in a second that a processor running at 2GHz would accomplish; therefore,
“one second” of CPU time on the 1GHz processor would not be equal to one second on
the 2GHz machine. This inequality must be taken into account when measuring
performance.

In addition, the number of CPUs in the machine must be taken into consideration. If
servers have different numbers of CPUs, then comparing the capacity of these servers
requires some calculation.

Example:

A development/test server may have a 2GHz processor, but only one CPU.
The production system may have a 1GHz processor, but four CPUs.

The development machine runs a process that requires one second of CPU time. With
one CPU, the maximum number of users who can do this operation would be 60 in a
minute (60 seconds/minute).

The production machine is slower, so it takes two seconds to run the same process (only
a 1GHz processor). However, there are 4 CPUs, so 120 users may run this process in a
minute. The production system has twice the capacity, but the response time is degraded.
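
The capacity arithmetic in this example can be checked with a short calculation (the helper function is ours, for illustration only):

```python
def capacity_per_minute(num_cpus, cpu_seconds_per_process):
    """Maximum number of users who can run a purely CPU-bound
    process in one minute on the given hardware (illustrative)."""
    return num_cpus * 60 / cpu_seconds_per_process

# Development server: one 2GHz CPU, process takes 1 second of CPU time.
dev = capacity_per_minute(1, 1)    # 60.0 users/minute

# Production server: four 1GHz CPUs, same process takes 2 seconds.
prod = capacity_per_minute(4, 2)   # 120.0 users/minute

print(dev, prod)  # production has twice the capacity, but slower response
```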

2. Application Type and Usage Profile

It is necessary to understand what the application is doing, and how it is used – how
many work objects are handled per hour, how many users are simultaneously using the
system – in order to add context to the PAL readings. (For example, a call center where
each person handles 20 work objects per hour will have different performance
requirements than some kind of dispute center, where people might handle one item per
hour.)

The application developer must know what factors they are trying to tune in the system,
which could include:

• business transaction arrival rate – the rate at which the users expect the
business transactions (work objects) to arrive. It is important to understand the
business transaction arrival rate before defining a set of load test scenarios; in
particular, the types of business transactions being executed, and the proportional
mix of those transactions (for example, the application expects to run 200 call
entries a day, or 25 per hour/one every 2 ½ minutes, as well as 10 manager
reports in the morning)
• response time – the speed at which the system completes the processing
requested by the user and is available for the user to begin their next work (for
example, how long the user has to wait for a New Work Object to open)
• resource utilization – the efficiency with which the application completes the
processes, and the number of simultaneous processes that may be available.
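
The arrival-rate arithmetic in the example above can be verified quickly, assuming an eight-hour workday:

```python
# 200 call entries per day, over an assumed 8-hour workday:
per_hour = 200 / 8             # 25.0 entries per hour
minutes_apart = 60 / per_hour  # 2.4 minutes, i.e. one every ~2 1/2 minutes
print(per_hour, minutes_apart)
```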

3. Network Architecture

When assessing performance of applications, an understanding of the network
architecture between the application server and the client is necessary. If the application
server is on the same local area network as the client, the application workflows will be
tuned differently, and may have different design criteria, than if the client and application
server are separated on different LANs or WANs.
• A Local Area Network works most efficiently with traffic composed of lots of little
packets
• A Wide Area Network works most efficiently with traffic composed of few but large
packets (such as FTP)

4. Performance expectation

Along with what kind of work the application is doing, it is necessary to understand the
expectation of the users regarding performance. It may be that a system which is running
at its absolutely most efficient speed might refresh a work object in 10 seconds (due to the
complexity of the application); if the users expect the system to refresh in one second,
there will be a perceived lack of performance, even if the system is tuned to peak
efficiency. On the other hand, if an application takes two full minutes to process a work
object, but this processing used to take three weeks, that might be considered excellent
performance.

NOTE: If the performance expectation is unknown, begin with the assumption that each
screen should take less than one second to display.

5. Detailed Application Knowledge

PAL readings are taken at many levels in the application, with lots of good data. However,
it is important to know what in the application the numbers point to. For example, the PAL
data may show 200 database reads for a work object. The application developer must be
able to understand the work object structure of the application, in order to find the work
object and actually solve the problem and reduce the database I/O.


PAL Tools
PAL readings are taken for each requestor, and measured on the server (not the client
side of the application). These readings are grouped into several types, signified by
keywords in their labels:

• number/count
• CPU
• elapsed (a.k.a. ‘wall time’)

Number is a count of the number of times a specific action occurs. Example: Rules
executed measures the number of times a Rules-Assembled Rule is executed.

CPU is the amount of CPU processing time, in seconds, that this action takes for this
thread. Example: CPU time compiling Rules measures the amount of CPU time the
system takes to compile the generated Java code for a Rule.

Important: CPU readings are not tracked for systems running on UNIX platforms.
Since PAL readings are taken constantly in the system in order to measure application
performance, they must never impact that performance themselves – PAL should not
consume significant resources while measuring resource use. On UNIX
systems, gathering thread-level CPU data has a significant impact on system
performance; therefore, it was decided not to gather thread-level CPU
information on UNIX, as the reading itself would impact the performance being measured.

Elapsed is the system time, in seconds, that a process takes; this time includes the CPU
processing time. Thus, this time is generally equal to or longer than the CPU time.
Example: Elapsed time compiling Rules measures the amount of elapsed (system)
time the system takes to compile the generated Java code for a Rule.

For each PAL reading, a specific point in the code has been instrumented. For example,
for the reading Activities Executed, when anything in the engine requests an Activity to
run, the Activities Executed PAL counter is acquired and incremented. Likewise, for
every compile of generated Java code that is done, the system acquires the Java
Compilations PAL reading and increments it; it also starts a CPU and an Elapsed timer
for the compile; calls the engine code to compile, and then stops both timers.

PAL readings are gathered at two levels:

• Requestor
• System (Node)

A Requestor is a list of Threads, and is associated with one user ID. When the Reset
Data link is clicked, the Requestor-level data for the user’s requestor will be reset to zero,
whereas the system-level data is unaffected.

Most of the System/Node counters are displayed on the System Console; only a few
(such as System Cache Status or Database time threshold) are displayed in the PAL
Detail Screen.


Performance Tool
In order to view PAL readings in Release 4.2, click on the Tools bar, and then choose
Performance. The Performance screen will be displayed:

PAL displays the system readings as they exist now, summed up to the present. This
information is shown as three types of readings:

• INIT – the first full reading from this PAL display. This reading will never change,
unless the Performance numbers are Reset.
• FULL – the most recent full reading. If there are any Delta readings, the
Full reading will display the Init reading plus the Delta readings. The Full reading
shows the Performance numbers summed up to the present.
• DELTA – the difference between the current full reading and the last full reading.

Interactions
An interaction is counted whenever the browser or the requestor makes a request of the
server. Thus, each request/response of the server is an interaction. A business process
will probably have more than one interaction; a single screen could require many trips to
the server for all the required information. Lookup lists require many interactions. (For
example, the act of starting up PegaRULES in the browser for a user involves about 15
interactions.)

NOTE: The Interaction count does not include static content, such as .jpeg files (like the
signon screen being sent to the client).

Some interactions are required by PegaRULES and can’t be controlled (such as signon),
but many are controllable. The number of interactions for a particular action (Open a Work
Item) may indicate how efficiently the application is designed (if it takes 250 interactions to
open a work item, this is an inefficient or overly complex form).

Interactions are the lowest common denominator to measure and analyze the results of a
PAL trace. Developers may find it useful to divide a PAL reading by the number of
interactions comprising it, to get the counts per interaction, for comparison purposes with
other PAL traces.

NOTE: Running PAL itself creates one interaction. Thus, if Add Reading is clicked
without doing any other work in the system, the Interaction # and Interaction Count will
increment by one.

The interaction numbers may also be used to track and compare the PAL information. For
example, if a user takes different PAL readings when gathering information for analysis,


and also runs a DBTrace, then the interaction numbers in the Trace can be matched to
the PAL readings during the analysis.

The interaction numbers identify what data is in the PAL readings for INIT, FULL, and
DELTA. For example, after signing in to PegaRULES but before doing any work, clicking
on Performance to display the PAL data will show just an INIT and a FULL reading. The
number of interactions is 16, which is 15 for signon and one for PAL, and the readings for
INIT equal the FULL readings because no deltas have been added.

After some work has been processed, a DELTA reading can be added (using Add
Reading).

• The number of Interactions (Int #) for the INIT reading will stay the same, as the
initial reading is there to serve as the baseline.
• The number of interactions for FULL shows 28, as that is how many interactions
there have been for this requestor.
• Interactions for the DELTA reading show 28, as it measures up to the full number
of interactions for this requestor; however, the Interaction Count is 12, showing
the difference between the INIT reading (at 16) and the FULL reading (28).

Clicking on the FULL, INIT, or DELTA labels will show detailed PAL data for these
readings.

• Clicking on FULL will display PAL counter readings for all interactions (1 through
28).
• Clicking on INIT will display the PAL counter readings for the original baseline
(interactions 1 through 16).
• Clicking on DELTA will display PAL counter readings of the difference between
the original baseline (INIT) and the final FULL reading (interactions 17 through
28).

For any PAL reading, beginning with the INIT data and adding all of the DELTA data
should give the FULL reading.
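That relationship can be expressed as a simple consistency check. The sketch below uses the interaction counts from the example above (16 for INIT, one DELTA of 12, 28 for FULL); it is an illustration, not tool output:

```python
# Sketch: for any PAL counter, INIT plus all DELTA readings should equal FULL.

def full_from_deltas(init, deltas):
    """Reconstruct the FULL counter value from the INIT baseline and DELTA readings."""
    return init + sum(deltas)

init_interactions = 16       # baseline: signon plus the PAL reading itself
delta_interactions = [12]    # one DELTA covering interactions 17 through 28
assert full_from_deltas(init_interactions, delta_interactions) == 28
```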

Note that as the number of interactions increases, performance may decrease, as the
system has to do more work passing information to and from the server. For each
screen in a Work object, one or more interactions with the server are required to render
the screen; a screen may be considered a collection of interactions.

If a large number of interactions is required to render one screen (15–20 or more),
investigate for performance issues. Each trip to the
server may not require a lot of CPU processing (so total CPU might not be high), but the
trips themselves take time.

Detail Display
Clicking on the name of the INIT, DELTA, or FULL type of readings will display a detail
screen of the data for that reading:

Information on using these readings begins in the next main section of this document.
More detailed information on each of the PAL readings is found in the PAL Readings in
Detail section.

PAL Features
There are a number of action links at the top of the PAL display.

Add Reading
Clicking on Add Reading will take another reading of the PAL data, and add a Delta
reading to the Performance Readings display.

Add Reading with Clipboard Size


This function will not create any difference in the display of the summary view of the
Performance readings; it will only show a change in the Delta detail screen.

Clicking Add Reading with Clipboard Size will add a Delta reading. When the Requestor
Summary section of that Delta detail is viewed, the Requestor Clipboard Size (bytes)
reading will show the estimated size of the Clipboard in memory (in bytes). This includes
the clipboard pages that belong to all the threads of that particular requestor.

If the Add Reading with Clipboard Size is not clicked, this reading will display zero (as it
does in the Detail example in the previous section).

NOTE: The Requestor Clipboard Size reading is an expensive operation, which is why
it must be requested (rather than running automatically, like the other readings).

Reset Data
This function resets all the displayed data for the requestors to zero.

The above display shows that almost all the counters have reset to zero. The ones that
did not are the ones involved in creating the PAL display itself. Thus, the time shown for
Total Elapsed and for Total CPU is the time required to create the PAL reading. The Total
Rules Used and the one Activity also relate to the PAL reading.

Save Data
Clicking on Save Data will save all of the data that is currently in the PAL screen to a
comma-delimited file.

NOTE: This file will be stored on the client machine (the user’s PC). The user is
prompted to name the file and point to the directory where this file will be stored.

The file may then be reviewed by using a program that reads CSV format (such as Excel).
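Because the file is plain comma-delimited text, it can also be loaded programmatically. The sketch below assumes a simple layout with a header row; the header and rows shown are invented, so inspect a real saved file for its actual column layout:

```python
import csv
import io

# Sketch: load a saved PAL data file into a list of dicts, one per reading.
# The sample content below is invented for illustration.

def load_pal_csv(file_obj):
    return list(csv.DictReader(file_obj))

sample = io.StringIO("Reading,Total CPU,Interactions\nFULL,11.89,28\nINIT,0.95,16\n")
rows = load_pal_csv(sample)
print(rows[0]["Total CPU"])  # "11.89"
```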

For information on the Start DB Trace and DB Trace Options choices, please reference
the DBTrace section of this document.

The PAL Detail Screen


Overview
PAL is not intended to point directly to the source of a perceived performance issue.
Instead, the PAL counters can indicate problem areas. The research process to pinpoint
possible performance issues in an application starts by looking at either Total CPU or
Total Elapsed time, and then following that data to other related readings.

The basic process for researching performance issues follows these steps:

1. Take PAL readings (as described in the first section of this document).

2. Look at the Total CPU number to determine whether it is a high, medium, or low value.

3. Compare Total CPU to Total Elapsed to see where to begin the investigation.

4. If Total CPU is high:

A. Review the specific CPU numbers available in the CPU section to determine
where the CPU time was spent (by percentage).
1. If one reading dominates the group (62% of the Total CPU time was spent
retrieving Rule database lists, for example), then investigate that area of
the application.

B. If the specific CPU readings are inconclusive, move on to the Rule Execution
counts, and see if one number stands out there.

C. Check Database Access and/or Requestor Summary readings.

5. If Total CPU is a mid-level value, compare to Total Elapsed to see if that value is high
or low.

6. If Total CPU is low, then that part of the system is fine – go on to check Total Elapsed.

7. If Total Elapsed is high:


A. Review the specific Elapsed numbers available in the Elapsed section to
determine where the elapsed time was spent (by percentage).
1. If one reading dominates the group (62% of the Total Elapsed time was
spent retrieving rule-resolved Rules from the database, for example), then
investigate that area of the application.

B. If the specific Elapsed readings are inconclusive, move on to the Rule Execution
counts, and see if one number stands out there.

C. Check Database Access and/or Requestor Summary readings.

Using PAL to understand response time


As stated in a prior section, response time is the time it takes for the system to return from
a processing step (saving a work object to the database, opening a New Work object).
There are several factors which make up response time:

Network time – includes time spent:

• sending data to and from the browser
• sending data to and from the database or other external resource
• interactions between the systems

Response time from databases or other external resources – includes:

• time spent waiting for responses from other resources
• frequency of access to the other resources

CPU time – includes time spent on:

• application/business processing
• Rule Resolution
• looking up and invoking Declarative Rules

Rules Assembly time – includes time spent on:

• assembling the rules
• database I/O – looking up rules not currently in the cache
• Java compilation time

Troubleshooting process
In order to pinpoint whether a performance issue is due to the network being slow (rather
than inefficiencies in the application itself), begin by measuring the response time for an
item, either manually or with a test tool. Compare that number
against the Total Elapsed PAL reading. Elapsed time measures the time spent
processing inside the PegaRULES Process Commander application to fulfill the request.
The total elapsed time is the time the server (the JVM) required to complete a transaction.
NOTE: This number is impacted by other threads running in the JVM (including garbage
collection).

If the user-measured response and the Total Elapsed times are compared and are
reasonably close, this means that the network latency and the time spent sending data
across the network are minimal – the time the user sees the system spending is being
spent processing inside the application. At this point, further investigation in the
application itself is indicated.

However, if these two times are substantially different, then the network is impacting the
response time. If a problem is indicated here, then more work must be done to isolate
where in the network the problem is occurring.
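The comparison above can be sketched as a simple rule of thumb. The 20% tolerance below is an illustrative assumption, not a Pega guideline:

```python
# Sketch: decide whether to investigate the application or the network by
# comparing the user-measured response time with the Total Elapsed PAL
# reading. The tolerance value is an assumption for illustration.

def next_step(measured_response, total_elapsed, tolerance=0.20):
    if measured_response <= total_elapsed * (1 + tolerance):
        return "investigate application"   # times are reasonably close
    return "investigate network"           # large gap: time lost outside the server

print(next_step(measured_response=1.1, total_elapsed=1.0))
print(next_step(measured_response=4.0, total_elapsed=1.0))
```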

The problem might be with the network itself, or it might be with the browser trying to
execute JavaScript to display data. Test tools can be used to execute work objects
without running the JavaScript, which would remove the second part of that equation. If
there is still a problem, the network is where the slowdown occurs. If the problem
disappears when the JavaScript is not run, then it is necessary to look further into the data
displays.

There could be two different issues with the network itself:

• the network is slow
• PegaRULES is overloading the network by trying to move too much data

To determine which issue is occurring, look at whether the network is “saturated” with
data: divide the number of bytes sent across the network by the number of interactions
(in other words, how many bytes, on average, are in a single request). Use:

• Number of Input Bytes received by the server
• Number of Output Bytes sent from the server

These readings will give the developer an idea of whether there is too much data being
sent across the network by PegaRULES, or whether the network itself is having problems.
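The per-interaction arithmetic can be sketched as follows; the byte counts used are invented, and what counts as "too much" data is not defined by the product:

```python
# Sketch: average bytes moved per interaction, using the two PAL readings
# named in the text. The sample values are illustrative.

def avg_bytes_per_interaction(input_bytes, output_bytes, interactions):
    return (input_bytes + output_bytes) / interactions

avg = avg_bytes_per_interaction(input_bytes=150_000,
                                output_bytes=2_400_000,
                                interactions=28)
print(f"{avg:,.0f} bytes per interaction")
```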

Once the issue is isolated to the application, the PAL detail screen can be used to further
pinpoint the problem.

Interpreting PAL Detail Readings


The first PAL screen shows a number of summary readings.¹ Some of these are the
same as the detail readings; other numbers are summed from several of the detail stats.
This summary section is then repeated at the top of the detail display.

Analysis of the application’s performance should start with the Detail readings behind
these Summary statistics. As described in the first section of this document, take
PAL readings of the business process to be reviewed. After running the process, click on
the FULL or DELTA reading to look at the Detail screen.

On the Detail screen, begin the analysis by comparing Total CPU and Total Elapsed. A
quick review can point the developer to the appropriate section for more detailed
investigation. If the CPU numbers seem reasonable, but the Elapsed time is high, then
begin by looking at Elapsed. If both Elapsed and CPU time look high, begin with Total
CPU.

¹ The PALGetDetail activity in Code-Pega-PAL returns a Code-Pega-PAL page, which contains all
the PAL readings and will display them when the Performance screen is opened.

Total CPU Time


As stated earlier, CPU readings are not tracked for systems running on UNIX platforms, as
the tracking was determined to be too resource-expensive.

For systems where CPU is tracked, start with the Total CPU number. On the Detail
screen, Total CPU time for the reading(s) is the time, in seconds, that the PegaRULES
server CPU spent processing the current action. CPU readings are very important to
performance analysis.

For an application with a suspected CPU issue, the Total CPU should be measured first
for one screen. After displaying the PAL Summary screen (to establish the baseline),
open a Work object to display the first screen; click on Add Reading in PAL. The DELTA
line should give the Total CPU required for this first screen.

Based on whether the Total CPU reading is High, Medium, or Low, different analysis
paths may be chosen.

CPU Reading   Description

High          A CPU reading could be considered high if it was over a
              half-second per screen.

Medium        Between .2 and .5 seconds per screen.

Low           Under .2 seconds per screen.

Important notes about CPU:

• A “CPU second” means that the CPU is at 100% for that user for
that second. If more than one user hits that point in the process
at the same time as the first user, then the second user will have
to wait until there is CPU capacity for their processing.

• The above measurements are average numbers, to give
application developers guidelines on where they should spend
their time reviewing the performance of the application. Note
that the above numbers are weighted and based on the CPU
speed of the PegaRULES server being used. PegaRULES
uses a 2GHz Intel Pentium server as its reference for the above
CPU numbers; customers should adjust these numbers up or
down based on their production server CPU speed.

• These CPU numbers may change depending upon the type of
application being run. For a low-volume, very complex
application with lots of processing, the values for low, medium,
or high CPU usage should be higher. For a high-volume
application with simple screens, the CPU usage numbers might
be lower. For example, if a process formerly taking three hours
is automated onto one screen, and that screen takes five
seconds to process, although that might seem like a very high
CPU reading, that is still a huge time savings for the company in
terms of overall process, and would be considered appropriate
performance. The same five-second CPU time for a high-volume call-center
application would be completely unacceptable.
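Since the guideline numbers are referenced to a 2GHz server, a site-specific adjustment might be sketched as follows. The linear scaling by clock speed is a simplifying assumption, not a Pega formula:

```python
# Sketch: scale the High/Medium CPU guidelines (referenced to a 2 GHz
# server) by the ratio of reference speed to actual server speed.
# Linear scaling by clock speed is an assumption for illustration.

REFERENCE_GHZ = 2.0
GUIDELINES = {"high": 0.5, "medium": 0.2}   # seconds per screen, from the text

def adjusted_guidelines(server_ghz):
    factor = REFERENCE_GHZ / server_ghz
    return {level: seconds * factor for level, seconds in GUIDELINES.items()}

print(adjusted_guidelines(4.0))   # faster server: tighter thresholds
```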

High Total CPU Reading

There are a number of reasons that an application might have a high CPU reading. These
reasons include (but are not limited to):

1. Excessive I/O to databases.

An application may be requesting a list of a thousand rows from a table in the database;
not only does the system have to extract this large amount of information from the
database, but then all the objects must be created to store this information on the
Clipboard in PegaRULES in order to use the information. If only one or two entries in this
huge list are actually required, these queries can be exceedingly inefficient.

If the developer believes this might be happening, the Database Access counters can
indicate the amount of data being requested from the database.

2. Improper Use of Clipboard

On the clipboard, are items being reused, or are they being newly created each time, and
then destroyed after use? Every item that is created and then removed must be managed
(object creation and garbage collection), using up resources.

3. Rules Assembly

If the PAL readings were done the first time that a process is run in the PegaRULES
system, the Rules Assembly numbers may be very high (as the Rules Assembly is done
on the first use of most Rules). Run the process a second time, and if the Total CPU
process is much lower, this was in all probability a Rules Assembly problem.

4. Complex or Excessive HTML

The business processes build HTML for the user interface. These HTML screens could
range from being very simple to quite complex, with many different sections all having to
be processed every time the screen is updated. If all of the processing is on one screen,
then everything on that screen must be processed every time one change is made.
Breaking up the display among several screens can help prevent excessive unnecessary
processing.

5. Inappropriate use of Declarative Rules

Declarative Expressions may be set to run whenever any of their properties are changed.
If the Target Properties aren’t needed for every transaction, it may be more efficient to set
the Expressions to calculate when they are used (“Whenever Used”), to prevent
unnecessary calculation of the Expressions.

6. Too Many Rules Executed

It is possible to create interactions where the system must work through thousands and
thousands of Rules (rules may be run repetitively, or the process may loop). This could
cause a performance slowdown.

To determine where the problem of high CPU originates, look at the CPU Time Detail
section of the Detail screen:

NOTE: Not all of these readings may contain data.

Add up all the CPU counters, and subtract that number from the Total CPU Time reading.
The difference between the sum and the Total CPU is directly related to the processing of
the application. The direct actions of querying the database and getting the Rules ready
for execution are tracked by the PAL readings; the actual application processing is not, as
it differs for every customer.

If any of the counters are a high percentage of the total, that needs to be investigated.
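The arithmetic described above can be sketched as follows. The counter names and the 30% "dominant" cutoff are illustrative assumptions; the values echo the example reading discussed in this section, with the smaller counters collapsed into one entry:

```python
# Sketch: sum the individual CPU counters, attribute the remainder to
# application processing, and flag any counter that dominates the total.

def analyze_cpu(total_cpu, counters, flag_at=0.30):
    tracked = sum(counters.values())
    untracked = total_cpu - tracked          # application processing, not PAL-tracked
    flagged = {name: value / total_cpu
               for name, value in counters.items()
               if value / total_cpu >= flag_at}
    return untracked, flagged

counters = {"retrieving Rules": 1.77, "performing Rules Assembly": 1.75,
            "compiling Rules": 5.49, "other tracked": 0.60}
untracked, flagged = analyze_cpu(11.89, counters)
print(flagged)   # "compiling Rules" stands out at roughly 46% of Total CPU
```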

Example:

From the above example, a DELTA reading was taken after a Work object was
processed. This delta reading shows that Total CPU was 11.89 seconds.

CPU PAL Reading                                        Value (seconds)   Percent of Total
Total CPU                                              11.89             100%
retrieving rows other than Rules                        .19              1.6%
retrieving Rules                                       1.77              14.88%
executing Declarative Rules                             .14              1.18%
retrieving Declarative Rules                            .22              1.85%
processing Storage Streams – non-Rule database lists    .05              .42%
performing Rules Assembly                              1.75              14.71%
compiling Rules                                        5.49              46.17%

In the above example, CPU time compiling Rules is taking almost half the Total CPU.
This reading measures the CPU time spent compiling Rules for execution; thus, it may be
that the Rules Assembly process was being executed here, and another reading of the
same process may result in lower readings.

If looking at the individual CPU readings is inconclusive (they are all low, or they are about
the same percentage and none stands out), then the number counters should be
investigated. Counters fall into several groups:

• Rule execution counters
• Database access counters
• Clipboard Page numbers

Check each one to see where the volume of the processing is, to determine whether there
is an isolated problem or a volume problem.

Begin with the Rule Numbers, in the Rule Execution section:

The Rule Execution counters are measures of volume (how many Activities ran, how
many Flows, etc.). Individually, these counters may not be able to give a developer a
good indication of a problem (unless the numbers are huge – if you are running 800
Activities just to open a Work Object, then that is probably too complex a process).
However, looking at these counters as a group can give a developer a picture of the level
of complexity of the Work Object process.

If there is a large count in one of the Rule execution counters (Activities, Flows, Models,
Streams, etc. Executed), then focus there; use Tracer (not DBTrace, which is for
database) to find out what part of your business process is consuming a lot of CPU.

If the numbers for these counts are low (even if the CPU is high), then there is not an
issue with the Rule Execution processing being done, and the investigation continues.

For details on Database Access counters or Requestor Summary readings, please see
those sections below.

If none of the counters point to something definitive, the problem may be in something that
is not being directly measured, which would be in the application processing itself.

Medium Total CPU Reading

If the CPU is not a high reading, but it is a medium value, that is generally a bit higher than
recommended, but not necessarily a problem.

To determine whether there is a problem, compare the Total CPU Time number with the
Total Elapsed Time. What percentage of the Total Elapsed is Total CPU? If the ratio is
high – i.e., the CPU time is a large percentage of Total Elapsed – then investigate the
CPU number, even if the Total CPU number is not that large. (A high ratio may indicate
too much CPU time being spent processing data.) If the ratio is low, then CPU is probably
not the issue.

Move on to the next investigative area, but keep an eye on the CPU numbers; a problem
of scale (adding many users) may develop in the future. (NOTE: If CPU is high, there is
already a scaling problem – more users cannot be added without maxing out the system
capacity.)
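The ratio check described above can be sketched as follows; the 0.5 cutoff for a "high" ratio is an illustrative assumption, not a documented threshold:

```python
# Sketch: the ratio of Total CPU to Total Elapsed. A high ratio with a
# medium CPU value suggests investigating CPU anyway; a low ratio points
# elsewhere. The cutoff is an assumption for illustration.

def cpu_ratio(total_cpu, total_elapsed, high_cutoff=0.5):
    ratio = total_cpu / total_elapsed
    return ratio, ("investigate CPU" if ratio >= high_cutoff else "look elsewhere")

print(cpu_ratio(0.4, 0.6))   # medium CPU, but a large share of elapsed time
```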

Low Total CPU Reading

Low Total CPU readings are generally healthy (just like low cholesterol). The developer
could check the ratio of the Total CPU Time to Total Elapsed Time, to see what
percentage of Total Elapsed the CPU consumes. If this ratio is high, and the overall Total
CPU number is low, then that signifies that the majority of time that is being spent on a
Work object is for processing, and that the processing is efficient (low CPU). This shows a
well-designed application.

Total Elapsed Time


If the CPU investigation is inconclusive, the next major step in the analysis is to look at
Total elapsed time for the reading(s).

As with the Total CPU reading, based on whether the Total Elapsed reading is High,
Medium, or Low, different analysis paths may be chosen.

Total Elapsed time
for the reading(s)   Description

High                 An Elapsed reading could be considered high if it was over a
                     second per screen.

Medium               Between .5 and 1 second per screen.

Low                  Under .5 seconds per screen.

For this reading, the focus will be on Elapsed time counters which are disproportionate to
the CPU time, or that don’t have CPU associated with them (such as Elapsed Time
executing Connect Rules) when the number is large.

High Total Elapsed Reading

Connections to other systems and database times are typical problems in Elapsed time
readings – the time spent getting data from other systems. There may be too many calls
to another system, or the other system may be slow in responding.

Note that if Java steps are used in Activities, it is not possible for PAL to measure those
directly. The focus for PAL will be on accessing data outside the scope of the system,
when the access was not done with a Connect Rule (Rule-Connect-Java, for example).

To determine where the problem of high elapsed time originates, look at the Elapsed Time
Detail section on the DELTA Detail screen:

NOTE: Not all of these readings may contain data.

Add up all the Elapsed counters, and subtract that number from the Total Elapsed Time.
The difference between the sum and the Total Elapsed is again related to the processing
of the application. The direct actions of querying the database and getting the Rules
ready for execution are tracked by the PAL readings; the actual application processing is
not, as it differs for every customer.

If any of the Elapsed counters are a high percentage of the total, that needs to be
investigated.

Example:

From the above example, a DELTA reading was taken after a Work object was
processed. This delta reading shows that Total Elapsed was 14.28 seconds:

PAL Reading                                        Value (seconds)   Percent of Total
Total Elapsed Time                                 14.28             100%
retrieving rows other than Rules from database      2.49             17.43%
retrieving Rules from the database                  4.01             28.08%
executing Declarative Rules                          .01              .07%
retrieving Declarative Rules                         .33              2.3%
retrieving non-Rule database lists                  1.06              7.42%
processing Storage Streams – non-Rule                .06              .42%
retrieving lists of Rules                            .1               .7%
performing Rule assembly                            1.93             13.52%
compiling Rules                                     4.75             33.26%
executing Connect Rules                              .95              6.65%
writing to the database                              .05              .35%

In the above example, retrieving Rules from the database and compiling Rules both take a
third of the Total Elapsed Time. Again, these readings measure the elapsed time spent
retrieving and compiling Rules for execution; thus, it may be that the Rules Assembly
process was being executed here, and another reading of the same process may result in
lower readings.

Special Case: If the Total Elapsed Time for the Reading(s) is a high number, but the
individual CPU times are low, then there might be another process running in the JVM that
is consuming large amounts of resource and impacting response time. This might be:

• an agent (set to wake up too frequently, like every second)
• garbage collection (again, happening too frequently)
• the JVM itself
• listeners (for email and other processing – check file listeners and MQ)

This issue may be pinpointed by looking at the Process CPU, which is displayed at the
top of the detail screen:

To check whether another process is running on the JVM, take one PAL reading, wait
maybe 10 seconds, and then take another, without doing any other actions (like opening
work objects). If the Process CPU number at the top of the next Detail screen changes
noticeably, then that shows that the CPUs are busy working on other processing.
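That idle-comparison check can be sketched as follows; the 5% change threshold is an illustrative assumption:

```python
# Sketch: compare two Process CPU readings taken about ten seconds apart
# with no intervening user activity. A noticeable change suggests other
# work (agents, GC, listeners) is consuming the JVM. The threshold is an
# assumption for illustration.

def other_jvm_work(first_reading, second_reading, threshold=0.05):
    change = abs(second_reading - first_reading) / first_reading
    return change > threshold

print(other_jvm_work(first_reading=120.0, second_reading=131.0))  # True
```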

Database Access Counts


The Database Access counters show the counts of Rules (or non-rules, like Work- or
Data-) that were read from the database. (This is not the number that was actually
executed - that would be Rule Execution.) Database Access shows the number of rules
that needed to be read in order to determine the Rules to execute. For Rule Resolution
(for example), the system could load many rules, which are then all evaluated to choose
the one best rule for the situation. Looking at the Rules accessed shows how much
database access is done before the chosen Rule gets executed; a high number of
retrieved Rules for each Rule executed could indicate that the system is doing a great deal
of database access in order to find the Rules to execute, and might need to be optimized.

These readings can also show whether there is a problem with database response speed.
The PegaRULES timers can show the time that the PegaRULES system spends
processing, but there is no way to directly track the amount of time the third-party relational
database spends processing; that information may only be found indirectly.

CPU readings will not necessarily indicate whether there is a problem with database
response times. Processing one massively complex SQL query may take several full
seconds and return only one row. This will not be counted as a CPU time issue, as CPU
measures the time the PegaRULES system is processing – not the database processing
time. In the case above, the long query would not be seen in the CPU reading, but in the
Elapsed time reading, which shows the full time (CPU and other, such as database
processing) that was required to return results from the database.

Database counters can help specify whether the problem is with a small number of the
aforementioned massively complex queries, or whether the issue is an excessively high
number of queries (even though they may be simple queries).
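One way to frame that distinction is sketched below; both thresholds are invented for illustration and should be tuned to the application:

```python
# Sketch: distinguish "a few slow queries" from "too many queries" using
# a query count and total database elapsed time. Thresholds are
# illustrative assumptions.

def diagnose_db(query_count, total_db_elapsed,
                many_queries=100, slow_avg_seconds=0.5):
    avg = total_db_elapsed / query_count
    if avg >= slow_avg_seconds:
        return "few complex queries"     # each query is expensive
    if query_count >= many_queries:
        return "excessive query volume"  # cheap queries, but too many
    return "database access looks reasonable"

print(diagnose_db(query_count=3, total_db_elapsed=4.2))
print(diagnose_db(query_count=400, total_db_elapsed=8.0))
```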

Database queries come in two types:

• queries as a result of the Rules that are running
• queries that are the result of the actions of running Rules

Queries that are a result of the Rules that are running – The act of needing a Rule (to run)
may cause database queries to occur, when processing such as Rule Resolution takes
place (which is trying to determine the best Rule to run in a specific situation).

Queries that are the result of actions of running Rules – This type of query is involved in
the actual actions the Rules are performing: when a report is run, or when an Obj-Open
method or an RDB-Save is used; these rules actually perform work on the database.

When a form includes a lookup list (the fields which have the small blue triangle which
indicates that using the down-arrow will create a list of appropriate entries for that field),
that involves a lot of processing and database lookup. Lists can be either Obj-List or RDB-
List requests to the database, and the following information is tracked:

• how many rows were processed for each list request
• how many of those rows required that data be extracted and uncompressed from
the Storage Stream

Extracting data from the Storage Stream is a very expensive process, which should be
avoided if at all possible.

Storage Stream

All of the PegaRULES data is stored in the Storage Stream column, which is also known
as the BLOB – the Binary Large OBject column. This column is currently part of all
PegaRULES database tables. It is used to store data that is structured in forms that do
not readily “fit” into the pre-defined database columns, like properties that contain multi-
dimensional data such as pages, groups or lists. Some PegaRULES data may also be
stored in “exposed” columns in the database table; these are separate columns in the
table which hold data from scalar properties.

When the Storage Stream is present in a database table, all the data in each instance
written to that table is stored in the Storage Stream column, including some of the data
which might also be in exposed columns in the table. The reason for this storage
arrangement is twofold:

• it is not possible to run reports on the data in the Storage Stream, so any
properties upon which queries will be run must be in exposed columns
• when extracting data from the database, since some of the information is stored
in the Storage Stream and some in exposed columns, it is faster for the system to
read it all out of the Storage Stream

Storage Stream data handling is slow, and it takes up a lot of space. Due to the volume of
data stored in this column, PegaRULES compresses the data when storing it in the
column, using one of several compression algorithms (“V4,” “V5,” or “V6”). When data is
requested from the Storage Stream, it must be read from the column and decompressed
onto a temporary clipboard page; then, since the Storage Stream includes all the
properties in a particular table entry, the data must be filtered, so that only the requested
properties are returned; the temporary clipboard page is then thrown away.

If this entire process is run to only extract one property from the Storage Stream, that is a
highly inefficient and costly setup. It may be that certain properties which are frequently
reported on should be exposed as columns in the database table, if they are continually
read from the Storage Stream. This then allows them to be read directly, rather than
going through the above process.

Database vs. Cache

Some of the Database Access counts also display the number of times that data is found
in a system cache. There are a number of different caches in PegaRULES, which help
optimize processing. If a Rule is required for some processing, the first thing that is done
is to check the cache to see if that Rule has been accessed before in the current system
setup. If it has, the system doesn’t have to go to the database to retrieve this Rule;
accessing the Rule from the cache is much more efficient than having to retrieve it from
the database - especially if this Rule has data in the Storage Stream that must be
retrieved.

Once it has been determined that requested information is not in the cache, and the
system must go out to the database for that information, it will then be stored in the cache
for the next request. Thus, the longer users work on the system, the more efficient their
processing should be. If this expected efficiency increase does not occur, the developer
should investigate the situation. (Perhaps all the users have different RuleSets, which
prevents reuse in the cache, or perhaps there is so much information being cached that
the cache is running out of room and clearing out the older entries; or perhaps there is
some other problem.)

Likewise, once the system has been running for some time, all the Rule Resolution should
be done and cached. If the Rule-resolved Rules continue to be accessed from the
database, with all the processing that requires, there is something wrong with the system
setup.

Requestor Summary
The Requestor summary shows general information about this requestor (the user’s
interaction with the system). Several key readings in this section include:

• Clipboard pages are tracked in the Number of Named Pages Created reading.
The number of named clipboard pages, if high, indicates complexity in the
clipboard structure. This might be where a lot of the processing is happening. If a
great deal of information is being loaded onto the clipboard from database
queries, and hundreds of Named Pages are being created, that will impact
performance.

• Number of Input Bytes received by the server and Number of Output Bytes
sent from the server show how much work is being done to render the screen
for the user (the HTML information necessary for the browser). High values might
indicate an unnecessarily complex screen.

Database Requests Exceeding Threshold

The limit on the amount of time one database query should require is set by default to a
half-second (500 milliseconds). In the Requestor Summary section, the counter Number
of database requests that exceeded the time threshold shows how many database
requests in one measurement went over this threshold. If database requests exceed the
threshold, this might indicate overly complex queries. Investigate this possibility by using
DBTrace (see the DBTrace section of this document).

The threshold may be overridden in the pegarules.xml file by changing the value of the
operationsTimeThreshold entry:

<node name="database">
<map>
<entry key="operationsTimeThreshold" value="500"/>
</map>
</node>
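The counting behavior described above can be sketched as follows. This is a hypothetical Java illustration - the class, method names, and message format are invented, not taken from the engine source:

```java
// Sketch of the over-threshold counter: any single database operation whose
// elapsed time exceeds the configured threshold is counted and flagged.
public class DbThresholdMonitor {
    private final double thresholdMs;
    private int requestsOverThreshold = 0;

    public DbThresholdMonitor(double thresholdMs) { // e.g. 500, as in pegarules.xml
        this.thresholdMs = thresholdMs;
    }

    /** Record one database operation; returns a warning line if it was too slow, else null. */
    public String recordOperation(double elapsedMs) {
        if (elapsedMs <= thresholdMs) {
            return null; // within the threshold: nothing to report
        }
        requestsOverThreshold++;
        return "WARN - Database update took more than threshold of "
                + (long) thresholdMs + " ms: " + elapsedMs + " ms";
    }

    /** The value surfaced in the Requestor Summary as the over-threshold count. */
    public int getRequestsOverThreshold() {
        return requestsOverThreshold;
    }
}
```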

In addition to displaying the requests exceeding the threshold in the PAL Detail screen,
this information is written to the PegaRULES-ALERT-yyyy-mmm-dd.log file (example:
PegaRULES-ALERT-2005-Nov-17.log). For each interaction where the threshold is
exceeded, an entry is made in the log file. This entry includes:

• the warning “WARN – Database update took more than threshold of 500 ms” and
the measurement of the amount of time the request took
• the SQL query that caused the alert
• the substituted data for the query

Example:

02:59:02,328 [sage Tracking Daemon] (base.DatabasePreparedStatement) WARN
- Database update took more than threshold of 500 ms: 520.1726808741121 ms

02:59:02,328 [sage Tracking Daemon] (base.DatabasePreparedStatement) WARN
- SQL: insert into pr4_log_usage (pxActivityCount , pxCommitElapsed ,
pxConnectCount , pxConnectElapsed , pxDBInputBytes , pxDBOutputBytes ,
pxDeclarativeRulesInvokedCount , pxFlowCount , pxInputBytes , pxInsName ,
pxInteractions , pxJavaAssembleCount , pxJavaCompileCount , pxObjClass ,
pxOtherBrowseElapsed , pxOtherBrowseReturned , pxOtherIOCount ,
pxOtherIOElapsed , pxOutputBytes , pxProcessCPU , pxRequestorID ,
pxRequestorStart , pxRequestorType , pxRuleBrowseElapsed ,
pxRuleBrowseReturned , pxRuleCount , pxRuleIOElapsed , pxServiceCount ,
pxSnapshotTime , pxSnapshotType , pxSystemName , pxSystemNode ,
pxTotalReqCPU , pxTotalReqTime , pyUserIdentifier , pzInsKey) values (? ,
? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ?
, ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ?)

02:59:02,328 [sage Tracking Daemon] (base.DatabasePreparedStatement) WARN
- inserts: <0> <0.0> <0> <0.0> <0> <0> <0> <0> <0>
<BD65DE7294464B6265522D2AA7A414F1E!20051120T075744.501 GMT> <0> <0> <0>
<Log-Usage> <0.0> <0> <0> <0.0> <0> <112.6875>
<BD65DE7294464B6265522D2AA7A414F1E> <2005-11-20 02:57:44.501> <BATCH>
<0.0> <0> <0> <0.0> <0> <2005-11-20 02:57:44.501> <INITIALIZE> <wfe>
<sdevrulesb> <0.0> <0.0> <<null>> <LOG-USAGE
BD65DE7294464B6265522D2AA7A414F1E!20051120T075744.501 GMT>

DB Trace
DBTrace is a tracing facility to assist in tuning system performance. If users perceive
that work items in the system take a long time to process, the DBTrace facility might help
point to where the time is being spent. DBTrace will display a great deal of low-level
detail about system-level interactions.

This function records the sequence of SQL operations that PegaRULES performs during
processing, such as reads, commits, etc. Unlike the Trace facility, it is not possible with
DBTrace to do real-time viewing of the operations data. Instead, DBTrace gathers the
data into a text output file which contains all the low-level database operations that
PegaRULES performs. Then the Excel template included in DBTrace formats this data for
viewing.

DBTrace should be used when:

• the Database Access counts are high
• Elapsed Time is high (especially if CPU time is low)
• the Database Threshold is exceeded

If a very complex SQL query to the PegaRULES database is taking a long time to return,
DBTrace can give further information on details of the query.

Preparing a DBTrace
Before beginning a DBTrace, determine what aspects of the application are being
researched. Click on DB Trace Options to get the Database Trace Events screen.

All of the Database Trace Events listed on this screen are database operations which can
be traced. An event is one instance of the chosen item happening – one Commit, one
instance of the blob being read, etc. One line will be printed in the Trace each time a
checked event occurs.

Depending upon the focus of the problem area, different items may be checked. For
example, if there is an issue with sending SQL statements to the database, the Prepared
Statement items would definitely be chosen, but it is not necessary to track reading and
writing to the blob. Likewise, if there is an issue with caching, it is not necessary to track
the SQL statements.

The output for DBTrace is read into several worksheets in an Excel workbook (see
following sections for details). Depending upon what Events are checked in the DB Trace
Options, the data in these worksheets will change.

NOTE: By default, all of the Trace events are checked. Thus, if the DB Trace Options
step is skipped, and the DB Trace is immediately started, all of these events will be traced.

Checking Generate Stack Trace for any of the listed items will record a stack trace of
the Java code being called each time that event happens. Example stack trace:

com/pegarules/generated/activity/ra_activity_pegaaccel_integration_servic
es_partynewsetup_a998caa619a569b312d1790eab029512$PzStep3_circum0_Embed_W
orkPartyDef
at
com.pegarules.generated.activity.ra_activity_pegaaccel_integration_servic
es_partynewsetup_a998caa619a569b312d1790eab029512.step3_circum0_Embed_Wor
kPartyDef(ra_activity_pegaaccel_integration_services_partynewsetup_a998ca
a619a569b312d1790eab029512.java:486)
at
com.pegarules.generated.activity.ra_activity_pegaaccel_integration_servic
es_partynewsetup_a998caa619a569b312d1790eab029512.perform(ra_activity_peg
aaccel_integration_services_partynewsetup_a998caa619a569b312d1790eab02951
2.java:165)
at
com.pega.pegarules.engine.runtime.Executable.doActivity(Executable.java:2
186)
at
com.pegarules.generated.activity.ra_activity_pegaaccel_integration_servic
es_new_d20bf170ebd9ebef4cd8e7babf7d01e5.step12_circum0(ra_activity_pegaac
cel_integration_services_new_d20bf170ebd9ebef4cd8e7babf7d01e5.java:1442)
at
com.pegarules.generated.activity.ra_activity_pegaaccel_integration_servic
es_new_d20bf170ebd9ebef4cd8e7babf7d01e5.perform(ra_activity_pegaaccel_int
egration_services_new_d20bf170ebd9ebef4cd8e7babf7d01e5.java:416)
at
com.pega.pegarules.engine.runtime.Executable.doActivity(Executable.java:2
186)
at
com.pega.pegarules.engine.context.PRThreadImpl.runActivitiesAlt(PRThreadI
mpl.java:948)

Obviously, if Generate Stack Trace is checked for every event, tracing this information
will put a great deal of data into the DBTrace, and bring system performance to a near-
standstill. However, if there is one event that is happening many more times than it
should, the Stack Trace can help focus on precisely where the issue is occurring. For
example, if opening one work item calls 200 lists out of the database, it might be
necessary to do a Stack Trace to find out precisely which code is calling all these lists.

Due to its performance implications, Stack Trace should be used sparingly; it will produce
reams of information to be examined during troubleshooting. In the example above, look
at the activities surrounding the one work item calling 200 lists, to get some insight as to
why they are called; run Tracer to see where these activities start and end. As a last
resort, if it is still not clear where the 200 lists are being called, Pegasystems might
recommend that a customer use Stack Trace.

NOTE: Although the names in the Database Trace Events might be similar to the PAL
readings, these are not readings – they will list one instance of the event in the DBTrace.
They do not (for example) count and then add together the number of times a cache hit
happens, but print out one line in the DBTrace each time it happens.

Prepared Statements
Prepared Statements are SQL queries that PegaRULES sends to the database for any SQL
operations – reading records, updating records, deleting records, etc. If there is a
question on SQL, probably all three Prepared Statements items should be checked.

These events might be traced if the system is trying to get a list and no results are
returned, especially if the user thinks that there are valid results that should be
returned.

In certain cases where custom SQL is being used (such as reporting, or other functions
using Rule-Connect-SQL), the system cannot predict what SQL statements are included.
The Prepared Statements choice will track all custom SQL.

Prepared Statement Queries
If this Prepared Statement event is checked, a line will be added to the DBTrace
whenever information is being read (“queried”) from the database.

Prepared Statement Updates
If this Prepared Statement event is checked, a line will be added to the DBTrace
whenever records are being updated in the database (which includes inserting and
deleting).

Get Database Metadata
When a query is done on a database table, the cache is checked as part of the query to
see if there is information about what data is in the table (is there a blob column
[pzPVStream], which columns are exposed, etc.). If this information about the table is
not in the cache, then the system must go to the database and query the table itself to
get the metadata (information about the table). If this Event is checked, then a line is
added to the DBTrace whenever this happens.

If the system has been running for a while, this number should be low, as most of the
table information should be cached. Therefore, if there are many Get Database Metadata
events in the DBTrace, there may be a caching issue in the system, and further research
should be done.

Read blob
If this Event is checked, a line is added to the DBTrace every time a blob column entry
(Storage Stream) is read from the database. NOTE: This Event also reports the size of
the compressed data in the Storage Stream.

Write blob
If this Event is checked, a line is added to the DBTrace every time a blob column entry
is written to the database. NOTE: This Event also reports the size of the compressed
data in the Storage Stream.

Commit
If this Event is checked, a line is added to the DBTrace every time a commit is done in
the database. This is a database (low-level) commit, NOT the PegaRULES Commit
(Activity method).

Rollback
If this Event is checked, a line is added to the DBTrace if, in the middle of doing a
“high-level” commit (PegaRULES Commit – the Activity method), something goes wrong
and the transaction is rolled back.

Assign Connection to Thread
This Event tracks whenever one of the PegaRULES threads takes a connection (from the
connection pool) to the database to accomplish some function.

Cache Hit (Found)
When trying to get a Rule from the database, the database cache is checked first. If
the Rule is found in the cache, this event is recorded.

Cache Hit (not found)
When trying to get a Rule from the database, if the Rule is not in the cache, then the
database is checked. If the Rule is not in the database either, that Rule is then marked
in the cache as “does not exist”. The next time that Rule is requested, the cache reports
that the Rule does not exist (without having to check the database for this information),
and this event is recorded in the DBTrace.

Cache Miss
If the cache has no information about the Rule being requested (it may or may not be in
the database, but the database must be checked), then this event is recorded in the
DBTrace.

As the system runs, the application developer would expect to see more and more cache
hits and fewer cache misses. If the cache misses remain high, and the system is not in
development (i.e., constant changes are not being made which would invalidate cache
entries), then there may be an issue with caching which should be investigated.

Run List
If this Event is checked, a line is added to the DBTrace every time the system does an
Obj-List (which corresponds to the listAPI method in Java).

This event also shows whether the blob entry was retrieved, and what property required
going to the blob for this event. When obtaining information from the database, the
system first tries to retrieve data from exposed columns, as that is a less expensive
operation than retrieving the blob. If the property or properties requested are not
exposed as columns in the database table, then the blob entry must be returned, and all
the properties in the blob filtered to report only the list of properties that were
requested. If one particular property is frequently requested and continually requires
going to the blob and filtering the list, it might improve performance to expose that
property as a column in the database table.

NOTE: Checking this Event will not include tracking custom SQL.

List Result Count
If this Event is checked, a line is added to the DBTrace displaying how many results
are returned from doing a list.

Activity Start
If this Event is checked, a line is added to the DBTrace every time an Activity is
started, whether at a top level or as part of the processing of another activity. These
lines provide context for all the other events (which tend to center around activity
processing).

Activity End
If this Event is checked, a line is added to the DBTrace every time an Activity
finishes.

Running a DB Trace
After choosing the appropriate Events to track (and perhaps one or two Stack Traces),
click on Start DB Trace.

Once Start DB Trace has been clicked, the link changes to Stop DB Trace.

Run the business process being traced, and then click on Stop DB Trace. The
Download DB Trace to PC screen will display:

As the directions state, first click on Download or Display DB Trace. This will download
the text trace file to the user’s PC.

After the file has been downloaded, click on Start Excel to launch Excel. This should
open an Excel spreadsheet inside the browser, with a Macro button displayed:

As instructed, click this button, and choose the text trace file that was created. A great
deal of data will be displayed in the Excel spreadsheet.

A PivotTable Field List will be displayed. Unless you are an experienced Excel PivotTable
user, just click on the “X” in the upper right corner of the box to close it.

There are four possible worksheets in the Excel workbook:

• Interaction Summary
• SQL Calls
• Cache Access
• DB Trace Data

These worksheets have columns of data corresponding to the DB Trace Events chosen.

DB Trace Data
The DB Trace Data worksheet contains the list of all the events recorded for this DBTrace.
This worksheet is designed to be used with pivot tables; developers familiar with
Microsoft Excel pivot table technology can customize their own pivot tables to highlight
desired fields.

This worksheet contains the following columns:

Sequence
The order in which the events occurred. Example: sequential numbers (1, 2, 3, etc.).

Interaction
The number of the interaction. Example: 48.

Connection
The ID of the connection (from the connection pool) that was used for this event.
Example: 5.

RequestorID
The ID of the requestor which performed this event. Example:
H2CBF71A19AA669E2BE0A6B1EAA4FE7A5.
NOTE: For a requestor-level trace (the DBTrace), this should always be the same value.

User
The ID of the user associated with the requestor. Example: PALuser@pegasystems.com.
NOTE: For a requestor-level trace (the DBTrace), this should always be the same value.

Time(s)
The duration, in seconds, of this event. Example: 0.00281066275576739.

High Level Op ID
For every operation, there may be several events. For example, for the operation of a
request to get data from the database, there may be multiple events: checking the cache,
getting a connection to the database, reading the database, reporting data back. The
High-Level Operation ID is the number that links the several events which are part of
one high-level operation. Example: 3.

High Level Op
A more detailed textual description of the event. Example: open instance of class
Rule-Obj-Property by keys.

Label
If the Label property is filled in, this column could be used to label or categorize
Events as the processing is being traced (if desired).

Operation
The type of Event. Example: preparedStatementQuery.

Object Class
The Class of the instance upon which the Event is acting – the class being opened, or
saved, or from which the Activity is being started. Example: Rule-Obj-Activity.

Size
The size of the blob entry being read. Example: 5359.

Note
An English description of the Event. Example: Looking for instance Rule-Obj-Property
EMBED-ACTIVITYSTEPS!PYSTEPSPRECONDPARAMS in cache.

SQL
The SQL statement being run to accomplish the Event. Example: select pzInsKey,
pyRuleStarts, pyRuleSet, pyRuleSetVersion, pyClassName, pyRuleAvailable, pyRuleEnds,
pyCircumstanceProp, pyCircumstanceVal, pyCircumstanceDateProp, pyCircumstanceDate
from pr4_rule_property where pxObjClass = ? and pxInsId = ?

SQL Inserts
The values to plug into the Prepared SQL Statement. Example: <Rule-Obj-Property>
<!PYSTEPSPRECONDPARAMS>.

SQL Operations
The type of SQL operation. Could be one of four values: select, insert, update, delete.
Example: select.

Database Name
The name of the database – the value in Data-Admin-DB-Name. Example: PegaRULES.

Table Name
The fully-qualified name of the table on which the SQL statement is acting. Example:
pr4_rule_property.

Stack Trace
If Generate Stack Trace were checked for any Events in the DB Trace Options form, then
the stack trace would be displayed in this column.
NOTE: Generate Stack Trace should only be chosen when Pegasystems recommends it, as
this option has a significant effect on performance.

The top 20 events that took the most time in this trace are highlighted in yellow:

If a developer believes they have a very large query which is taking too much time to
process, they can check the details on this worksheet. The entries are for the requestor
being investigated, and show where this user is spending time in the database.
Sort on the Time column, so the top 20 interactions are at the top of the spreadsheet, and
look at the interactions that took the most time. For each individual query, DBTrace
displays exactly what was passed to the database, including the SQL statement and the
variables that were substituted. The Storage Stream reference is also included, so the
developer can see which rows are returned. The developer should see if a query could be
made faster, or if the data from the query is available from the system in a different way
that would not require a database query at all.

In addition to information on individual queries, DBTrace will show the frequency of each
query in the specified process. For a given interaction, the developer should examine how
many queries were run, and why – was the same query run more than once? If so, the
developer should look at the business logic of the application to see if this duplication
could be eliminated.

Example: Using DBTrace


The developer of a new application measured one of his processes using PAL. The
Requestor Summary section of the PAL Detail Screen showed 10 database requests
that exceeded the time threshold in this process.

The developer then turned to the DBTrace information for further details. After running
DBTrace, they clicked on the DBTrace Data tab, and saw the following information:

The Top 20 steps (measured by time) are highlighted in yellow. It is possible to rank all
the entries in the trace by clicking on the arrow to the right of the Time column header and
choosing Sort Descending:

This will then bring all the top 20 queries by time to the top of the spreadsheet:

The spreadsheet shows the queries which took more than 0.5 seconds, and gives
information about what processing was happening. In this example, the first two entries,
which had the highest times, were Interaction 24 and Interaction 26; they were both a “list
using a list spec”.

Looking at the SQL code, they were both executing exactly the same code:

Therefore, in this example, the developer would investigate why two lists were run (for
almost a full second of time, which is very high) almost one right after the other. If the
process is slightly redesigned, perhaps one of these queries could be eliminated.

Other DBTrace Spreadsheets


As mentioned before, depending upon what DB Trace Options are selected, some of the
columns may not appear in some of the spreadsheets. For example, if the SQL option
Prepared Statement Queries is not chosen, then that column will not appear in any of
the worksheets. If all three of the SQL events aren’t chosen (Prepared Statement
Queries, Prepared Statement Updates, Prepared Statements), then the entire SQL Calls
worksheet is omitted, with the following message:

For each of these worksheets, several dropdown choices are available, which allow the
analyst to show only part of the data. For each sheet, the first column shows the type of
data being collected on the spreadsheet (Interactions, SQL calls, etc.). So on the
Interaction Summary sheet, the first column displays the Interaction Numbers. The
dropdown there allows the developer to uncheck one or more interactions, to remove
them from the list. (All are checked and visible from the start.)

If any of these numbers are unchecked, then those rows will not display in the worksheet.

These “hidden” rows may be restored by checking the box in the dropdown again.

For each column, the data is summarized in three rows:

Sum
The total amount of time, in seconds, spent on this event type for the interaction.

Count
The number of occurrences of this type of event during the interaction.

Average
The average time per instance of this event (Sum divided by Count) for this interaction.

Totals for the entire DBTrace and across each interaction are also provided.
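
The three summary rows amount to simple arithmetic over the recorded durations. This hypothetical Java helper (names invented for illustration) assumes the input is the list of durations, in seconds, for one event type within one interaction:

```java
import java.util.List;

// Illustrative computation of the Sum, Count, and Average summary rows.
public class TraceSummary {
    public static double sum(List<Double> seconds) {
        double total = 0.0;
        for (double s : seconds) total += s; // Sum row: total time for this event type
        return total;
    }

    public static int count(List<Double> seconds) {
        return seconds.size();               // Count row: number of occurrences
    }

    public static double average(List<Double> seconds) {
        int n = count(seconds);
        return n == 0 ? 0.0 : sum(seconds) / n; // Average row: Sum divided by Count
    }
}
```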

The Data dropdown list allows the analyst to uncheck and remove one of these three row
types (Sum, Count, Average).

The Operation dropdown list shows all the possible operations (Cache Hits, Activity
Starts, etc.) with the appropriate events for that worksheet checked. For example, the
SQL operations are checked on the Interaction Summary and the SQL Calls
spreadsheet, but not on the Cache Access sheet. Again, unchecking one or more of
these event types will hide that column of data in the worksheet.

Double-clicking on one of these numbers will show a filtered list of the operations that
make up that number. For example, in the Interaction Summary sheet, there were 21
instances of End Activity in interaction 39:

Double-clicking on that number creates a new worksheet, with just those 21 events in
full detail:

NOTE: This information is filtered from the DB Trace Data worksheet, and has the same
columns as that sheet (described earlier in this document).

Interaction Summary

The Interaction Summary worksheet lists each interaction in the DBTrace (see the
Interactions section of this document for a definition of interaction). For each interaction, a
summary of the following types of requestor-level events is listed:

• assignToThread
• cachedNotFound
• cacheHit
• cacheMiss
• commit
• endActivity
• listResultCount
• preparedStatement
• preparedStatementQuery
• preparedStatementUpdate
• readBlob
• runList
• startActivity
• writeBlob

SQL Calls

The SQL Calls worksheet shows data about the SQL calls for the business process during
the DBTrace.

This worksheet contains the following columns:

• preparedStatement
• preparedStatementQuery
• preparedStatementUpdate
• Grand Total

Cache Access

The Cache Access worksheet tracks the number of times the cache was accessed when
the system requested various Rule instances.

This worksheet contains the following columns:

• cachedNotFound
• cacheHit
• cacheMiss
• Grand Total

These columns correspond to the Events checked in the DB Trace Options screen, and
will only have data if those Events were checked.

Global Trace
PegaRULES’ features include the ability to trace the various types of operations globally
(across the entire system), as well as just for one requestor. A full description of global
DBTrace is outside the scope of this document; for complete details, please contact your
Pegasystems representative.

If tracing for one requestor, the DBTrace facility should be enabled through the
Performance tool under the Administrator portal, as described above. A global
DBTrace is enabled differently, through the pegarules.xml file:

• the dumpStats entry under database must be set to true
• any specific Events to be traced should be specified under traceEvents

NOTE: If no events are specified, then all events will be traced.

<node name="database">
<map>
<entry key="dumpStats" value="true"/>
<entry key="traceEvents"
value="preparedStatementUpdate;preparedStatement;preparedStatementQuery"/>
</map>
</node>

A new file is created each time the PegaRULES engine is started. The file is stored in the
ServiceExport directory, and is named dbOperationstimestamp.txt, where timestamp is the
timestamp of the file.

NOTE: It is not possible to run the global DBTrace and the requestor-level DBTrace at the
same time. If the global DBTrace is being run, it will override the requestor-level DBTrace
(this is by design).

GLOSSARY: PAL Readings in Detail


This section describes each PAL counter on the Detail Screen, along with an indication of
how it should be used in a performance analysis of a system.

IMPORTANT: CPU readings are not available on UNIX systems.

Activities Executed

Property Name: .pxActivityCount

This reading measures the number of times Activities are started by a given Requestor.
This count is incremented when any Activity has started. Therefore, Activity 1 may call
Activity 2, which calls Activity 3; then in another step, Activity 1 calls Activity 2 again. At
the completion of Activity 1, the Activity Count reading will be 4.

NOTE: If an exception occurs during the execution of the Activity, the count is not backed
out.
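
The counting behavior in this example can be sketched as follows; the class and method names are illustrative only, not actual engine code:

```java
// Minimal sketch of the pxActivityCount behavior described above: the counter
// is incremented whenever an activity starts, including nested calls, and is
// not backed out if the activity fails.
public class ActivityCounter {
    private int activityCount = 0;

    public int getActivityCount() {
        return activityCount;
    }

    /** Run an activity body; the start is counted before the body executes. */
    public void runActivity(Runnable body) {
        activityCount++; // counted as soon as the activity starts
        body.run();      // nested runActivity calls increment the same counter
    }
}
```

Running the scenario from the text (Activity 1 calls 2, 2 calls 3, then 1 calls 2 again) with this sketch leaves the counter at 4, matching the reading described above.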

Bytes read from database Storage Streams (uncompressed)

Property Name: .pxDBInputBytes

This reading tracks the size in bytes (after decompression) of all the data that has been
read in from the Storage Stream column (pzPVStream) in the database.

Bytes written to database Storage Streams (uncompressed)

Property Name: .pxDBOutputBytes

This reading tracks the size in bytes (before compression) of all the data that has been
written to the Storage Stream column (pzPVStream) in the database.

Connects executed

Property Name: .pxConnectCount

This is a Count reading that measures the number of times an Integration Services
connection (Rule-Connect- instance) is attempted to an external system.

CPU time accessing non-rule-resolved instances from the database

Property: .pxOtherIOCPU

This reading measures the CPU time spent when Process Commander performs a
database operation (such as Insert, Select, or Delete) for all non-rule-resolved instances.
This does NOT include rule-resolved Opens (which are covered by Rule-resolved rules
requested from database or cache), but DOES include Rules that are NOT rule-resolved,
OR rule-resolved Rules that are opened by handle instead of by using Open. This also
does not include RDB calls (providing custom SQL calls).

CPU time checking Java syntax

see Java Syntax Readings

CPU time compiling Rules

see Java Compilation Readings

CPU time executing Declarative Rules

see Declarative Rules Executed

CPU time performing Rules assembly

see Rule Assembly Readings

CPU time processing Storage Streams for non-Rule database lists

Property: pxOtherBrowseFilterCPU

When a request is made for data from the Storage Stream column in the database
(pzPVStream), but only a subset of the returned properties is requested, the list that is
returned is "processed" (filtered) to show only the requested properties. This statistic
tracks the amount of CPU time spent filtering the requested non-Rule database lists.

CPU time processing Storage Streams for Rule database lists

Property: pxRuleBrowseFilterCPU

When a request is made for data from the Storage Stream column in the database
(pzPVStream), but only a subset of the returned properties is requested, the list that is
returned is "processed" (filtered) to show only the requested properties. This statistic
tracks the amount of CPU time spent filtering the requested Rule database lists.

CPU time retrieving Declarative Rules

see Declarative Rules Lookup

CPU time retrieving non-Rule database lists

Property: pxOtherBrowseCPU

This reading measures the amount of CPU time spent generating lists of objects which are
not Rules (instances of classes that do not descend from Rule-). (For example, an Obj-
List call could generate a list of Data- instances.) NOTE: These lists may or may not
include the Storage Stream data.

CPU time retrieving Rule database lists

Property: pxRuleBrowseCPU

This reading measures the amount of CPU time spent generating lists of Rules (instances
of classes that descend from Rule-). (For example, an Obj-List call could generate a list
of Rule-Obj-Activity instances.) NOTE: These lists may or may not include the Storage
Stream data.

CPU time retrieving rule-resolved Rules from the database

Property: pxRuleCPU

This reading measures CPU time spent opening rules for the Rule Resolution process
(getting all versions of the Rules, and going through the steps to determine the best Rule
to use). This includes CPU time spent opening Rules for a RuleSet Context.

Database time threshold

Property: pxDBThreshold

This reading displays the suggested threshold (in milliseconds) for the time that one
database operation should consume. If database operations are exceeding this threshold,
they should be scrutinized for efficiency.

This threshold is set in the pegarules.xml file; the default is a half-second (500
milliseconds).

Declarative Indexes written

Property: pxIndexCount

This reading tracks the number of Index- instances that have been created by Declarative
Indexing. Having a large number of Declarative Indexes defined can adversely affect
performance, so if this value is high, evaluate all the Declarative Index definitions to make
sure they are designed as efficiently as possible.

Declarative Rules executed

see Declarative Rules Executed

Declarative Rules Executed

These readings track the processing of the Declarative Rules that were actually fired as a
result of a property change. (Note that the Tracked Property Changes measurement
actually tracks those property changes.)

If these numbers are large, this may indicate that large numbers of Declarative Rules are
being fired, which may point to a design issue in creating Declaratives. (For example,
there may be a calculation based off customer zip code. If this calculation is fired every
time the customer’s name changes, the Declarative Rule may need to be redesigned so
that the Rule fires only when the zip code changes, not when any customer address
information changes.)

These Rules include:

• Rule-Declare-Expressions
• Rule-Declare-Index
• Rule-Declare-Constraints
• Rule-Declare-OnChange
• Rule-Declare-Trigger
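The firing behavior described above (a rule fires only when one of its declared input properties changes) can be sketched generically. This is an illustrative model only, in plain Python rather than Pega rule syntax; the `DependencyNetwork` class, the property names, and the tax-rate calculation are invented for the example:

```python
# Illustrative sketch (not Pega syntax): a declarative target that is
# recomputed only when one of its declared input properties changes.

class DependencyNetwork:
    def __init__(self):
        self.rules = {}   # target property -> (input properties, compute fn)
        self.fired = 0    # analogous to pxDeclarativeRulesInvokedCount

    def declare(self, target, inputs, compute):
        self.rules[target] = (set(inputs), compute)

    def on_change(self, page, prop):
        # Fire only the rules whose *inputs* include the changed property.
        for target, (inputs, compute) in self.rules.items():
            if prop in inputs:
                page[target] = compute(page)
                self.fired += 1

net = DependencyNetwork()
customer = {"ZipCode": "02142", "Name": "Acme"}
net.declare("SalesTaxRate", ["ZipCode"],
            lambda p: 0.0625 if p["ZipCode"].startswith("0") else 0.07)

net.on_change(customer, "Name")     # name change: rule does not fire
net.on_change(customer, "ZipCode")  # zip change: rule fires once
```

A rule declared on the customer name as well as the zip code would fire on both changes, which is the redesign problem the text describes.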

CPU time executing Declarative Rules

Property: pxDeclarativeRulesInvokedCPU

This reading tracks the amount of CPU time spent executing the Declarative rules.

Elapsed time executing Declarative Rules

Property: pxDeclarativeRulesInvokedElapsed

This reading tracks the amount of total (elapsed) time spent executing the Declarative
rules.

Declarative Rules executed

Property: pxDeclarativeRulesInvokedCount

This reading shows the number of times that any Declarative rules are invoked, or that a
chaining rule was executed (an Expression was computed, a Constraint was run, an
OnChange Activity was run).

Declarative Rules Lookup

These readings measure the number of times that there is an attempt to look up
Declarative Rules from the database, in order to build the Dependency Network for a
particular class. The Declarative Rules include:

• Rule-Declare-Expressions
• Rule-Declare-Constraints
• Rule-Declare-OnChange
• Rule-Declare-Trigger
• Rule-Declare-Index

NOTE: These readings measure the lookup of the Rules for the Dependency Network,
not the actual building of that Network.

If these readings indicate that Declarative Rules are being looked up during operations
where the developer doesn’t believe that Declarative Rules were invoked, these readings
may assist in tracking down that information.

CPU time retrieving Declarative Rules

Property: pxDeclarativeRulesLookupCPU

This reading measures the CPU time spent looking up all required declarative rules to
build the Dependency Networks.

Elapsed time retrieving Declarative Rules

Property: pxDeclarativeRulesLookupElapsed

This reading measures the elapsed (total) time spent looking up all required Declarative
rules to build the Dependency Networks.

Declarative Rules retrieved from the database

Property: pxDeclarativeRulesLookupCount

This reading counts the number of times an individual dependency network was built (the
number of times a list of Declarative Rules was created for a specific class).

Declarative Rules retrieved from the database

see Declarative Rules Lookup

Edit Rules executed

Property Name: pxRunOtherRuleCount

This reading displays the number of Edit rules (Rule-Edit-Input and Rule-Edit-Validate)
which have been invoked by this requestor.

Elapsed Time accessing non-rule-resolved instances from the database

Property: pxOtherIOElapsed

This reading measures the elapsed (total) time spent when Process Commander
performs a database operation (such as Delete, Save, or Open) for all non-rule-resolved
instances.

NOTES: This reading does not include:

• rule-resolved Opens (which are covered by Rule-resolved rules requested from
database or cache)
• RDB calls (providing custom SQL calls for reports)

This reading does include:

• Rules that are not rule-resolved
• rule-resolved Rules that are opened by handle instead of by using Open

Elapsed Time checking Java syntax

see Java Syntax Readings

Elapsed Time compiling Rules

see Java Compilation Readings

Elapsed Time executing Connect Rules

Property Name: pxConnectElapsed

This reading tracks the amount of elapsed (total) time spent waiting for a response from
an external system (time spent during Rule-Connects).

COMMENTS: There is no ConnectCPU PAL reading because it is not possible for
PegaRULES to measure the CPU time spent in a separate system (PegaRULES can’t tell
whether the time spent waiting for the other system is CPU/processing time, or just wait
time).

Elapsed Time executing Declarative Rules

see Declarative Rules Executed

Elapsed Time performing Rule assembly

see Rule Assembly Readings

Elapsed Time processing Storage Streams for non-rule database lists

Property: pxOtherBrowseFilterElapsed

When a request is made for data from the Storage Stream column in the database
(pzPVStream), but only a subset of the returned properties is requested, the list that is
returned is "processed" (filtered) to show only the requested properties. This statistic
tracks the amount of elapsed time spent filtering the requested non-Rule database lists.

Elapsed Time processing Storage Streams for Rule database lists

Property: pxRuleBrowseFilterElapsed

When a request is made for data from the Storage Stream column in the database
(pzPVStream), but only a subset of the returned properties is requested, the list that is
returned is "processed" (filtered) to show only the requested properties. This statistic
tracks the amount of elapsed time spent filtering the requested Rule database lists.

Elapsed Time retrieving Declarative Rules

see Declarative Rules Lookup

Elapsed Time retrieving non-Rule database lists

Property: pxOtherBrowseElapsed

This reading measures the amount of elapsed time spent generating lists of objects which
are not Rules (instances of classes that do not descend from Rule-). (For example, an
Obj-List call could generate a list of Data- instances.) NOTE: These lists may or may not
include the Storage Stream data.

Elapsed Time retrieving Rule database lists

Property: pxRuleBrowseElapsed

This reading measures the amount of elapsed time spent generating lists of Rules
(instances of classes that descend from Rule-). (For example, an Obj-List call could
generate a list of Rule-Obj-Activity instances.) NOTE: These lists may or may not
include the Storage Stream data.

Elapsed Time retrieving rule-resolved Rules from the database

Property: pxRuleIOElapsed

This reading measures the elapsed time spent opening rules for the Rule Resolution
process (getting all versions of the Rules, going through steps to determine best Rule for
use). Includes time spent opening Rules for a RuleSet Context.

Elapsed Time writing to the database

Property Name: pxCommitElapsed

This reading tracks the amount of time spent committing data to the database. There are
two types of commit (both in the Obj-Save method) – immediate and deferred. This
reading will track the time spent for both types of commit.

Flows executed

Property Name: pxFlowCount

This is a Count reading that measures the number of times Flows are started by a given
Requestor. This count is incremented when any Flow has started. Therefore, Flow 1 may
call Flow 2, which calls Flow 3; then in another step, Flow 1 calls Flow 2 again. At the
completion of Flow 1, the Flow Count reading will be 4.

NOTE: If an exception occurs during the execution of the Flow, the count is not backed
out.
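The counting behavior in the Flow 1 / Flow 2 / Flow 3 example above can be sketched as follows. This is an illustrative model only; the `run_flow` helper is invented for the sketch and is not a Process Commander API:

```python
# Illustrative sketch: a per-requestor counter incremented each time a
# flow starts, mirroring how pxFlowCount accumulates across nested calls.

flow_count = 0   # analogous to pxFlowCount

def run_flow(name, subflows=()):
    global flow_count
    flow_count += 1            # incremented when any flow starts
    for sub in subflows:
        run_flow(*sub)         # nested flows count too

# Flow 1 calls Flow 2 (which calls Flow 3), then calls Flow 2 again:
# Flow 1 + Flow 2 + Flow 3 + Flow 2 = 4 starts.
run_flow("Flow 1", subflows=[("Flow 2", [("Flow 3",)]), ("Flow 2",)])
```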

HTML transaction frame count

Property: pxFrameTransactionMapCount

Part of the Process Commander functionality includes the ability to configure transactional
forms, with synchronization tokens (see Knowledgebase article #10979, “How to
Configure Transactional Forms”). This counter measures the total number of HTML
frames which are tracked as transactional for this requestor.

HTML transaction frame map size (bytes)

Property: pxFrameTransactionMapSize

For the HTML frames which are being tracked as transactional, this counter measures the
total number of bytes stored for these transactional frames in this requestor.

Java compilations

see Java Compilation Readings

Java steps executed

Property Name: pxJavaStepCount

This reading shows the number of custom Java steps called during the running of the
application. PegaRULES Best Practice recommends that the number of custom Java
steps in an application be low.

Java syntax checks

see Java Syntax Readings

Java Compilation Readings

These readings measure the number of times that a pre-assembled Java class could
not be found for a Rule, so a Java class was generated and compiled for this requestor.
In an active production system, the already-assembled Java class for a Rule should
almost always be found by the system, so this number should be low – ideally zero.

If this number is non-zero, Rule assemblies are occurring. If this is the first time that
someone might have referenced a particular rule, a non-zero value for this reading is
acceptable. However, the higher this number is, the more Rule Assemblies are being
done. If the developer believes that Rules are being assembled which were previously
used (and should thus have already been assembled), there may be a situation where the
user population has many varied RuleSet Lists (which would prevent re-use of previously-
assembled Rules).

NOTE: These readings measure the compilation time only for the generated Java code,
not the assembly time required to generate the Java code. The assemble readings are
measured separately (see Rule Assembly Readings).

Rules Assembly can have a significant negative impact on the application response time.

CPU time compiling Rules

Property Name: pxJavaCompileCPU

This reading tracks the CPU time spent compiling generated Java classes for Rule
Assembly.

Elapsed time compiling Rules

Property Name: pxJavaCompileElapsed

This reading tracks the amount of elapsed (total) time spent compiling generated Java
Classes for Rule Assembly.

Java compilations

Property Name: pxJavaCompileCount

This count-type reading keeps track of the number of times that Java Classes are
compiled for Rules Assembly.

NOTE: If there is an exception during compilation, the class is not built, but the counter
remains incremented.

Java Syntax Readings

Java Syntax validation occurs when a Rule is saved. The time required to assemble the
generated Java code is tracked in the Rule Assembly readings; after the code is
assembled, a preliminary compile is done to verify that under the RuleSet Context
(used at save time), the Java code compiles correctly. The Java Syntax readings
measure that preliminary compilation, done at design (save) time.

(NOTE: The actual Java code compilation for the generated code is done at assembly
time, immediately before execution of that code, and is tracked in the Java Compilation
readings.)

If this number is non-zero, changes to Rule assemblies are occurring. The higher this
number is, the more changes to assembled Rules are being done; the assumption would
be that development is occurring, which would create many changes to Rules. If
development is not occurring, then these values should be investigated to determine why
so many Rules are being changed.

CPU time checking Java syntax

Property Name: .pxJavaSyntaxCPU

This reading tracks the CPU time spent doing a preliminary compile on assembled Java
code, in order to verify that with that RuleSet Context, the code will compile correctly.

Elapsed time checking Java syntax

Property Name: .pxJavaSyntaxElapsed

This reading tracks the elapsed (total) time spent doing a preliminary compile on
assembled Java code, in order to verify that with that RuleSet Context, the code will
compile correctly.

NOTE: If an exception occurs when timing the Elapsed Time, the timer will not be stopped
(which will result in very large anomalous times).

Java syntax checks

Property Name: .pxJavaSyntaxCount

This reading measures the number of times a preliminary compile is performed on
assembled Java code, in order to verify that with that RuleSet Context, the code will
compile correctly.

NOTE: If there is an exception during the syntax check, the class is not built, but the
counter stays incremented.

Last Date and Time the Cache was Cleared

Property Name: pxDateTimeCleared

This reading is initialized when the cache is created, and shows the date/time for the last
time the database cache was cleared. Note that clearing the database cache also clears
the Rules Assembly Cache. While this is a system-level operation, it will cause Rules
Assembly to occur for all rules for each requestor. If a user suddenly sees Rules
Assembly happening after all Rules Assembly should have finished, check this reading
to see whether the cache was recently cleared.

Legacy-based Rules executed

Property: pxLegacyRuleAPIUsedCount

This count indicates how many total rules used in the process being measured are
implemented with versions of the Rule Assembly process which are not current.

First-Use Assembly architecture allows for evolution in construction of the Java code that
implements the rules. In order to support old APIs, PegaRULES incorporates different
evolutionary versions of assemblers. This functionality may be seen in some of the Rule
forms, where the API version dropdown box allows a choice between version 2.0 and
3.2. Every Rule where the API version is set to 2.0 is counted in this reading.

Models executed

Property Name: pxRunModelCount

This reading counts the number of times models (instances of Rule-Obj-Model) are
applied for the given requestor.

Non-rule-resolved instances accessed from the database

Property: pxOtherIOCount

This reading keeps track of the number of times that Process Commander performs a
database operation (such as Delete, Save, or Open) for all non-rule-resolved instances.

NOTES: This reading does not include:

• rule-resolved Opens (which are covered by Rule-resolved rules requested from
database or cache)
• RDB calls (providing custom SQL calls for reports)

This reading does include:

• Saves and deletes (all requests, not just reads)
• Rules that are not rule-resolved
• rule-resolved Rules that are opened by handle instead of by using Open

The inclusion of saves and deletes (as well as Opens) means that this number may be
higher than the number of requests (the next definition).

Non-rule-resolved instances requested from database or cache

Property: pxOtherCount

This reading measures the number of times a non-rule-resolved instance is requested in
the system (whether the instance is found in the cache or in the database).

Non-rule-resolved instances retrieved from the cache

Property: pxOtherFromCacheCount

When a non-rule-resolved instance is requested, this reading tracks the number of times
where the result comes from the cache.

Number of database requests that exceeded the time threshold

Property: pxDBOpExceedingThresholdCount

This reading shows the number of database requests which took longer than the threshold
limit set in the Database time threshold setting. If this number is significant, the
efficiency of database operations should be reviewed.
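The relationship between this counter and the Database time threshold can be sketched as a simple timing check. This is illustrative only; the helper and constant names are invented, not the PegaRULES implementation:

```python
# Illustrative sketch: time each database operation and count the ones
# that exceed the configured threshold (default 500 ms in the text).
import time

DB_TIME_THRESHOLD_MS = 500   # analogous to pxDBThreshold
exceeding_count = 0          # analogous to pxDBOpExceedingThresholdCount

def timed_db_op(op):
    """Run a (simulated) database operation; count it if it is too slow."""
    global exceeding_count
    start = time.monotonic()
    result = op()
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if elapsed_ms > DB_TIME_THRESHOLD_MS:
        exceeding_count += 1
    return result

timed_db_op(lambda: "fast query")       # well under the threshold
timed_db_op(lambda: time.sleep(0.6))    # simulated slow operation (600 ms)
```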

Number of Input Bytes Received by the Server

Property Name: pxInputBytes

This reading measures the uncompressed size (in bytes) of the HTML data sent to the
PegaRULES servlet (PRServlet) on the server from the client requestor (user’s browser)
via HTTP.

Number of Named Pages created

Property Name: pxPagesNamed

This reading counts the number of named pages currently existing on the Clipboard for the
specified Requestor.

Number of Output Bytes sent from the server

Property Name: pxOutputBytes

This reading measures the uncompressed size (in bytes) of the HTML and XML data sent
from the PegaRULES servlet (PRServlet) on the server to the user’s browser (the client or
requestor) via HTTP. This does not include data sent as static content, such as jpeg files,
Javascript, and .gif images. (Static content is tracked at the system level using the
PRMonitorServlet.)

Number of Server Interactions

Property Name: pxInteractions

This reading counts the number of requests made to the server by this requestor since the
last time PAL was cleared/reset. The reading includes all requests in this session for this
requestor.

NOTE: PAL itself creates a request to the server to gather the data, so the application
count for this reading is actually one less than what is shown. (For example, if there were
12 requests to the server, and PAL is displayed, the counter will show 13 requests.)

The interaction number shown is the number of the interaction associated with the
current PAL reading.

Obj-List requests to the Database

PegaRULES functionality includes the ability to query the database for information. There
are two main types of database SQL query:

• Obj-List
• RDB-List

Obj-List allows the developer to specify the conditions of their query in higher-level
language, using the PegaRULES objects (example: Return a list of all work items where
the Customer is Acme Corp.). PegaRULES then translates this request into SQL and
queries the database.

To increase storage efficiency in the database, and to allow a relational database to store
embedded pages, PegaRULES uses a Storage Stream column (pzPVStream). For most
PegaRULES tables, only a few properties are exposed in the database table columns;
most of the information is stored in the Storage Stream column (also referred to as the
BLOB), which is compressed for further efficiency. When information is queried from the
database and must be returned from the Storage Stream, uncompressing and filtering this
information has a high resource cost. It is vital to the performance of the system that
queries be designed carefully, to prevent unnecessary extraction of information from the
Storage Stream. (For example, if a particular property is requested frequently from the
Storage Stream, it may markedly increase performance to expose that column in the
database, so the Storage Stream does not need to be decompressed and filtered to get
that one property. DBTrace can be used to identify the column referenced in the Storage
Stream.)
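The cost difference between an exposed column and the Storage Stream can be illustrated with a toy row layout. This is plain Python with zlib compression; the column names mirror the text, but the storage format shown is invented for the example and is not the actual PegaRULES schema:

```python
# Illustrative sketch (not the PegaRULES implementation): reading an
# exposed column is a direct access, while reading one property from the
# Storage Stream means decompressing and parsing the whole blob.
import json
import zlib

# A row holds a few exposed columns plus a compressed "pzPVStream" blob.
work_item = {
    "pyID": "W-1",
    "pzPVStream": zlib.compress(
        json.dumps({"Customer": "Acme Corp", "Amount": 125.00}).encode()),
}

def read_exposed(row, column):
    return row[column]                      # cheap: direct column access

def read_from_stream(row, prop):
    # Expensive: decompress and parse the entire blob for one property.
    page = json.loads(zlib.decompress(row["pzPVStream"]))
    return page[prop]

read_exposed(work_item, "pyID")             # no Storage Stream needed
read_from_stream(work_item, "Customer")     # decompress + filter
```

Exposing a frequently requested property as its own column turns the second access pattern into the first.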

Did not require a Storage Stream

Property: pxListWithoutStreamCount

This reading tracks all Obj-List queries where no data was requested from the Storage
Stream.

Required all of the Storage Stream

Property: pxListWithUnfilteredStreamCount

This reading tracks all Obj-List queries where all the data which was requested was from
the Storage Stream. Although queries to the Storage Stream are less efficient, when all
the data is used, it would not be practical to expose all the columns; therefore, queries
using all the information are considered to be as efficient as possible in this architecture.

Required some of the Storage Stream

Property: pxListWithFilteredStreamCount

This reading tracks all Obj-List queries where some data was requested from the Storage
Stream. These queries are considered the least efficient, as the data from the Storage
Stream must be extracted, decompressed, and then filtered to return only the properties
that were requested.

Procedural Rules retrieved from the database

Property: pxProceduralRuleReadCount

This reading tracks the number of rule-resolved Rules read from the database, excluding
the Declarative and Property Rules.

Property Rules retrieved from the database

Property: pxPropertyReadCount

This reading tracks the number of Rule-Obj-Property Rules read from the database.

RDB-List requests to the Database

PegaRULES functionality includes the ability to query the database for information. There
are two main types of database SQL query:

• Obj-List
• RDB-List

RDB-List allows the developer to specify the conditions of their query directly in SQL.
These queries are made through Rule-Connect-SQL instances.

(For background on the Storage Stream column (pzPVStream), and why extracting and
filtering data from it is costly, see Obj-List requests to the Database.)

Did not require a Storage Stream

Property: pxRDBWithoutStreamCount

This reading tracks all RDB-List queries where no data was requested from the Storage
Stream.

Required the Storage Stream

Property: pxRDBWithStreamCount

This reading tracks all RDB-List queries where data was requested from the Storage
Stream.

Requestor Clipboard Size (bytes)

Property: pxEstimatedRequestorDataSize

This reading tracks the estimated cumulative size of the Requestor's clipboard pages.
(This includes the clipboard pages in the Requestor's Threads.)

NOTE: This statistic will only have a value if the "Add Reading with Clipboard Size" link is
clicked; clicking "Add Reading" will not affect this stat.

Rows Returned from Obj-List Requests

PegaRULES functionality includes the ability to query the database for information. There
are two main types of database SQL query:

• Obj-List
• RDB-List

One Obj-List database query could result in a great many rows returned from the
database. It is important to track the amount of information being requested, as very large
requests will materially slow down performance of the system.

(For background on the Storage Stream column (pzPVStream), and why extracting and
filtering data from it is costly, see Obj-List requests to the Database.)

Did not require a Storage Stream

Property: pxListRowWithoutStreamCount

This reading tracks the number of records (rows) returned by Obj-List queries which did
not require the Storage Stream.

Required all of the Storage Stream

Property: pxListRowWithUnfilteredStreamCount

This reading tracks the number of records (rows) returned by Obj-List queries which
required all the information from the Storage Stream.

Required some of the Storage Stream

Property: pxListRowWithFilteredStreamCount

This reading tracks the number of records (rows) returned by Obj-List queries which
required some data from the Storage Stream. These queries are considered the least
efficient, as the data from the Storage Stream must be extracted, decompressed, and then
filtered to return only the properties that were requested.

Rows Returned from RDB-List Requests

PegaRULES functionality includes the ability to query the database for information. There
are two main types of database SQL query:

• Obj-List
• RDB-List

One RDB-List database query could result in a great many rows returned from the
database. It is important to track the amount of information being requested, as very large
requests will materially slow down performance of the system.

(For background on the Storage Stream column (pzPVStream), and why extracting and
filtering data from it is costly, see Obj-List requests to the Database.)

Did not require a Storage Stream

Property: pxRDBRowWithoutStreamCount

This reading tracks the number of records (rows) returned by all RDB-List queries where
no data was requested from the Storage Stream.

Required the Storage Stream

Property: pxRDBRowWithStreamCount

This reading tracks the number of records (rows) returned by all RDB-List queries where
data was requested from the Storage Stream.

Rule Assembly Readings

These readings measure the number of times that a pre-assembled Java class could
not be found for a Rule, so Rules Assembly was invoked, and a Java class was generated
for this requestor. In an active production system, the already-assembled Java class
should almost always be found by the system for a Rule, so this number should be low –
ideally zero.

If this number is non-zero, Rule assemblies are occurring. If this is the first time that
someone might have referenced a particular rule, a non-zero value for this reading is
acceptable. However, the higher this number is, the more Rule Assemblies are being
done. If the developer believes that Rules are being assembled which were previously
used (and should thus have already been assembled), there may be a situation where the
user population has many varied RuleSet Lists (which would prevent re-use of previously-
assembled Rules).

NOTE: These readings measure the time required to assemble the rules and generate
the Java code, not to compile that code. The compilation readings are measured
separately (see Java Compilation Reading). Also, not all Rule Assemblies require a
compile (see the Rules Assembly tech note for full details).

Rules Assembly can have a significant negative impact on the application response time.
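The cache-reuse point above can be sketched as a lookup keyed by rule and RuleSet list. This is illustrative only; the cache structure and the names used are invented, not Pega internals:

```python
# Illustrative sketch: an assembly cache keyed by (rule, RuleSet list).
# Requestors with different RuleSet lists cannot reuse each other's
# assembled classes, so each distinct list triggers a fresh assembly.

assembly_cache = {}
assemble_count = 0           # analogous to pxJavaAssembleCount

def get_assembled(rule, ruleset_list):
    global assemble_count
    key = (rule, tuple(ruleset_list))
    if key not in assembly_cache:
        assemble_count += 1                      # cache miss: assemble
        assembly_cache[key] = "Generated_" + rule
    return assembly_cache[key]

get_assembled("MyActivity", ["Alpha:01-01", "Pega-ProCom:04-02"])
get_assembled("MyActivity", ["Alpha:01-01", "Pega-ProCom:04-02"])  # reused
get_assembled("MyActivity", ["Beta:01-01", "Pega-ProCom:04-02"])   # new list
```

The same rule assembled under two different RuleSet lists counts twice, which is why widely varied RuleSet lists across the user population keep this reading high.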

CPU time performing Rule assembly

Property Name: .pxJavaAssembleCPU

This reading tracks the CPU time spent assembling rules and generating the Java classes
for Rule Assembly.

Elapsed time performing Rule assembly

Property Name: .pxJavaAssembleElapsed

This reading tracks the amount of elapsed (total) time spent assembling rules and
generating the Java Classes for Rule Assembly.

Rule assemblies

Property Name: .pxJavaAssembleCount

This is a Count reading that measures the number of times that a Java class was
generated (assembled) for this requestor.

NOTE: If there is an exception during assembly, the class is not built, but the counter
stays incremented.

Rule assemblies

see Rule Assembly Readings

Rule-resolved rules requested from database or cache

Property: pxRuleCount

This reading keeps track of the number of times an attempt is made to open a Rule during
Rule-Resolution processing (for any class descended from Rule- which use Rule
Resolution). This would include Rules which are inlined as part of the Rules Assembly
process - the counter will be incremented once for each Rule that has been inlined - and
Rules opened using a RuleSet Context (see the Rule Referencing document for details on
RuleSet Context). NOTE: This does NOT include rule-resolved Rules that are opened by
handle (not often done - usually Workbench or editors or Class Explorer).

Rule-resolved Rules retrieved from the cache

Property: pxRuleFromCacheCount

This reading measures the number of requested rule-resolved Rules found in the cache.
NOTE: The cached "not found" instances will increment this counter.

Rules executed

Property: pxRulesExecuted

This reading counts the number of Rules generated by Rules Assembly that are executed.
Only the Rules Assembly rules that are executed are included in the count (not rules that
are inlined into that code).

Services executed

Property Name: pxServiceCount

This reading counts the number of times a Rule-Service- rule is activated from outside the
system.

Streams executed

Property Name: pxRunStreamCount

This reading counts the number of times Streams (for example, instances of Rule-Obj-
HTML, Rule-Obj-Corr, Rule-Obj-XML, and Rule-Obj-JSP) are displayed for the requestor.

System Cache Status

Property: pxCacheEnabled

This reading is a boolean value which reports whether the database cache is enabled
(“On” or “Off”). If this value is set to Off, no values in the system will be cached.

It is not recommended to ever turn the cache off! If caching is turned off, every
request goes directly to the database, causing a great deal of very expensive I/O
processing. Although the other caches (such as the Rules Assembly cache and the
Dictionary cache) are not themselves turned off, they are built from the database
cache, so they are also affected.

Time of this PAL Reading

Property Name: pxPALWall

This reading reports the date and time of the PAL report request, and is displayed in GMT.

Time the Requestor Started

Property Name: pxRequestorStart

This reading reports the login date/time of the requestor.

Total CPU time for the reading(s)

Property Name: .pxTotalReqCPU

This reading measures the amount of time this requestor spent consuming CPU (up to the
point of the reading itself) for the particular action measured – either HTTP requests (from
client machine, using a browser) or Service requests from external systems.

Note that this is not an aggregate of the CPU counters, but a measurement of the CPU
time spent for the action. The specific CPU counters will not necessarily add up to this
number.

Total Elapsed time for the reading(s)

Property Name: .pxTotalReqTime

This reading measures the elapsed amount of processing time this requestor spent (up to
the point of the reading itself) for the particular action measured – either HTTP requests
(from client machine, using a browser) or Service requests from external systems.

Note that this is not an aggregate of the elapsed counters, but a measurement of the
elapsed time spent for the action. The specific elapsed counters will not necessarily add
up to this number.

Total Number of Rules Executed

Property: pxRulesUsed

This reading counts how many rules actually get executed during processing. Every time
a requestor requests a rule to run, this counter increases by the total number of dependent
rules that the requested rule has used (for example, an activity calls two properties, each
of which runs an HTML stream, which may again call one of those properties, which
needs a model, and so on).

Tracked property changes

Property Name: .pxTrackedPropertyChangesCount

The Declarative Processing subsystem checks every property whose values have
changed during processing, to see if any of these properties were used in any Declarative
Processing rules. If any of the changed properties were used in a Declarative Rule, that
property is tracked, to appropriately execute the affected Declarative Rules. This reading
counts those tracked properties.

If this reading shows hundreds (or even thousands) of Tracked Property Changes for one
interaction, investigate the defined Declarative rules. A common reason this number is
high is that a developer has defined a Declarative rule on a frequently-used property
(perhaps one defined on the class @baseclass or on Work-). It might be more efficient to
define the Declarative rule on a more specific version of this property.

In addition, this reading might be high if the Declarative rule will be calculated when
referenced (as opposed to on change). In this case, the tracked property will be the
Target property of the Declarative rule – not the input properties. If the Target property is
frequently used (such as the pxObjClass property), that will create a very high reading.
(See the Declarative Processing Tech Note for full details on how to correctly define
Declarative rules.)
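The tracking decision described above can be sketched as a simple membership test. This is illustrative only; the property names are invented, and a real dependency network is per-class and far richer than a flat set:

```python
# Illustrative sketch: only properties that appear as inputs to some
# declarative rule are tracked when their values change.

declarative_inputs = {"ZipCode", "OrderTotal"}   # inputs of defined rules
tracked_changes = 0    # analogous to pxTrackedPropertyChangesCount

def property_changed(prop):
    global tracked_changes
    if prop in declarative_inputs:
        tracked_changes += 1   # tracked so the affected rules can fire

# Five property changes in one interaction; only two are tracked.
for prop in ["Name", "ZipCode", "Street", "OrderTotal", "Phone"]:
    property_changed(prop)
```

Declaring a rule on a ubiquitous property (the @baseclass example in the text) effectively puts that property in the tracked set for every change, which is what drives this reading into the hundreds or thousands.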

V4 Instances Read
Property: pxInstancesReadWithV4Stream

This reading tracks the number of instances read from the database with a V4 Storage
Stream, which was the default storage type for Version 03-02 and earlier releases.

V5 Instances Read
Property: pxInstancesReadWithV5Stream

This reading tracks the number of instances read from the database with a V5 Storage
Stream, which is the default storage type for Versions 4.1 and 4.2.

V6 Instances Read
Property: pxInstancesReadWithV6Stream

This reading tracks the number of instances read from the database with a V6 Storage
Stream, which was a new storage type available in Version 4.2 that improves resource
optimization.

Whens executed

Property Name: pxRunWhenCount

This reading displays the number of When rules (Rule-Obj-When) which have been
invoked by this requestor.
