
Oracle R12 E-Business Standard Benchmark Overview

1. Benchmark Structure
The Oracle R12 E-Business Standard Benchmark follows the E-Business 11i (11.5.10) benchmark model by combining online
transaction execution by simulated users with concurrent batch processing to model a "Day-in-the-Life" scenario for a global
enterprise. Initially, benchmark kits will be offered in 'Extra-Small,' 'Small,' and 'Medium' sizes, with a 'Large' kit to follow.

Partners can elect to execute the standard benchmark, which includes both the OLTP and batch workloads, or a partial benchmark
consisting of just OLTP execution (potentially for WAN testing) or just batch execution (which avoids the need for load-driving systems).
2. Benchmark Execution
Partners executing this kit can obtain instructions, scripts, and a copy of an expanded Oracle database with sample data. They
provide their own hardware, diagnostic and analytic tools, and load-driving system. Typically, partners tune the online transaction
execution and the two batch jobs separately before attempting an auditable run. Note that Oracle may be able to bundle the OATS
(Oracle Application Testing Suite) load tool with the kit in the future.

The run starts with the ramp-up of the simulated online transaction users. When the desired number of users is up and running in a
stable fashion, the Order-to-Cash batch job is started, which initiates data sampling. The Payroll batch job is started no earlier than
thirty minutes after the Order-to-Cash job begins. Sixty minutes (one hour) after the Order-to-Cash job starts, all batch execution
must be complete and the data sampling period ends. The online users are then ramped down. This timeline is subject to change as
concurrency issues and checkpoints are evaluated; changes would be communicated through the benchmark workgroup.
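
As an illustration only, the run timeline can be sketched as a small orchestration script. The helper callables (ramp_up_users, start_order_to_cash, and so on) are hypothetical placeholders, not part of the benchmark kit:

    import time

    # Minimal sketch of the 'Day-in-the-Life' run timeline; all helper
    # callables are hypothetical stand-ins for the kit's actual scripts.
    def run_benchmark(ramp_up_users, start_order_to_cash, start_payroll,
                      batch_is_complete, ramp_down_users):
        ramp_up_users()                    # bring simulated users to a stable state
        t0 = time.monotonic()
        start_order_to_cash()              # T+0: data sampling begins
        time.sleep(30 * 60)
        start_payroll()                    # no earlier than T+30 minutes
        deadline = t0 + 60 * 60            # all batch work must finish by T+60 minutes
        while not batch_is_complete():
            if time.monotonic() >= deadline:
                raise RuntimeError("batch execution exceeded the one-hour window")
            time.sleep(10)
        ramp_down_users()                  # sampling period has ended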

This timeline is modified appropriately for the execution of a partial benchmark.
3. Submitting Results
Partners are required to submit the recorded OLTP response times and the batch execution times, along with detailed supporting
information, for auditing by Oracle and by an independent third-party auditor. Logs, audit-script output, and similar artifacts are
collected to ensure transparency and reproducibility. Disclosure of tuning actions can also directly benefit Oracle's customer base.
4. R12 vs. 11.5.10
For continuity and trending, differences between the R12 benchmark and its 11.5.10 predecessor have been kept to a minimum.
However, partner input and Oracle directives have led to a few noteworthy updates.

Four of the roughly 30 OLTP transactions have been updated from 'Forms-based' to 'Web-based,' and three of the batch processes
that had been single-threaded only are now multi-thread capable.

11i (11.5.10) runs were set up for 'maximum user load' (100%), '90% user load,' and '70% user load.' For R12, these are replaced
with 'maximum user load' (100%), '50% user load,' and 'single-user' reference times. The single-user reference time is optional in
benchmark submissions until OATS can collect this data.
5. Posting Results
Once a partner's results have passed audit, they will be summarized in a report to be posted on Oracle's website. The report
highlights the specific results and provides supporting information about the workload, hardware implementation, system utilization,
software versions, tuning steps and so forth. Proper disclosure of how the results were achieved is essential to credibility in the
marketplace.
6. Performance Claims
Oracle recognizes that partners invest substantial resources in undertaking the R12 E-Business Standard Benchmark and will wish
to share their accomplishments with the marketplace. Nevertheless, Oracle policy is to maintain a level playing field and to insist on
civil discourse and technical transparency in performance claims. Performance claims may be made according to the following rules.
Primary Metrics for Comparison
The primary metrics for EBS R12 benchmarks are listed below. They are the only metrics that may be published or used in fence
claims; no other benchmark performance metrics are allowed for claims or comparisons.
Number of Online Users
Number of Order Lines per Hour
Number of Checks per Hour
Minimum Data for Disclosure
The publication of a performance claim in marketing materials, or any other external publication, must include the corresponding
minimum data, listed below (an illustrative record follows the list). Anyone can reference any other published benchmark as long as
the minimum data is included in the reference and the claim meets the criteria set in the fence claim section.
EBS version, database version, and OS version
Server configuration - Processor chips, cores, threads, frequency, cache and memory
Primary benchmark metrics
Model size (comparisons of results of different size models are not allowed)
Date that the claim is validated, as an "As of" date. Note that this is not the date the benchmark was published, but the date
the claim was validated.
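
For illustration only, the minimum disclosure data might be captured in a record such as the following sketch; the field names are hypothetical and not mandated by the benchmark:

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical record of the minimum data required for a performance claim.
    @dataclass
    class MinimumDisclosure:
        ebs_version: str          # e.g. "R12"
        database_version: str
        os_version: str
        # Server configuration
        processor_chips: int
        cores: int
        threads: int
        frequency_ghz: float
        cache_mb: float
        memory_gb: float
        # Primary benchmark metrics
        online_users: int
        order_lines_per_hour: int
        checks_per_hour: int
        model_size: str           # e.g. "Medium"; cross-size comparisons are not allowed
        as_of: date               # date the claim was validated, not the publication date
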
Fence Claims
The intent is to document a process for making comparisons or leadership claims, with the goal of reducing incomplete or flawed
comparisons. Claims must reference only the primary metrics. Fence claims should be segmented into the following categories
(a small validation sketch follows the list):
Number of processor chips, cores or threads
Operating Systems
Windows
Linux
Unix
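
As an illustration only, these fence rules can be sketched as a simple check; the names below are hypothetical, and the benchmark defines no such API:

    # Hypothetical sketch of checking a claim against the fence rules.
    # RAC versus single instance is also treated as a fence (see the examples below).
    ALLOWED_FENCES = {"processor_chips", "cores", "threads",
                      "os",               # Windows, Linux, or Unix
                      "rac_vs_single_instance"}
    PRIMARY_METRICS = {"online_users", "order_lines_per_hour", "checks_per_hour"}

    def claim_is_allowed(metric, fences):
        # A claim may cite only a primary metric, qualified only by allowed fences.
        return metric in PRIMARY_METRICS and set(fences) <= ALLOWED_FENCES

    # e.g. a best-4-core order-lines-per-hour claim is allowed:
    assert claim_is_allowed("order_lines_per_hour", {"cores"})
    # Response time is not a primary metric, so this claim is not allowed:
    assert not claim_is_allowed("response_time", {"cores"})
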
Price Performance Claims
Price performance claims are not allowed because pricing information is not part of the eBS benchmark submission; such claims
would therefore lead to inaccurate or non-standardized price performance results.
Allowable Fence Claim Examples
A new record was set by achieving the highest Oracle eBS R12 large model size result
All three primary metrics must show the highest result in order for a vendor to make this claim
Best 4-core performance on Oracle eBS R12 medium kit.
Model size and benchmark version are required
Highest Oracle eBS R12 medium size result on Linux
This claim is valid because operating system is a fence category
No other Linux result of equal user count may be published for this claim to be used; if an equal result is published, the vendor
must use an additional fence, e.g., processor chips, cores, or threads
Per-core leadership on Oracle eBS R12 Payroll Batch medium model size
This claim is valid because core count is a fence category. It must show the checks per hour in the minimum data disclosed. The
number of workers/threads should be documented for batch comparisons if the amount differs between the results being compared.
Best Linux per processor/chip result on Oracle eBS R12 small model size
This claim is valid because operating system and processor chip are fence categories
Highest per-core Unix performance on Oracle RAC eBS R12 large model size
This claim is valid because RAC versus single-instance database is a fence category
Disallowed Fence Claim Examples
Best 8-core Oracle eBS R12 medium model size based on overall response time
Response time is not a primary metric and is therefore not allowable for publication.
Price performance leadership on Oracle eBS R12 small kit
Pricing data is not a metric in the eBS benchmark submission and is therefore not allowable for publication
Best overall Oracle eBS result, or Achieves new Oracle world record
Too broad a statement; it would require mention of kit size and version
Performance claims that extrapolate to higher user counts than audited
Claims can only be made on published results; no extrapolations or estimates can be published. For example, a system that ran
3,000 users at 45% utilization cannot be claimed capable of running 6,000 users at 90%.
eBS R12 benchmark results should not be compared to 11i or previous benchmark results.
RAC results should not be compared to single instance results.
Comparisons of performance results of different size benchmark kits are not allowed.
Highest users per processor chip at 500 users on the Oracle eBS R12 small model size claimed as leadership against a
competitor's 300 users per processor chip on the Oracle eBS R12 medium model size. This is not a valid or allowed claim
due to the workload differences between model sizes.

7. Oracle EBS Benchmark Workgroup Meetings
Workgroup conference calls will be arranged and hosted by Oracle at least quarterly. These meetings will be the focal point for
Oracle to share news about benchmark and process changes with the members, and for members to propose changes to the
overall benchmark process and to this document. Oracle will maintain a process for members to propose changes and may elect
to implement a change or to submit it to a vote by the members. Each company that is an active member of the workgroup will
have one vote.
