
Performance Tuning Point Solution

BUSINESS INFORMATION WAREHOUSE
Author: Vikash C Agrawal
Version: 1.0
Date of Issue: 3 May 2005
CONFIDENTIALITY
Wipro Limited, 2004, All Rights Reserved.
This document is proprietary to Wipro Technologies, a division of Wipro Limited. You may not modify, copy,
reproduce, republish, upload, post, transmit or distribute any material from this document, in any form or by
any means, nor may you modify or create derivative works based on the text of any file, or any part thereof for
public, private or commercial use, without prior written permission from Wipro Limited.


SAP Business Information Warehouse
Enterprise Application Solutions SAP Practice

Version Management:

Version   Date        Author(s)           Summary of changes and/or approver (where necessary)
1.0       3/5/2005    Vikash C Agrawal    Initial Document


Table of Contents
APPROACH TO PERFORMANCE PROBLEMS IN SAP BW
   Define
   Measure
   Analyze
   Improve
   Control
EXTRACTION & DATA LOAD PROBLEMS ANALYSIS
   Collection & Extraction
   Transfer
   Staging
   Transformation
   Loading
   Typical Scenario I: BAD Extraction & Loading Performance
   How to Identify High Extraction Time?
   How to Identify High Transfer Time?
   How to Identify High PSA Loading Time?
   How to Identify High Transformation Time?
   How to Identify High Data Load Time?
QUERY PERFORMANCE ANALYSIS
   Typical Scenario II: BAD Query Execution Performance
WEB APPLICATION PERFORMANCE ANALYSIS
   Typical Scenario III: BAD Web Application Execution Performance
   Front-End Performance Analysis
      Tools
      Settings
      Measurement
      Cache Analysis on Web Query Performance
   Analysis of the SAP system
   Determination and further analysis of the remaining time
   Back-end Response Time Analysis
PERFORMANCE OPTIMIZATION SUGGESTIONS / RECOMMENDATION
   A. GENERAL FACTORS
      1. Dimension tables
      2. MultiProviders
      3. Navigational Attributes
      4. Time Dependent Characteristics
      5. Non-Cumulative Key Figures
      6. Hardware Impact
      7. IT Landscape & Configuration
      8. Archiving
      9. Load Balancing
      10. Log Tables Reorganization
      11. Traces & Logs
   B. EXTRACTION IN SOURCE SYSTEM
      Pre-Analysis of the Problem
      1. Setting of Extractors
      2. Indices on DataSource tables
      3. Customer Enhancements
      4. Logistics Extractors
      5. LIS InfoStructures
      6. CO Extractors
   C. DATA LOAD PERFORMANCE
      1. Upload Sequence
      2. PSA Partition Size
      3. Parallelizing Upload
      4. Transformation Rules
      5. Export DataSource
      6. Flat File Upload
      7. Master Data Load - Parallel Master Data Load
      8. Master Data Load - Buffering Number Range
      9. Master Data Load - Change Run
      10. InfoCube Data Load - Dropping Indices Before Loading
      11. InfoCube Data Load - Buffering Number Range
      12. InfoCube Data Load - Compression Performance
      13. InfoCube Data Load - Roll-Up
      14. InfoCube Data Load - Change Run
      15. InfoCube Data Load - Request Handling
      16. ODS Data Load - ODS Objects Data Activation
      17. ODS Data Load - Data Activation Performance and Flag BEx Reporting
      18. ODS Data Load - Unique Records
      19. ODS Data Load - Request Handling
   D. QUERY PERFORMANCE
      1. Query Definition
      2. Virtual Key Figures / Characteristics
      3. Query Read Mode
      4. Reporting Format
      5. Indices
      6. Compression
      7. Aggregates
      8. Aggregate Block Size
      9. OLAP Engine - ODS Objects
      10. OLAP Engine - MultiProviders
      11. OLAP Cache
      12. Hierarchies
      13. Reporting Authorizations
   E. WEB APPLICATION PERFORMANCE
      1. Reporting Agent - Pre-Calculated Web Templates
      2. Web Application Definition - Web Items
      3. Web Application Definition - Stateless / Stateful Connection
      4. Web Application Definition - HTTP / HTTPS
      5. Caching / Compression - Portal iView Cache
      6. Caching / Compression - Compressing Web Applications and Using Browser Cache
      7. Network - Front-End Implications on Network Load
   F. DATABASE SPECIFIC PERFORMANCE
      1. Table Partitioning
      2. DB Statistics
      3. Disk Layout
      4. Raw Device / File System
   G. TOOLS TO BE USED
      1. Application Tools
      2. System Tools
APPROACH TO PERFORMANCE PROBLEMS IN SAP BW
Define
i. Define the problem.
ii. For example: a performance problem with a query on a specific Cube, or a problem in the extraction of data for a particular DataSource of a specific source system.
Measure
i. Which indicators / parameters are to be measured / collected to quantify the problem defined above?
ii. Try to be as objective as possible, but don't miss qualitative measures.
iii. For example: the current execution time for the query with a given set of input parameters is X seconds.

Analyze
i. Analyze the problem with the help of the collected indicators, system tools and application tools in order to identify the cause of the problem.
ii. Compare the present indicators with available benchmarks, if any.
Improve
i. Take the corrective action.
ii. Check the improvement in performance. It can be measured using the indicators collected earlier, coupled with qualitative observations.
Control
i. Provide action points to ensure that the improved performance is maintained over a period of time.
ii. For example: an archiving strategy.



[Figure: Cycle for tackling performance problems - DEFINE, MEASURE, ANALYZE, IMPROVE, CONTROL.]
EXTRACTION & DATA LOAD PROBLEMS ANALYSIS

In SAP BW, the data load architecture looks as shown in the picture below; it typically consists of processes such as extraction, transfer, transformation and loading.

The extraction & loading process consists of:
Collection & Extraction
i. SAP Content & Generic Extraction
Transfer
i. Transfer of data from Source system to BW
Staging
i. PSA
Transformation
i. Transfer & Update Rules
Loading
i. ODS
ii. InfoCubes
iii. Master Data

In order to attain optimized performance in this area, the goal of performance analysis of the whole process is to:
First tune the individual single executions, then the whole load process.
Eliminate unnecessary processes.
Reduce the data volume to be processed.
Deploy parallelism on all available levels.
[Figure: Loading Architecture]
[Figure: SAP Service API: Extraction / Load Mechanism]
Typical Scenario I: BAD Extraction & Loading Performance
End-users / clients are complaining about bad extraction & loading performance for a specific process.

The following steps will help in locating the problem area in the whole extraction & loading process and taking corrective action:
Step 1 - Analyze the system activity for this process over the last few uploads. Check the specific process and the time consumed in the various parts of the process, e.g. the time taken in extraction, transfer, transformation etc. Use transaction RSMO.
Step 2 - Find which part of the process is taking the most time, i.e. extraction, transformation or loading.
Step 3 - Analyze that particular process further with the relevant tools mentioned below. For example, if extraction is really time-consuming / resource-constrained, further analysis can be done with the Extractor Checker (RSA3).
Step 4 - Go to the suggestions for the relevant process (see below in this document; also check the General section) and apply the recommendations.
Step 5 - Check and verify the performance of the problematic area and of the whole extraction & loading process.

Check the time taken in extraction, transfer, transformation and loading individually in order to identify the most resource-intensive area.
How to Identify High Extraction Time?


If the process has a high extraction time, it can be analyzed further as follows:
Look for long-running processes in SM50 / SM51 in the source system.
Check with the Extractor Checker (RSA3) in the source system and check the performance of the extraction.
Use ABAP Runtime Analysis to check the performance of user exits in the extraction.
Use an SQL trace (ST05) with a filter on the extraction user (ALEREMOTE) to identify expensive SQL statements. Make sure that no concurrent extraction jobs run at the same time as this execution.
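A sketch of how such a trace might be taken (the exact menu wording varies slightly by release): in the source system, call ST05 and activate the trace with a filter on user ALEREMOTE, start the InfoPackage, then deactivate and display the trace and sort the summarized statements by execution time to spot the most expensive SQL.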
How to Identify High Transfer Time?


How to Identify High PSA Loading Time?



How to Identify High Transformation Time?




How to Identify High Data Load Time?


Note - After identifying the problem area, look into the relevant section of this document (General, Extraction & Loading below) for suggestions / recommendations.
QUERY PERFORMANCE ANALYSIS
Typical Scenario II: BAD Query Execution Performance
End users / clients are complaining about bad query runtime for a specific query or a couple of queries.

The following steps will help in locating the problem area in the whole query execution process and taking corrective action:
Step 1 - Analyze the system activity for the last month, or for the time period users have been complaining about.
Step 2 - Identify the Cube with the most activity.
Step 3 - Identify the badly running queries.
Step 4 - Identify the most time-consuming part of the query.
Step 5 - Go to the suggestions for the relevant process (see below) and apply the recommendations.
Step 6 - Check and verify the performance of the problematic area and of the whole query.


Check whether the statistics for the Cube (on which the bad query / queries are based) are active: go to the Administrator Workbench (RSA1) -> Tools -> BW Statistics for InfoProvider -> look for the desired Cube -> switch on both statistics.
Open transaction ST03 and change the view of ST03 from Administrator to Expert Mode.




Open 'BW System Load' by month.

Choose the latest month (or the desired month).
Identify the Cube with the highest number of navigation steps.
Identify the query with the highest runtime and check where most of the time has been consumed:
i. OLAP Init
ii. Database (DB)
iii. OLAP time
iv. Front End
v. Check the ratio of records selected to records transferred.

    Parameter                           Value   %    Action
1   OLAP Init                                        See the Query Performance section below.
2   DB Time                                          If the DB time % is > 30, have a look at the ratio of selected to transferred records, then decide.
3   OLAP Time                                        See the Query Performance section below.
4   Front End                                        See the Query Performance section below.
    TOTAL TIME
1   No. of Records Selected
2   No. of Records Transferred
    Ratio of Selected to Transferred                 If this ratio is > 10, have a look at the DB time %; if the DB time % is more than 30, then aggregates will probably help.
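A hypothetical reading of these figures (the numbers are invented for the example): if a query selects 500,000 records but transfers only 20,000 to the OLAP processor, the ratio of selected to transferred is 25; if the DB time additionally accounts for 45% of the total runtime, both thresholds above are exceeded and an aggregate is a strong candidate.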
Decide whether an aggregate will help:
If, in the above analysis, the summarization ratio (records read to records displayed) is > 10 and the percentage of DB time is > 30% (i.e. the time spent on the database is a substantial part of the whole query runtime), then go ahead and analyze the aggregates. Also look at the Aggregates section under Query Performance below.
A specific query can also be analyzed with the help of the Query Monitor (transaction RSRT).

Analyze the specific query through various options, such as:
Check for aggregate usage.
Check for the aggregation sought by the query.



If the analysis of the various time components leads to the conclusion that aggregates will not help, the OLAP performance of a particular query can be evaluated with the help of the Query Monitor (RSRT). This helps in deciding whether activating the OLAP Cache will help.
Queries can also be analyzed via the contents of table RSDDSTAT (transaction SE16).
The following table can be helpful when selecting a tool:

Requirement: Tools available to monitor the overall query performance?
Suggestion:
i. BW Statistics.
ii. BW Workload Analysis in ST03N (using expert mode).
iii. Content of table RSDDSTAT.

Requirement: Enabling of these tools?
Suggestion:
i. Turn on BW Statistics: RSA1 -> Tools -> BW Statistics for InfoCubes (choose OLAP & WHM for the relevant Cubes).

Requirement: Tools available to analyze a specific query?
Suggestion:
i. Query Monitor (RSRT).
ii. Transaction RSRTRACE.

Note - After identifying the problem area of the query, look at the relevant part of the Query Performance section and the General section below in this document. For example, if the analysis suggests that aggregates will not help in improving performance, then look at areas such as the OLAP Cache and the front-end format in the Query Performance section.

WEB APPLICATION PERFORMANCE ANALYSIS
Typical Scenario III: BAD Web Application Execution Performance

End users / clients are complaining about bad Web query runtime.

The following steps will help in locating the problem area in the whole Web query execution process and taking corrective action:
Step 1 - Analyze the system activity for the specific Web query.
Step 2 - Identify the problem area, e.g. the Web query is consuming too much time in the front-end.
Step 3 - Go to the suggestions for the relevant process (see below) and apply the recommendations.
Step 4 - Check and verify the performance of the problematic area and of the whole Web query.

Runtime problems can occur, in particular, in the following places:

Front-end: high CPU usage of the Web browser during page rendering.
Front-end network: long transfer times due to large transfer volumes.
ITS AGate: high CPU usage due to intensive HTML Business processing.
SAP system (R/3, CRM, Workplace, etc.): time-intensive SQL or ABAP processing.
Front-End Performance Analysis
Tools
The tools IEMON and HTTPMON can be used to establish the rendering time in the Web browser and the data volume transferred between the Web browser and the Web server per navigation step. IEMON and HTTPMON are available on the SAP Service Marketplace (service.sap.com/bw, under Performance). The IEMON tool only works in connection with MS Internet Explorer.
Settings
The following settings are required:

i. Windows environment variables
a. The environment variables only need to be adjusted for Internet accesses which take place via a proxy server. This is the case when accessing the Internet from the SAP-internal network.
b. Via the start menu, select Settings -> Control Panel -> System -> Environment.
c. Set the variable HTTP_PROXY to http://<proxy>:<port>, where <proxy> is the name of the proxy server used and <port> is the port number used by the HTTP protocol.
d. Example: variable: HTTP_PROXY, value: http://proxy:8080
e. Use the variable NO_PROXY to specify addresses for which there should be no access via the proxy server.
f. Example: variable: NO_PROXY, value: *.sap.com,*.sap-ag.de

ii. Proxy settings in the Internet Explorer
a. Before you implement the following changes, take a note of the current settings so that you can restore them later.
b. The result of these changes is that the data flow between the Web browser and the proxy server or ITS is channeled via HTTPMON, producing the following chain: Web browser -> HTTPMON -> ITS -> SAP system, or Web browser -> HTTPMON -> proxy server -> Internet.
c. In the Internet Explorer, select the menu Tools -> Internet Options -> Connections -> LAN Settings.
d. Select the option 'Use a proxy server'.
e. Enter 'localhost' as the address and '8000' as the port. (This is the port which HTTPMON uses for HTTP queries.)
f. Deselect the option 'Bypass proxy server for local addresses'.
g. Click on the button 'Advanced'.
h. Select the option 'Use the same proxy server for all protocols'.
i. Delete all entries in the textbox 'Exceptions'.
j. Confirm the changes with 'OK'.

iii. Browser cache
a. In the Internet Explorer, select the menu Tools -> Internet Options -> General -> Delete Files and confirm with 'OK'.
Measurement
Start HTTPMON and IEMON to begin the measurement of the Web application. Enter the URL in the IEMON address field to call up the Web application, then execute the Web application in the IEMON browser window. Following every dialog step, click on the button 'Reset Counters' in the HTTPMON window.

In order to take into account runtime differences caused by data being loaded for the first time into the browser cache or into a buffer of another system involved (e.g. ITS caches, R/3 buffers), the transaction to be analyzed should be executed more than once.

i. Evaluation of the measurement results

a. The measurement results are written into the following log files in the TEMP directory:
IEMON: C:\TEMP\IEMON.TXT
HTTPMON: C:\TEMP\HTTPMON_LOG.TXT

ii. Determining the Browser Load Time and the Rendering Time

iii. The following times can be determined using the information in IEMON.TXT:

a. Browser Load Time: the time taken to load data into the browser. The browser load time begins with the sending of an HTTP request from the browser to the Web server (log entry: Before Navigate) and ends after the answer of the Web server has been transferred to the browser (log entry: Navigate Complete).
b. Rendering Time: the time to build (display) the next HTML page in the browser. The rendering time begins at the end of the transfer of the Web server's answer to the browser (log entry: Navigate Complete) and ends after the page building in the browser has finished (log entry: Document Complete).
c. Example:

<<Start of the browser load time>>

14:39:37.755 x Before Navigate: http://www.acme.com/ --

14:39:37.817 -> Download Begin

14:39:37.927 Status Text Change: Finding site: localhost

14:39:37.927 Progress Change READYSTATE_LOADING -- 100 -- 10000

14:39:37.942 Status Text Change: Connecting to site 127.0.0.1

14:39:37.942 Status Text Change: Connecting to site www.acme.com

14:39:37.942 Progress Change READYSTATE_LOADING -- 100 -- 10000

14:39:38.348 Status Text Change: Start downloading from site:http://w

14:39:38.395 <- Download Complete

14:39:38.427 Status Text Change:

14:39:38.427 -> Download Begin

14:39:38.489 Status Text Change: Opening page http://www.acme.com/..

14:39:38.520 Title: Acme!

14:39:38.536 * Navigate Complete: http://www.acme.com/

<<End of the Browser Load Time>>

<<Start of the Rendering Time>>

14:39:38.614 Progress Change READYSTATE_INTERACTIVE -- 805600 --

14:39:38.864 Status Text Change: Done

14:39:38.864 Progress Change READYSTATE_INTERACTIVE -- 1000000 --

14:39:38.864 Progress Change READYSTATE_INTERACTIVE -- -1 -- 1000000

14:39:38.864 <- Download Complete

<<End of the Rendering Time>>

14:39:38.880 Title: Acme!

14:39:38.880 # Document complete: http://www.acme.com/

14:39:38.880 Web document is finished downloading

14:39:38.880 Progress Change READYSTATE_COMPLETE -- 1000000 --

14:39:39.364 Progress Change READYSTATE_COMPLETE -- 0 -- 0

<<End of the dialog step>>

----------------------------------------------------------------------

<<Start of the next dialog step>>

14:39:44.973 x Before Navigate: http://www.acme.com/r/ci --

Load Time = time from 'Before Navigate' to 'Navigate Complete'.
Rendering Time = time from 'Navigate Complete' to 'Document Complete'.


d. The dialog step studied in the example produces a browser load time of 781 ms and a rendering time of 328 ms.
e. The times measured are gross times (i.e. elapsed time). The rendering time should roughly correspond to the CPU time used for the browser's page building.
f. The browser load time contains the time usage of all components between the browser and the SAP system, including the time usage in the front-end network between the browser and the Web server.
g. For further analysis, the browser load time must be split into its components.

iv. Estimation of the front-end network time
Based on the bandwidth of the front-end network, the information in HTTPMON_LOG.TXT can be used to determine a lower limit for the front-end network time. By comparing the time stamps in IEMON.TXT and HTTPMON_LOG.TXT you will find, in HTTPMON_LOG.TXT, the data volume transferred between the browser and the Web server during a dialog step.

a. Example: the following two HTTP GET queries belong to the dialog step looked at above:

Time Sent Received HTTP Command

<<Start of the dialog step>

14:39:38 284 4834 GET http://us.a1.yimg.com/us.yimg.com/a/pr/pr

14:39:38 230 15650 GET http://www.acme.com/ HTTP/1.0

<<End of the dialog step>>

-------------------------------------
Reset counters
-------------------------------------

<<Start of the next dialog step>>

b. The dialog step looked at in the example produces a transferred data volume of
284 + 4834 + 230 + 15650 = 20998 bytes.

c. Calculate a lower limit for the front-end network time as follows:

Front-end network time = transferred data volume / network bandwidth.
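To make the formula concrete with the figures above (the bandwidth values are assumptions chosen purely for illustration): over a 128 kbit/s connection, i.e. roughly 16,000 bytes per second, the 20998 bytes of this dialog step give a lower limit of about 20998 / 16000, i.e. roughly 1.3 seconds of front-end network time; over a 10 Mbit/s LAN, the same volume accounts for less than 20 ms.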
Cache Analysis on Web Query Performance
In order to take into account runtime differences caused by data being loaded for the first time into the browser cache or into a buffer of another system involved, the transaction to be analyzed should be executed more than once.
Record the network load of the BW Web application using IEMON connected to the BW system via HTTPMON, and identify the savings obtained by browser caching of MIME objects. Start HTTPMON and IEMON to begin the measurement of the Web application. Enter the URL in the IEMON address field to call up the Web application, then execute the Web application in the IEMON browser window. In order to take into account runtime differences caused by data being loaded into the browser cache for the first time, the Web application to be analyzed should be executed more than once. Execute the Web application three times in the following order and record the network load after each execution:
In HTTPMON, click the button Reset Counters.
In IEMON, execute the BW Web application (uncached). Record the network traffic in KBs sent / KBs received.
In HTTPMON, click the button Reset Counters.
In IEMON, re-execute the BW Web application (cached). Record the network traffic in KBs sent / KBs received.
In HTTPMON, click the button Reset Counters.
In the Internet Explorer, select the menu Tools -> Internet Options -> General -> Delete Files and confirm with OK (clear the cache).
In IEMON, re-execute the BW Web application (uncached again, to validate that the cache is working). Record the network traffic in KBs sent / KBs received.
In HTTPMON, click the button Reset Counters.
Verify the caching impact via the output in the HTTPMON log file, identifying the different Web application executions.
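As a hypothetical illustration of the expected result (the figures are invented): if the first, uncached execution shows roughly 250 KB received and the second, cached execution only 40 KB, about 84% of the transfer volume is being served from the browser cache. The third execution, after clearing the cache, should again show a volume close to 250 KB, confirming that the difference really is due to caching.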
Analysis of the SAP system
Based on the time stamps in IEMON.TXT, the response time of the SAP system
belonging to the dialog step observed can be determined using transaction STAT or
STAD.
If the response time of the SAP system is significantly high, a detailed analysis of the
relevant ABAP program should be conducted (e.g. via SQL trace).
Determination and further analysis of the remaining time
The following applies for the remaining time which has not yet been resolved:
Remaining time = Browser load time - Front-end network time - SAP system time.
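For example, taking the 781 ms browser load time measured earlier and assuming (purely for illustration) a front-end network time of 130 ms and an SAP system response time of 400 ms from STAD, the remaining time would be 781 - 130 - 400 = 251 ms, which would then be investigated mainly on the ITS AGate.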

If the remaining time thus established is significantly high, this may often be due to CPU-intensive HTML Business processing in the ITS AGate.

The CPU usage of the ITS AGate process can be measured using the standard NT tool PERFMON. However, when you do this you are measuring the total CPU usage generated by all requests/threads. Even if you measure at thread level, it is not possible to allocate the CPU usage precisely to a dialog step. It is only possible to measure the AGate process precisely with PERFMON if you have ensured that the AGate process is exclusively being used to process the dialog step being examined.
Back-end Response Time Analysis
For the back-end response time analysis we rely on the contents of ABAP runtime traces. In the context of Web reporting, the ABAP trace needs to be activated whenever an HTTP request arrives at the corresponding service handler. This is done within transaction SICF (see the detailed description below). As ABAP tracing always has a significant impact on the response time, we recommend creating the front-end log file and the ABAP trace files in separate executions of the query.
Activation of the ABAP trace for Web services
While IEMon.exe only logs information about the front-end response time, Web query analysis also looks into the details of the query execution within the Web Application Server. Detailed information is obtained from an ABAP trace, which needs to be activated while the HTTP request is being processed by the WAS. ABAP tracing is activated in transaction SICF on node /sap/bw. Expand the default_host tree and mark the BW entry as shown in the screen shot below, then select Edit -> Runtime Analysis -> Activate.
Note: On systems with multiple application servers, make sure to activate the trace on the server that will handle the HTTP request.
Make sure that you activate the check box 'Only calls from SAPGUI-IP'. With this check box enabled, only those requests coming from the TCP/IP connection which hosts the SAPGUI will cause a trace file to be created. If you do not check this box, the system traces all other incoming HTTP requests as well, which may have a negative impact on the overall system performance. Incoming HTTP requests will only be traced after a new connection has been established: if a Web browser (or IEMon) is already open, shut the browser down and restart it before starting to trace the ABAP calls.



An incoming HTTP request causes a trace file to be generated. This trace file can be analyzed with transaction SE30. If multiple HTTP requests are processed (e.g. a selection screen, or multiple Web queries from a Web template), each request generates an individual trace file.
Note: On systems with multiple application servers, make sure to call transaction SE30 on the server that handled the HTTP request and produced the trace file.
Open transaction SE30, select 'Other file', specify the user SAPSYS and choose the proper trace file from the list. To find all relevant trace files more easily, delete all previous trace files before executing the Web query analysis. Double-click the trace file to get an overview screen, then press the Hit List button (F5) to see the detailed trace information. Download this trace information into a local file on the PC (System -> List -> Save -> Local File) as an unconverted file.



Note - After identifying the problem area of the Web query, look at the relevant part of the Web Application Performance section and the General section below in this document.
PERFORMANCE OPTIMIZATION SUGGESTIONS / RECOMMENDATION
A. GENERAL FACTORS
1. Dimension tables
Point - Do not combine dynamic characteristics in the same dimension; keep dimensions rather small. As a general rule, it makes more sense to have many smaller dimensions than fewer larger dimensions.
Advantage - The query will need to access less data during retrieval, resulting in better query performance.
Indicator (Range, Limits etc) - A dimension table should be less than 10% of the size of the fact table. Transaction RSRV can be used to check the fact-to-dimension table ratio.
Tool Needed - Transaction RSRV, SE38 (report SAP_INFOCUBE_DESIGNS)
Action - Use line item dimensions.
Concern
Relevant SAP Notes
In the data modeling phase, it is very important to determine whether a dimension table will be degenerated, and if so to explicitly set it as a line item dimension (a parameter setting on the InfoCube's dimension entry). In this case, the dimension table is omitted and the dimension entries in the fact table reference the SID table directly. On the one hand, this saves one table join at query runtime; on the other hand, it saves the determination of the dimension IDs at data load time.
Line item dimensions arise in nearly every case where the granularity of the fact table represents an actual working document, like an order number, invoice number or sequence number.

The current size of a dimension in relation to the fact table can be monitored by running report SAP_INFOCUBE_DESIGNS in transaction SE38 for live InfoCubes. This report shows the size of the fact table and its associated dimension tables, as well as the ratio (percentage) of dimension to fact table size. Make sure that the statistics for the InfoCube are up to date before running SAP_INFOCUBE_DESIGNS in transaction SE38. A dimension which is very large in relation to the fact table should be a red flag, as shown in the picture below. Such issues usually manifest themselves as poor query or load performance.
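A purely illustrative calculation: a dimension table with 2,000,000 rows against a fact table with 10,000,000 rows gives a ratio of 20%, well above the 10% guideline, and would justify checking whether the dimension can be split up or whether one of its characteristics should be defined as a line item dimension.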


2. MultiProviders
Point - Use MultiProvider (logical) partitioning to reduce the sizes of the InfoCubes; e.g. define one InfoCube per year and join them via a MultiProvider.
Advantage - Parallel access to the underlying basis InfoCubes, load balancing, resource utilization, query pruning.
Indicator (Range, Limits etc) -
a) The number of base Cubes used in a MultiProvider should not be more than 10.
b) Queries (based on MultiProviders) for which the value of the QDBTRANS column in table RSDDSTAT is >= 30000.
Tool Needed - Transaction SE16 to access table RSDDSTAT; activation of BW Statistics.
Action
a) If the number of base Cubes used in a MultiProvider is more than 10, switch from parallel processing to serial processing. The reason is that a larger number of base InfoProviders will likely result in a case where there are many more base InfoProviders than available dialog processes, resulting in limited parallel processing and many pipelined sub-queries. Also, while the overhead of combining the results of sub-queries at the sync point is relatively small, if the number of sub-queries is very large this overhead becomes more significant, reducing the efficiency of the overall operation.
b) If, in table RSDDSTAT (transaction SE16), queries based on MultiProviders show a QDBTRANS value >= 30000, switch from parallel processing to serial processing.
c) Good performance can be built into the design through the use of a MultiProvider as a type of logical partitioning (see the illustration after this list). This entails creating base InfoProviders that are identical in structure but contain data that is separated through the design of the dataflow. For instance, one can logically partition InfoCubes based on a dimension characteristic, then combine the base Cubes to form a MultiProvider. Queries executed against the MultiProvider are then split into sub-queries by the OLAP processor, with the sub-queries running against the base InfoCubes. This benefits performance in that the sub-queries run against InfoProviders containing a relatively small number of records, and general scalability is achieved via parallel processing.
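A hypothetical illustration of this kind of logical partitioning (all object names are invented): three structurally identical InfoCubes ZSD_2003, ZSD_2004 and ZSD_2005, each holding one calendar year of sales data, are combined under a MultiProvider ZSD_ALL. A query on ZSD_ALL restricted to the year 2005 can be pruned to the single sub-query against ZSD_2005, while an unrestricted query runs three smaller sub-queries in parallel instead of one large one.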
Concern
Relevant SAP Notes 629541, 607164, 622841
3. Navigational Attributes
Point - Dimension attributes generally have an edge over navigational attributes in terms of performance in query operations.
Advantage - Fewer tables are joined to access the values of dimension attributes than of navigational attributes; therefore less overhead is incurred during query execution.
Indicator (Range, Limits etc)
Tool Needed
Action
a) The requirements should be carefully analyzed to determine whether a certain attribute needs to be navigational or whether a dimension or display attribute will suffice.
Concern
Relevant SAP Notes

Navigational attributes are part of the extended star schema. Navigational attributes require additional table joins at runtime (in comparison to dimension characteristics), but usually the decision between dimension characteristics and navigational attributes is based on business requirements rather than on performance considerations.
4. Time Dependent Characteristics
Point - Time-dependent navigational attributes have an impact on the use of aggregates: either no aggregates can be used, or aggregates have to be realigned regularly when the key date changes, which is an expensive process.
Advantage - Aggregates can be used efficiently.
Indicator (Range, Limits etc)
Tool Needed
Action - If time-dependent master data is really necessary, consider modeling the attribute twice: once as time-dependent and once as time-independent. Prefer the time-independent attribute in queries where possible (see the example below).
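As a hypothetical example: an employee characteristic's cost center attribute could be maintained both as a time-dependent navigational attribute (for the few reports that genuinely need the historical assignment as of a key date) and as a time-independent copy holding the current assignment; queries that only need the current view then use the time-independent attribute and remain fully aggregate-friendly.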
Concern
Relevant SAP Notes
5. Non-Cumulative Key Figures
Point
a) InfoCubes containing non-cumulative key figures should not be too granular.
b) When using non-cumulative key figures, use as few validity objects as possible.
Advantage
a) Lower granularity results in fewer reference points, which keeps aggregate builds from being impacted significantly.
b) For every entry in the validity table, a separate query is generated at query runtime.
Indicator (Range, Limits etc) - Non-cumulative key figure granularity; objects in the validity tables.
Tool Needed - Check the validity table and review the relevance of the validity objects.
Action
a) Reference points can only be deleted by deleting an object key without specifying the time period, i.e. all available records for this key are deleted. If, for example, old material no longer needs to be reported on, delete this data via selective deletion (without restriction in time).
b) Restrict the validity to a certain time period: e.g. if a plant is closed, it should not show any stock figures. Use as few validity objects as possible.
Concern
Relevant SAP Notes

Non-cumulative key figures are a special feature of InfoCubes. They are used when the key figure does not make sense in aggregated form for a specific dimension; e.g. stock figures should not be aggregated over time. They are stored, for every possible characteristic value combination, as a reference point (ideally the current value) plus all the changes over time. For example, calculating the stock value of last year can mean reading all the transaction data between the current stock and last year. The reference point is updated when the InfoCube is compressed. The finer the time granularity, the more records have to be read. Reference points are generated for every characteristic value combination and they stay in the compressed E table until the data is deleted (deletion without a time restriction!). The more granular the dimensions are defined, the bigger the E table will grow and the more expensive aggregate builds will be.
6. Hardware Impact
Point - The capacity of the hardware resources represents a highly significant aspect of the overall performance of the BW system in general. Insufficient resources in any one area can constrain the performance capabilities.
Advantage - An optimized trade-off between resource utilization and performance.
Indicator (Range, Limits etc)
a) No of CPUs
b) Speed of CPUs
c) Memory
d) I/O Controller
e) Disk Architecture
f) Client Hardware For BW Front End, SAP GUI.
Tool Needed - QuickSizer (available at service.sap.com/quicksizer)
Action
a) Use the present status as input to the QuickSizer and evaluate the recommendation vis-a-vis the present hardware resources.
b) Evaluate the required configuration for the BW front end.
c) If possible, review the hardware strategy to judge whether the current state is in sync with the strategy; evaluate and take action to minimize the gap. While evaluating, consider the long-term near-line storage and archiving plan, as well as granularity goals (for example, maintaining two years of detail-level data and five years of summary data, the PSA data plan, etc.).
Relevant SAP Notes 321973.
7. IT Landscape & Configuration
Point - A BW environment can contain a DB server and several application servers. These servers can be configured individually (e.g. the number of dialog and batch processes), so that the execution of the different job types (such as queries, loading, DB processes) can be optimized. The general guideline here is to avoid hot spots and bottlenecks.
Advantage - Optimized performance of the IT landscape and configuration while keeping the Total Cost of Ownership (TCO) to a minimum. Other benefits are:
a) Elimination of costly performance bottlenecks.
b) Improved response times and, as a result, acceptance by end users.
c) Optimal use of the hardware investment.
d) Substantially reduced risk of costly downtime.
Indicator (Range, Limits etc) - Follow the SAP EarlyWatch report.
Tool Needed - SAP EarlyWatch Check
Action
a) Be sure that enough processes (of a certain type) are configured. A BW application server does not need a UP2 process and would use at most 2 UPD processes.
b) Operation modes can be switched to affect the allocation of dialog and batch processing on the different application servers. To optimize the hardware resources, it is recommended to define at least two operation modes: one for batch processing (if there is a dedicated batch window) with several batch processes, and one for query processing with several dialog processes. Note that the data load in BW for extraction from R/3 runs in dialog processes.
c) An important point here is that sufficient resources should be available to distribute the workload across several application servers. Monitor the activity on the different application servers to determine a strategy for optimizing the workload distribution (using load balancing). Also, if the database server is tight on resources, consider moving the central instance away from the database server in order to allocate maximum resources to the database processes. Note that database processes may utilize a significant number of CPUs, and thus you should carefully monitor CPU utilization on the DB server.
d) USE THE SAP EarlyWatch SERVICES.
Concern
a) Different application servers have separate buffers and caches. E.g. the OLAP
cache (BW 3.x) on one application server does not use the OLAP cache on other
servers.
Relevant SAP Notes
8. Archiving
Point - Data archiving allows archiving data from InfoCubes and ODS objects.
Advantage - Data archiving simplifies InfoCube and ODS object administration and improves performance by decreasing the volume of data.
Indicator (Range, Limits etc) - Data aging (present status).
Tool Needed - Archiving strategy (e.g. how much historical data InfoCubes should keep, how much ODS objects should keep, etc.)
Action - Evaluate the present status against the archiving strategy and rectify any gaps.
Concern
Relevant SAP Notes
9. Load Balancing
Point - Load balancing provides the capability to distribute processing across several servers in order to make optimal use of the available server resources.
Advantage - An effective load balancing strategy helps to avoid inefficient situations where one server is overloaded (and performance suffers on that server) while other servers are underutilized.
Indicator (Range, Limits etc)
Tool Needed
Action
a) Logon load balancing (via group login): this allows the workload of multiple query/administration users to be distributed across several application servers.
b) The distribution of Web users across application servers can be configured in the BEx service in SICF.
c) ODS object data activation is definable for specific server groups (BW 3.x).
d) Process chains can be processed on specified server groups (BW 3.x).
e) Extraction in the SAP source system can be processed on specified server groups: the RFC destination from BW to the source system must be defined accordingly (transaction SM59).
f) Data load in BW can be processed on specified server groups: the RFC destination from the source system to BW must be defined accordingly (transaction SM59).
g) Data staging via XML over HTTP/SOAP can be processed on specified server groups (BW 3.x).
h) Logon server groups can be defined in transactions SMLG and RZ12, and these groups can be assigned to the processes mentioned; the data packages are sent to the servers included in the server group.
i) In a complex IT environment with several application servers (according to the hardware sizing recommendations), define suitable server groups to which specific tasks can be assigned. This helps to leverage the hardware.
j) For an even distribution of the data load to BW across the BW application servers, set the maximum number of logon users for the logon group to a low number (e.g. 5). Once one instance reaches 5 users, subsequent logons are dispatched to the other instances until all instances have 5 users; after that, the setting is ignored in SMLG and the instances alternate in accepting the next user.
Concern
a) In some cases it is useful to restrict the extraction or data load to a specific server (in SBIW in an SAP source system, or SPRO in BW), i.e. not to use load balancing. This can be useful in special cases where a certain server has fast CPUs and you therefore want to designate it as an extraction or data load server.
Relevant SAP Notes 493475, 561885
10. Log Tables Reorganization
Point - The logs of several processes are collected in the application log tables. These tables tend to grow very big, as they are not automatically deleted by the system, and can impact the overall system performance.
Advantage - Frequent reorganization of the log tables leads to better overall system performance.
Indicator (Range, Limits etc) - Size of the log tables.
Tool Needed
Action
a) Depending on the growth rate (i.e. the number of processes running in the system), either schedule the reorganization process (transaction SLG2) regularly or delete log data as soon as you notice significant DB time spent on table BALDAT (e.g. in an SQL trace).
b) Regularly delete old RSDDSTAT entries.
c) Table EDI40 can also grow very big depending on the number of IDoc records; keep track of it.
Concern
a) Note that the application log tables are client-dependent. Therefore, you must delete the data in each client.
Relevant SAP Notes 195157, 179046
11. Traces & Logs
Point - SAP BW provides several possibilities for traces and logs. These traces usually help in finding errors and monitoring the system. Keep in mind that these traces and logs generate system overhead.
Advantage - A proper selection of traces and logs (on/off) leads to a better trade-off between error tracking, monitoring activities and system overhead.
Indicator (Range, Limits etc)
Tool Needed
Action
a) Make sure that only those traces are switched on that are really necessary.
Concern
a) If several traces and logs run in the background, this can lead to bad overall performance, and sometimes it is difficult to discover all active logs. So make sure to switch off traces and logs as soon as they are no longer used.
Relevant SAP Notes
B. EXTRACTION IN SOURCE SYSTEM
While loading data from the OLTP system into an InfoCube or into a master data table in the BW system, (performance) problems may occur.
Pre-Analysis of the Problem
In the InfoPackage, under "Processing", set the option "Only PSA". Reload the data and then load the data into the data target manually.
If the long load times are caused by the first step, the cause is either a problem in the data transfer from the OLTP system to the BW system or a problem in the data extraction itself.
If the long load times are caused by the second step (the manually triggered update of the data in the BW system), analyze the load times of several data targets. The InfoCube may have a complex update logic which may be the cause of the problem. Simulate the update online (this part is handled in the section on data loading).
Simulation of the extraction (using transaction RSA3 in the OLTP system) can also be used to determine how long the extractor takes to extract the data (without transfer into the BW system). Here one can also debug (if required) or create an SQL trace using ST05.
Carefully log the selections you used in RSA3. Please also check for possible entries in the transaction log.
Background processes for the extraction and the update of the data can be monitored in the OLTP system and in BW via transaction SM50.
1. Setting of Extractors
Point - The size of the data packages depends on the application and on the contents and structure of the documents. During data extraction, a dataset is collected in an array (internal table) in memory. The package size setting determines how large this internal table grows before a data package is sent; thus, it also defines the number of commits on DB level. Moreover, the application server (group) where the (batch) extraction processes are scheduled and the maximum number of parallel (dialog) processes can be defined.
Advantage - Optimized extractor performance.
Indicator (Range, Limits etc)
Tool Needed - Access to table ROIDOCPRMS
Action
a) The size of the packages depends on the application and on the contents and structure of the documents. Set up the parameters for the DataSources according to the recommendations per application area in SAP Note 417307.
b) In general, small data package sizes are good for resource-constrained systems and big sizes are good for large systems. The default setting is 10,000 KB and 1 info IDoc for each data packet. These sizes can be fine-tuned per InfoPackage and InfoCube. Typical global settings are 20,000-50,000 KB, an info IDoc frequency of 10 or 15, and 2 to 4 parallel processes (see the illustrative entry below).
c) Distribute extraction processes to different servers to avoid bottlenecks on one server.
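A purely illustrative entry in table ROIDOCPRMS (maintained per source system via SBIW; the values are assumptions to be validated against SAP Note 417307 for the application area in question): MAXSIZE = 30000 (maximum package size in KB), STATFRQU = 10 (one info IDoc per 10 data IDocs), MAXPROCS = 3 (maximum number of parallel processes).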
Concern
a) Large package sizes are not advised if the BW system interfaces to a source
system over a WAN; large package sizes can be a special problem if network
traffic is a concern. In these cases a small package size is preferable.
Relevant SAP Notes 417307, 409641
2. Indices on DataSource tables
Point - Indices can be built on DataSource tables to speed up the selection process.
Advantage - Creating indices improves performance by speeding up the selection of data based on the selection conditions.
Indicator (Range, Limits etc)
Tool Needed
Action - If a selection criterion is defined in the InfoPackage and the selection of data is slow, building an index on the DataSource table in the source system will help.
Concern - Do not create too many indices, because every additional index slows down the inserts into the table.
Relevant SAP Notes
3. Customer Enhancements
Point - Customer enhancements (exits) are available for most of the extractors. Specific
coding can be used here to transform the data according to specific requirements. The
performance of this code directly affects the performance of the data extraction.
Advantage - Following certain recommendations while writing the ABAP code will improve
performance.
Indicator (Range, Limits etc) - Time spent in the customer enhancement.
Tool Needed - ABAP Trace, SQL Trace
Action - Make sure that customer enhancements are designed and implemented very
carefully according to the following recommendations (a minimal sketch follows this item):
a) Try to avoid nested loops.
b) Avoid the use of loops where possible.
c) Avoid selecting records that you are not going to use, and use summary SQL
(SUM, GROUP BY/HAVING) statements instead of programming your own logic.
d) Buffer database tables where appropriate.
e) Access internal tables efficiently. Avoid full table scans; instead use
sorted tables with binary search, or hashed tables.
f) To access DB tables, try to specify the whole key in the WHERE clause (in
order to use the primary key) or that part of the key that is contained in one of the
secondary indices (in the same sequence). Avoid NOT in the WHERE
clause, as it has no index support.
Concern -
Relevant SAP Notes -
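To illustrate recommendations c), e) and f), the following self-contained ABAP sketch
shows the buffering pattern: one array SELECT per data package into a hashed internal
table, followed by keyed reads per record instead of per-record SELECTs. The lookup
table ZMATERIAL_ATTR, its fields and the package structure (standing in for the exit's
C_T_DATA table) are illustrative assumptions.

REPORT zbw_exit_lookup_sketch.
* Sketch of the recommended lookup pattern inside a customer exit:
* buffer the needed attributes once per package, then use hashed key
* access per record. All table and field names are illustrative.
TYPES: BEGIN OF ty_rec,
         matnr TYPE c LENGTH 18,
         matkl TYPE c LENGTH 9,
       END OF ty_rec.

DATA: lt_package TYPE STANDARD TABLE OF ty_rec,  " stands in for C_T_DATA
      lt_attr    TYPE HASHED TABLE OF ty_rec WITH UNIQUE KEY matnr,
      ls_attr    TYPE ty_rec.

FIELD-SYMBOLS <ls_rec> TYPE ty_rec.

* 1) One array SELECT for the whole package; the WHERE clause specifies
*    the key field, so an index can be used (recommendation f).
IF lt_package IS NOT INITIAL.
  SELECT matnr matkl
    FROM zmaterial_attr                " assumed lookup table
    INTO TABLE lt_attr
    FOR ALL ENTRIES IN lt_package
    WHERE matnr = lt_package-matnr.
ENDIF.

* 2) Hashed key access per record - no nested loop and no full scan of
*    the internal table (recommendation e).
LOOP AT lt_package ASSIGNING <ls_rec>.
  READ TABLE lt_attr INTO ls_attr WITH TABLE KEY matnr = <ls_rec>-matnr.
  IF sy-subrc = 0.
    <ls_rec>-matkl = ls_attr-matkl.
  ENDIF.
ENDLOOP.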
4. Logistics Extractors
Point - There are a number of update methods available with PI 2002.1, such as direct
delta, queued delta and un-serialized V3 delta. Choose the method keeping in mind both
the requirements and performance.
Advantage - Choosing the appropriate update method avoids non-optimal
performance.
Indicator (Range, Limits etc) -
Tool Needed -
Action - Re-evaluate the requirements and the selected update methods and rectify any
misalignment. Take the following points into consideration when making a decision:
a) Direct delta produces one LUW for every document and is only useful for very
few records (no collection process).
b) Queued delta collects up to 10,000 records and transfers them when requested by
the collection process.
c) Un-serialized V3 updating collects records until the collection process is
scheduled; the sequence of the data is not kept.
Concern -
Relevant SAP Notes - 505700
5. LIS InfoStructures
Point - Switching between extraction tables SnnnBIW1 and SnnnBIW2 requires deleting
the records after the extraction.
Advantage - Deleting records from the delta tables SnnnBIW1 or SnnnBIW2 during data
transfer in 'delta update' mode performs badly, that is, the deletion takes very long.
Dropping the whole table instead helps to improve performance.
Indicator (Range, Limits etc) - Time taken for the deletion of records from the extraction
tables.
Tool Needed -
Action -
a) If the delta update is used in several clients, you have to create suitable indexes
and check the update of the index statistics.
b) If, however, the delta update is only used in one client for each InfoStructure, the
complete delta table SnnnBIW1 or SnnnBIW2 can be deleted.
Concern -
Relevant SAP Notes - 190840
6. CO Extractors
Point - Extraction of data using the CO delta DataSources takes a very long time. One
needs to analyze this using tools like the SQL trace.
Advantage - Analysis will reveal the real cause of the problem and lead to its correction.
Indicator (Range, Limits etc) - Check whether indexes 1 (for full update) or 4 (for delta
init or delta update) are being used.
Tool Needed - SQL Trace
Action -
a) Make sure that indexes 1 and 4 delivered by SAP on the database for table
COEP are active in the form delivered in the standard system. In the standard
system, index 4 is not delivered as active and must be manually activated by the
customer when using the CO delta DataSources.
b) Make sure that indexes defined by the customer do not interfere with the
selection. The customer is responsible for these indices.
Concern -
Relevant SAP Notes - 365762, 398041, 190038, 382329
C. DATA LOAD PERFORMANCE
1. Upload Sequence
Point - The sequence of the load processes is usually defined within process chains
(BW 3.x) or event chains. This sequence can have a significant impact on the load
performance.
Advantage - The master data load creates all SIDs and populates the master data
tables (attributes and/or texts). If the SIDs do not exist when transaction data is loaded
(i.e. transaction data is loaded before the relevant master data), these tables have to be
populated during the transaction data load, which slows down the overall process.
Indicator (Range, Limits etc) - Check whether transaction data is loaded before master
data.
Tool Needed -
Action -
a) Implement the most recent BW Hot Package and the most recent kernel patch in
the system.
b) Ensure all the related master data is loaded before uploading the transaction
data. If no master data has been loaded yet, the upload can take up to 100
percent longer, as SIDs must be retrieved for the characteristics and new records
must be inserted into the master data tables.
c) If the data in the DataTarget is to be replaced completely, first delete the data (in
PSA and/or DataTarget) and load afterwards. Small (or empty) PSA tables
improve PSA read times. Small (or empty) InfoCubes improve deletion,
compression and aggregation times (thus affecting data availability).
Concern -
Relevant SAP Notes - 130253
2. PSA Partition Size
Point - PSA tables are partitioned automatically (if available in the DBMS). In transaction
RSCUSTV6 the size of each PSA partition can be defined. This size defines the number
of records that must be exceeded to create a new PSA partition. One request is
contained in one partition, even if its size exceeds the user-defined PSA size; several
packages can be stored within one partition.
Advantage - The PSA is partitioned to enable fast deletion (DDL statement DROP
PARTITION). Packages are not deleted physically until all packages in the same partition
can be deleted.
Indicator (Range, Limits etc) - Check and compare the package size and the PSA partition size.
Tool Needed - Transaction RSCUSTV6
Action -
a) Set the PSA partition size according to the expected package sizes. If you expect
many small packages, set the partition size to a rather small value, so that these
partitions can be deleted quickly.
Concern - Database partitioning is not available for all DBMSs. Check whether the
database involved has this feature.
Relevant SAP Notes - 485878
3. Parallelizing Upload
Point - Parallel processing is automatically initiated during an extraction from an SAP
system; the settings for the data package size directly influence the number of
data packages that are likely to be sent in parallel. Moreover, a setting in the InfoPackage
defines the degree of parallelism of updating to PSA and DataTargets. Thus, data load
performance is fully scalable.
Advantage - Parallel processing exploits the available resources for performance.
Indicator (Range, Limits etc) - Check whether parallel processes are being used.
Tool Needed - Access to the relevant InfoPackage.
Action - Use parallelism for uploading if resources are not constrained. Consider the
following:
a) Flat files - Split files for multiple InfoPackages; enable a parallel PSA ->
DataTarget load process.
b) mySAP source system - Create several InfoPackages for the same or different
DataSources and then schedule them in parallel (this is user-controlled
parallelism).
c) Data load to several data targets - Use different InfoPackages (at least one for
each DataTarget).
d) Data packets / requests can be loaded into an InfoCube in parallel.
e) Data packets / requests cannot be loaded into an ODS object in parallel
(because of the overwrite functionality).
f) PSA and data targets can be loaded in parallel.
Concern -
a) If the data load takes place when there is little other activity in the BW system,
then optimal results are likely to be observed if the number of (roughly equally-
sized) InfoPackages is equivalent to the number of CPUs in the application
servers in the logon group handling the data load.
b) Delta uploads from one DataSource cannot be parallelized, even if you
initialized the delta with disjoint selection criteria. One delta request collects all
delta records for this specific DataSource, regardless of any selection criteria.
Relevant SAP Notes - 130253
4. Transformation Rules
Point - Transformation rules comprise transfer rules and update rules. Start routines make
it possible to manipulate whole data packages (database array operations) instead of
changing them record by record. Standard functionalities are one-to-one mapping, reading
master data, using library transformations (in BW 3.x) and providing your own ABAP coding.
Advantage - Optimization of transfer rules and update rules.
Indicator (Range, Limits etc) - Time spent in transformation rules.
Tool Needed - Data Load Monitor (RSMO).
Action - (a start routine sketch follows this item)
a) In general it is preferable to apply transformations as early as possible in order to
reuse the data for several targets. It is better to use transfer rules (ONE transformation)
if you have to transform data for several DataTargets, rather than repeating the same
transformation in the update rules for EACH DataTarget. Technical transformations
(checking for the right domain etc.) could even be done outside BW (e.g., in the ETL tool).
b) Make sure that customer enhancements are designed and implemented very
carefully according to the recommendations in the Customer Enhancements
section above. If a lot of time is being spent in transformation rules (visible, e.g.,
in the Data Load Monitor), check and improve the coding.
c) BW 3.x library transformations are interpreted at runtime (not compiled). Usually
the impact is very small, but if you use lots of transformations for a huge number of
records, it can be wiser to use ABAP coding instead.
d) Reading master data is a generic standard functionality. Depending on the
concrete scenario, it can be more efficient to buffer master data tables in your own
ABAP coding instead.
Concern -
Relevant SAP Notes -
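As a small illustration of the array-operation idea, the fragment below is a minimal sketch
for a BW 3.x update rule start routine; it belongs inside the generated start routine form,
where the internal table DATA_PACKAGE is available. The field names (/BIC/ZSTATUS,
MATERIAL) are illustrative assumptions for the communication structure.

* Minimal start routine sketch (BW 3.x update rules): work on the whole
* DATA_PACKAGE with array operations instead of record-by-record logic.
* Field names are illustrative assumptions.

* Drop records that no DataTarget needs as early as possible: one
* DELETE over the internal table instead of an IF inside a LOOP.
DELETE data_package WHERE /bic/zstatus = 'X'.

* If later rules read buffered master data, sort once here so lookups
* can use READ TABLE ... BINARY SEARCH.
SORT data_package BY material.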
5. Export DataSource
Point - The Export DataSource (or Data Mart interface) enables the population of
InfoCubes and ODS objects out of other InfoCubes. The read operation of the Export
DataSource is sequential.
Advantage - The read operations of the Export DataSource are single-threaded (i.e.
sequential). During the read operations - depending on the complexity of the source
InfoCube - the initial time before data is retrieved (i.e. parsing, reading and sorting) can
be significant. The posting to a subsequent data target can be parallelized via the
ROIDOCPRMS settings for the "myself" system, which leads to better performance.
Indicator (Range, Limits etc) - Check whether parallel processes are being used.
Tool Needed -
Action -
a) Use InfoPackages with disjoint selection criteria to parallelize the data export.
b) Complex database selections can be split into several less complex requests.
This should help for Export DataSources reading many tables.
c) If the population of InfoCubes out of other InfoCubes via the Export DataSource
takes too much time, try to load the InfoCubes from the PSA instead.
d) In BW 3.x, BW uses a view on the fact table to read the data. If a BW 3.x Export
DataSource runs into problems due to reading from the view, turn it off (SAP
Note 561961).
e) Export DataSources can also make use of aggregates. Aggregates of source
InfoCubes can be used if the transfer structure definition matches the aggregate
definition. Delete unnecessary fields (and their mapping rules to the
communication structure) from the transfer structure definition of the generated
InfoSource.
Concern -
Relevant SAP Notes - 514907, 561961
6. Flat File Upload
Point - Several factors, such as file format, file location and package size, affect flat file
upload performance. Managing these factors leads to better upload performance.
Advantage - Better loading performance for flat file uploads.
Indicator (Range, Limits etc) - Check the file format, the location of the file and the size
of the packages.
Tool Needed - RSCUSTV6
Action -
a) Flat files can be uploaded either in CSV format or in fixed-length ASCII format.
For CSV format, the records are internally converted into fixed-length format,
which generates overhead.
b) Files can be uploaded either from the client or from the application server.
Uploading files from the client workstation implies sending the file to the
application server via the network; the speed of the server backbone will
determine the level of the performance impact.
c) If possible, split the files to achieve a parallel upload. The recommendation is as
many equally-sized files as CPUs available.
d) The size (i.e., number of records) of the packages and the frequency of status
IDocs can be defined in table RSADMINC (transaction RSCUSTV6) for the flat
file upload.
Concern -
Relevant SAP Notes - 130253
7. Master Data Load - Parallel Master Data Load
Point - For a time-consuming extraction of master data, it is advisable to extract the data
in parallel.
Advantage - Improved master data load performance.
Indicator (Range, Limits etc) - Check whether parallel processes are being used.
Tool Needed - Access to the relevant InfoPackage.
Action -
a) Create several InfoPackages that extract disjoint sets of master data
records via corresponding selection criteria, and schedule them in
parallel.
b) Extraction runs in the source system in parallel, but has to be serialized in the
BW system. According to the standard behavior of BW, the first request
arriving in the BW system sets a lock and all further requests terminate the
update due to the lock. Adjust the settings in table RSADMIN so that data
packets that cannot be updated because of the lock wait a while and then
try to set the lock themselves (SAP Note 421419; see the sketch after this item).
Concern -
Relevant SAP Notes - 421419
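RSADMIN entries of this kind are usually created with the standard report
SAP_RSADMIN_MAINTAIN; a hedged sketch of such a call follows. The OBJECT name and
value below are placeholders only - take the exact entry from SAP Note 421419 and check
the report's selection screen in SE38 before use.

* Sketch only: maintain an RSADMIN switch via the standard report.
* The OBJECT name is a placeholder - use the exact entry from SAP
* Note 421419; verify the report's parameters in SE38 first.
SUBMIT sap_rsadmin_maintain
  WITH object = 'MD_LOCK_WAIT_SWITCH'   " placeholder name
  WITH value  = 'X'
  AND RETURN.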
8. Master Data Load - Buffering Number Range
Point - The number range buffer for the SIDs resides on the application server and
reduces DB server accesses.
Advantage - Fewer DB server accesses for number range handling and hence improved
master data load performance.
Indicator (Range, Limits etc) - Use function module RSD_IOBJ_GET to check the
number range buffer.
Tool Needed - Function module RSD_IOBJ_GET.
Action - When loading a significant volume of master data (e.g. an initial load), increase
the number range buffer for the InfoObject. If possible, reset it to its original state after
the load in order to minimize unnecessary memory allocation.
Concern -
Relevant SAP Notes - 130253
9. Master Data Load - Change Run
Point - Please refer to point 14 below, InfoCube Data Load - Change Run.
Advantage -
Indicator (Range, Limits etc) -
Tool Needed -
Action -
Concern -
Relevant SAP Notes -
10. InfoCube Data Load - Dropping Indices Before Loading
Point - Creating and dropping indices in the InfoCube maintenance affects the (bitmap,
in ORACLE) indices on the dimension keys in the F fact table.
Advantage - It is faster to rebuild an index at the end of the load process than to update
the index for each record loaded, so dropping the indices before loading improves
InfoCube data load performance.
Indicator (Range, Limits etc) - Check whether the indices are dropped before the data
load and recreated afterwards.
Tool Needed - Access to the relevant InfoCube.
Action - (a sketch follows this item)
a) If the uncompressed F table is small, drop the indices before loading. It is then
faster to rebuild the index at the end of the load process instead of updating the
index for each record loaded. Make sure that only one process drops the indices
before several load jobs are scheduled.
b) Rebuild the indices immediately after the load process. It is recommended to
integrate these processes within the process chains.
Concern -
a) Be careful dropping the indices on large F fact tables, as the rebuild might take a
very long time. Regular InfoCube compression (and hence small F fact tables)
reduces the index rebuild time.
b) If the indices are not available, querying the data is very slow. Monitor missing
indices to ensure that the indices are rebuilt following the data load.
Relevant SAP Notes - 130253, 115407
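In custom process steps, the drop and rebuild can also be triggered programmatically.
Below is a minimal sketch, assuming the function modules RSDU_INFOCUBE_INDEXES_DROP
and RSDU_INFOCUBE_INDEXES_REPAIR that the InfoCube maintenance itself uses; verify
their interface in SE37 for your release. The InfoCube name is a placeholder.

REPORT zbw_drop_rebuild_indices.
* Sketch: drop the F fact table indices before a large load and repair
* (rebuild) them afterwards. Function module names and interface are
* assumed from InfoCube maintenance - verify in SE37.
PARAMETERS p_cube TYPE rsinfocube DEFAULT 'ZSALES'.  " placeholder

CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = p_cube.

* ... trigger the data load here (e.g. via a process chain step) ...

CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
  EXPORTING
    i_infocube = p_cube.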
11. InfoCube Data Load - Buffering Number Range
Point - The number range buffer for the dimension IDs resides on the application server
and reduces DB server accesses.
Advantage - If the number range buffer for one dimension is set to 500, the system
keeps 500 sequential numbers in memory and need not access the database; this leads
to improved InfoCube data load performance.
Indicator (Range, Limits etc) - Use function module RSD_CUBE_GET to check the
number range buffer.
Tool Needed - Function module RSD_CUBE_GET.
Action - When loading a significant volume of transaction data (e.g., an initial load),
increase the number range buffer. If possible, reset it to its original state after the load in
order to minimize unnecessary memory allocation.
Concern -
Relevant SAP Notes - 130253
12. InfoCube Data Load - Compression Performance
Point - Compression transfers the data from the F fact table to the E fact table while
eliminating the request information in the InfoCube.
Advantage - If a request contains a disjoint set of keys (i.e., the same key only occurs
within one record), the compression can be optimized by omitting the UPDATE statement
and using only INSERTs into the E fact table. This leads to better compression
performance.
Indicator (Range, Limits etc) - Check whether requests with disjoint keys are available.
Tool Needed -
Action - (a sketch follows this item)
a) Avoid the update phase of the condense program by setting field COMP_DISJ =
'X' in table RSDCUBE. Use transaction SE16 to do this.
b) The condense program then inserts the records into the corresponding partition
of the E table using an array insert. This method avoids attempting the update of
request 0 for each record.
Concern -
a) Make sure that the data of a request is disjoint from the data of other requests.
The system does not make additional checks.
Relevant SAP Notes - 375132
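Instead of SE16, the flag can also be checked and set with a small utility report; a hedged
sketch follows. The key fields INFOCUBE/OBJVERS and the field COMP_DISJ are assumed
from the context of SAP Note 375132 - verify the table definition in SE11, and remember
the concern above: only set the flag if the request data really is disjoint.

REPORT zbw_set_comp_disj.
* Sketch: check and set the COMP_DISJ flag for an InfoCube in table
* RSDCUBE (normally maintained via SE16). Field and key names are
* assumptions - verify in SE11. Only use this if the request keys are
* guaranteed disjoint; the system does not check it.
PARAMETERS p_cube TYPE rsinfocube DEFAULT 'ZSALES'.  " placeholder

DATA l_flag TYPE c LENGTH 1.

SELECT SINGLE comp_disj FROM rsdcube INTO l_flag
  WHERE infocube = p_cube
    AND objvers  = 'A'.                " active version

IF sy-subrc = 0 AND l_flag IS INITIAL.
  UPDATE rsdcube SET comp_disj = 'X'
    WHERE infocube = p_cube
      AND objvers  = 'A'.
  COMMIT WORK.
  WRITE: / 'COMP_DISJ set for InfoCube', p_cube.
ENDIF.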
13. InfoCube Data Load - Roll-Up
Point - The roll-up adds newly loaded transaction data to the existing aggregates. When
aggregates are active, new data is not available for reporting until it has been rolled up.
The time spent on the roll-up is determined by the number and the size of the aggregates;
if aggregates can be built from other aggregates, they are arranged in an aggregate
hierarchy.
Advantage - Optimized roll-up performance.
Indicator (Range, Limits etc) -
Tool Needed -
Action - Take the following hints into consideration in order to improve the aggregate
hierarchy and, thus, the roll-up:
a) Build up very few basis aggregates out of the underlying InfoCube fact table.
b) Aim for summarization ratios of 10 or higher for every aggregate hierarchy level.
c) Find good subsets of data (frequently used).
d) Build aggregates only on selected hierarchy levels (not all).
e) Build up aggregates that are neither too specific nor too general; they should
serve many different query navigations.
f) Monitor the aggregates; remove those aggregates that are not used frequently
(except basis aggregates).
g) DB statistics for aggregates are created at regular points. This is necessary when
queries are executed during the roll-up. Under certain circumstances (no queries
during the roll-up), the roll-up in BW 3.x can be improved by forgoing the DB
statistics run and rebuilding the statistics at the end of the roll-up process (SAP
Note 555030).
h) In certain cases in BW 3.x, aggregate compression can also be deactivated. If
the requests in the aggregates are not compressed regularly, the time used for
deleting and rebuilding the indices can be significant (SAP Note 582529).
Concern -
a) The roll-up is not possible while the change run is running.
Relevant SAP Notes - 555030, 582529
14. InfoCube Data Load - Change Run
Point - The change run adapts all aggregates for newly loaded master data and
hierarchies. During the change run, all aggregates containing navigational attributes
and/or hierarchies are realigned. Newly loaded master data and hierarchies are not
available before they are activated via the change run.
Advantage - Optimized change run performance, leading to better upload performance.
Indicator (Range, Limits etc) -
Tool Needed -
Action -
a) Like the roll-up, the change run runtime is significantly better when the
aggregates are related to each other in a good aggregate hierarchy.
b) Apply the change run ONCE for all newly loaded master data for a given period.
Do not schedule it for the load of every master data object.
c) The change run can be improved analogously to the aggregate roll-up (SAP Note
176606).
d) Try to build basis aggregates (rather large aggregates filled directly out of the
InfoCube) without master data references (i.e., without navigational
attributes/hierarchies). Then there is no need to adjust these aggregates when
master data changes.
e) In the customizing, you can define a threshold (percentage of changed master
data) that decides between delta aggregation and aggregate rebuild. Meaning:
unless the threshold is reached, delta records are created for all changes to the
aggregate and posted to the aggregate as additional records. If the threshold is
exceeded, the aggregates are dropped and rebuilt from scratch. Test results
have led to the recommendation to set the threshold parameter to approximately
30% (READ the Concern section before applying this).
f) Use the parallel change run if your hardware resources are sufficient and if your
aggregates are distributed among several InfoCubes (SAP Note 534630).
g) DB statistics for aggregates are created at regular points. This is necessary when
queries are executed during the change run. Under certain circumstances (no
queries during the change run), the change run in BW 3.x can be improved by
forgoing the DB statistics run and rebuilding the statistics at the end of the change
run (SAP Note 555030).
h) If complex master data with lots of navigational attributes is used, the activation
of master data (and, hence, the change run) can take longer. Complex SQL
statements can be split into several less complex statements to improve the
activation of master data (SAP Note 536223).
i) The change run process can be monitored by means of program
RSDDS_CHANGERUN_MONITOR. This can only be used while the change run
is actually running. Use the change run monitor to check which aggregates must
be adjusted and which have already been adjusted (SAP Note 388069).
Concern - If the InfoCube contains non-cumulative key figures with exception
aggregation (MIN, MAX), all aggregates are rebuilt (delta aggregation is not possible).
Relevant SAP Notes - 176606, 619471, 534630, 555030, 536223, 388069
15. InfoCube Data Load - Request Handling
Point - The frequency of data loads determines the number of requests in the InfoCube.
Many requests in an InfoCube result in an administration overhead, as all requests and
their interdependencies must be checked for every data load and when accessing the
InfoCube administration. Moreover, for hash-partitioned InfoCubes a DB partition is
created for every request. Partition handling for several thousand partitions usually
impacts DB performance.
Advantage - Better performance.
Indicator (Range, Limits etc) - Whether the number of requests in an InfoCube
approaches 10,000.
Tool Needed - Access to the relevant InfoCube.
Action -
a) Keep the number of requests in mind when designing operational reporting with
very high data load frequencies.
b) If using hash partitioning, compress the InfoCube regularly to avoid too many
partitions in the F fact table.
c) If the data load frequency is rather high and there are likely to be more than
10,000 requests in one InfoCube, apply SAP Note 620361 (Performance Data
Load / administration data target, many requests) to avoid performance issues
with the InfoCube administration and data load.
Concern -
Relevant SAP Notes - 620361
16. ODS Data Load - ODS Object Data Activation
Point - ODS object data has to be activated before it can be used (e.g. for reporting, for
filling the change log, etc.).
Advantage - Better ODS data activation performance.
Indicator (Range, Limits etc) - Check whether parallel processes are being used.
Tool Needed -
Action -
a) In BW 3.x, data activation can be processed in parallel and distributed to server
groups in the BW customizing (transaction RSCUSTA2). There are parameters
for the maximum number of parallel activation dialog processes, the minimum
number of records per package, the maximum wait time in seconds for ODS
activation (if there is no response from the scheduled parallel activation
processes within this time, the overall status is set to red) and a server group for
the RFC calls when activating ODS data.
b) Use parallelism wherever possible and wherever your system landscape allows
it. Parallel ODS object data activation (BW 3.x) can improve activation time
significantly.
c) To improve deletion, manually partition the active-data table on DB level (see
partitioning as a database-dependent feature). Partitions can be deleted very
quickly via a DROP PARTITION statement (instead of DELETE FROM).
Concern - Parallel processes need more hardware resources. Make sure that the
hardware capacity can cope with the number of parallel processes.
Relevant SAP Notes -
17. ODS Data Load - Data Activation Performance and Flag "BEx Reporting"
Point - If you set the flag "BEx Reporting", SIDs instead of characteristic key values are
stored; this improves the reporting flexibility but slows down the upload. In BW 3.x, the
SIDs are determined during activation, in BW 2.x during the data load. Note that in BW
3.x, the SIDs (for BEx Reporting enabled ODS objects) are created per package; hence,
multiple packages are handled in parallel by separate dialog processes.
Advantage - Better ODS data activation / loading performance.
Indicator (Range, Limits etc) - Number of queries on the ODS object versus the status
of the BEx Reporting flag; check whether there is a mismatch.
Tool Needed - Access to the relevant ODS definition and queries, if any.
Action -
a) Do not set the BEx Reporting indicator if the ODS object is only used as a data
store. Otherwise, SIDs are created for all new characteristic values once this
indicator is set.
b) If the reporting requirements on ODS objects are very restricted (e.g., display
only very few, selective records), use InfoSets on top of the ODS objects and
disable the BEx Reporting flag.
Concern -
Relevant SAP Notes - 565725, 384023
18. ODS Data Load - Unique Records
Point - For unique record keys (e.g., sales documents), the option "unique data records"
can be set in BW 3.x.
Advantage - If the BW 3.x ODS object has unique record keys, the activation process
need not perform an update statement (normally the process tries to update an existing
record first; only if this update fails - i.e., there is no record with the same key - is the
record inserted); instead, it can directly insert the new record.
Indicator (Range, Limits etc) - Check whether unique record keys are active, if
applicable.
Tool Needed - Access to the relevant ODS object.
Action - If only unique data records (that is, data records with a one-time key
combination) are loaded into the ODS object, the load performance improves when the
'Unique data record' indicator is set in the ODS object maintenance.
Concern - If "unique data records" is turned on, BW cannot guarantee unique records;
this must be guaranteed from outside the BW system.
Relevant SAP Notes - 565725
19. ODS Data Load - Request Handling
Point - The frequency of data loads determines the number of requests in the ODS
objects.
Advantage - Many requests in an ODS object result in an administration overhead, as all
requests and their interdependencies must be checked for every data load and when
accessing the ODS object administration. Handling this improves ODS data load
performance.
Indicator (Range, Limits etc) - Check the likelihood of the number of requests
exceeding 10,000.
Tool Needed -
Action -
a) Keep the number of requests in mind when designing operational reporting with
very high data load frequencies.
b) If the data load frequency is rather high and there are likely to be more than
10,000 requests in one ODS object, apply SAP Note 620361 (Performance Data
Load / administration data target, many requests) to avoid performance issues
with the ODS object administration and data load.
Concern - If the data load takes place when there is little other activity in the BW
system, then optimal results are likely to be observed if the number of (roughly equally-
sized) InfoPackages is equivalent to the number of CPUs in the application servers in
the logon group handling the data load.
Relevant SAP Notes - 620361
D. QUERY PERFORMANCE
1. Query Definition
Point -
a) Queries can be defined centrally or individually. The ad-hoc query designer also
allows web front-end users to define individual queries. When a query is new, it
has to be generated during the first call. This time is contained in the OLAP init
time.
b) Queries can be defined on top of all InfoProviders - including InfoCubes,
RemoteCubes, MultiProviders, ODS objects etc. - with varying performance
characteristics.
c) In the past, the common requirement of end users was pure reporting, i.e. having
a large list of figures representing all necessary information within one list.
Powerful multi-dimensional query capabilities can make this time-consuming
searching and the special expert knowledge redundant by pinpointing the
necessary information for basically all types of end users and letting the user
navigate to find further details.
d) Restricted key figures/selections/filters allow the definition of key figures for
specific characteristic values.
e) The calculation of non-cumulative key figures takes a long time if the reference
point must be calculated (i.e. the InfoCube is not compressed) AND,
subsequently, the values have to be calculated.
f) Queries on MultiProviders usually access all underlying basis InfoProviders.
g) The Cell Editor makes it possible to overwrite certain cells with the results of
specific queries. For every cell that is defined, a new query is started.
Advantage -
a) Centrally managed queries need not be regenerated, thus keeping the OLAP init
time low.
b) Knowing the performance characteristics and limitations of queries helps in
defining them better.
c) Letting the user navigate to his required level of detail reduces the effort of
retrieving details the user does not need.
d) When using restricted key figures, using inclusion rather than exclusion
improves performance.
e) Regular InfoCube compression and tight restrictions on time characteristics
improve performance by reducing the effort of reference point calculations.
f) Queries on MultiProviders usually access all underlying basis InfoProviders.
Selecting InfoProviders based on the key figures of the query restricts the access
to the other basis InfoProviders, leading to better performance.
g) Too many cell calculations hamper query performance.
Indicator (Range, Limits etc) -
Tool Needed -
Action -
a) Define queries centrally.
b) Keep in mind only InfoCubes (and MultiProviders on top of them) are optimized
in terms of query performance. Remote Cubes and Virtual InfoProviders with
services access data sources in a remote system, which adds additional network
time to the total runtime. ODS objects and InfoSets should be reported on only
very selectively, i.e. only some very specific records should be read, and only
little aggregation and little navigation should be required.
c) Incorporate multi-dimensional reporting into all your queries. Start with a small
set of data and allow the users to drill down to get more specific information. In
general, keep the number of cells that are transferred to the front-end small.
d) It is better to use inclusion of characteristic values instead of exclusion.
e) Be sure that the InfoCube is compressed while using non-cumulative key figures.
At query time, use tight restrictions on time characteristics; ideally request only
current values. If possible, split non-cumulative key figures with last or first
aggregation (aggregates can be used) from those with average aggregation
(aggregates cannot be used) into different queries. Suppress sum lines if not
needed. Do not use partial time characteristics (e.g. FISCPER3) if not needed.
f) If a query on a MultiProvider reads only selective key figures and some
InfoProviders do not contain any of these key figures, use 0INFOPROVIDER
manually to include only the required basis InfoProviders.
g) Be cautious with too many cell calculations. Keep in mind that every cell that is
defined in the cell editor causes a load comparable to a normal query.
h) For all calculations that have to be performed before aggregation (like currency
conversion), consider whether they could be done during the data load.
Concern - Do not use ODS objects for multi-dimensional queries. Instead, include ODS
objects in drill-down paths to access very few detailed records.
Relevant SAP Notes - 189150
2. Virtual Key Figures / Characteristics
Point - Key figures and characteristics that are not available in the InfoProvider can be
included in queries. These are called virtual key figures/characteristics, and their source
is coded in a customer exit.
Advantage - Optimized customer exit code does not consume unnecessary resources at
query runtime.
Indicator (Range, Limits etc) -
Tool Needed -
Action - Please refer to the Customer Enhancements section under Extraction.
Concern -
Relevant SAP Notes -
3. Query Read Mode
Point - The read mode determines how the OLAP processor gets data during navigation.
One can set the mode in the customizing for an InfoProvider and in the Query Monitor for
a query. The read mode defines whether a query reads only the data necessary for the
first navigation step (and reads the database for every additional navigation step) or
whether it reads everything in one go (and does not access the database for any
additional navigation steps). It can also be set whether hierarchies are read in one go.
Advantage - A proper setting here can optimize the database access and data retrieval.
Indicator (Range, Limits etc) - Check the read mode of the query and its
appropriateness.
Tool Needed -
Action -
a) For most queries it is reasonable to load only the number of records from the DB
that is really required. So generally set queries to "read when navigating and
expanding hierarchies" (the default setting).
b) Only choose a different read mode in exceptional circumstances. The read mode
"Query to Read All Data At Once" may be of use in the following cases:
- The InfoProvider does not support selection, so the OLAP processor reads
significantly more data than the query needs anyway.
- A user exit is active in a query, which prevents data from already being
aggregated in the database.
Concern -
Relevant SAP Notes -
4. Reporting Format
Point - For formatting Excel cells, formatting information must be transferred. If the cell
format changes often (e.g., too many result lines), it might be reasonable in terms of
performance to switch the formatting off.
Advantage - After switching it off, formatting information is not transferred, so the time
consumed in the front-end can be reduced.
Indicator (Range, Limits etc) -
Tool Needed -
Action - If a query has significantly high front-end time, check whether the formatting is
the reason. If so, either switch it off or reduce the number of result lines.
Concern -
Relevant SAP Notes -
5. Indices
Point - Indices speed up accesses to individual records and groups of records
significantly.
Advantage - Indices speed up data retrieval for a given selection tremendously. If no
indices are available, full table scans have to be performed.
Indicator (Range, Limits etc) -
Tool Needed - Access to InfoCube maintenance, DB02, RSRV, SE11, ST05.
Action -
a) For querying data, the necessary indices must be available and up-to-date. Make
sure that the indices are available on the reporting InfoCube.
b) The status of the indices can be checked in the InfoCube maintenance and also
in transactions DB02 and RSRV.
c) ODS objects need to be indexed manually, depending on the reporting
requirements. This can be done in the ODS object maintenance.
d) If you report on ODS objects and the performance is bad due to selections on
non-indexed characteristics, build secondary indices.
e) Master data indexing:
- S (SID) table (consists of the master data key, the SID and some flags):
i. Unique B-tree index on the master data key (= primary key).
ii. Unique B-tree index on the SID.
- X/Y tables (consist of the SID of the characteristic, the object version and
the SIDs of all navigational attributes):
i. Unique B-tree index on <SID, object version> (= primary key).
- Additional master data indexing on the X and Y tables can be useful.
These indices have to be created in transaction SE11.
- If accesses on master data (navigational attributes) are slow, check the
execution plan (ST05). If the plan is not optimal for the access to the
navigational attributes, create indices on the X (time-independent) or Y
table (time-dependent) of the respective characteristic. Check whether the
indices improve performance; if not, delete them again.
Concern -
a) Indices have to be updated during the data load; thus, they decrease data load
performance. Indices should therefore be designed carefully.
b) Indices on master data can slow down the data load process. Delete seldom-
used indices.
Relevant SAP Notes - 402469, 383325
6. Compression
Point - Compression transfers the data from the F fact table to the E fact table while
eliminating the request information in the InfoCube. It aggregates records with equal keys
from different requests.
Advantage -
a) After compression the InfoCube content is likely to be reduced in size, so the DB
time of queries should improve.
b) The reference point of non-cumulative key figures is updated when the InfoCube
is compressed. This reduces the OLAP time of queries on these InfoCubes.
c) Customized partition settings (0CALMONTH or 0FISCPER) are only valid in the
compressed E table. Compression prevents the F table (which is partitioned by
request ID) from containing too many partitions. The DB-internal administration of
some thousands of partitions (as a rule of thumb) decreases the overall
performance for an InfoCube.
Indicator (Range, Limits etc) - Non-compressed requests.
Tool Needed -
Action - Compress those requests in the InfoCube that are not likely to be deleted, as
soon as possible. Also compress aggregates as soon as possible.
Concern -
a) Compression only aids query performance. It adds another step to the overall
data load process and takes up resources.
b) Individual requests cannot be accessed and deleted any more after compression.
c) If requests that have been deleted are rolled up and compressed in the
aggregates, these aggregates have to be dropped and rebuilt completely. This
can take a long time, depending on the size of the aggregate and the associated
fact table.
d) For DBMSs supporting range partitioning, the E fact table is optimized for
reporting with respect to database partitioning, and the F fact table is optimized
for data load and deletion.
Relevant SAP Notes -
7. Aggregates
Point -
a) Aggregates are materialized, pre-aggregated views on InfoCube fact table data.
They are independent structures where summary data is stored in separate
transparent InfoCubes. The purpose of aggregates is purely to accelerate the
response time of queries by reducing the amount of data that must be read in the
database for a given query navigation step. In the best case, the records
presented in the report exactly match the records that were read from the
database.
b) Aggregates can only be defined on basic InfoCubes for dimension
characteristics, navigational attributes (time-dependent and time-independent)
and on hierarchy levels (for time-dependent and time-independent hierarchy
structures). Aggregates cannot be created on ODS objects, MultiProviders or
RemoteCubes.
c) Queries may be automatically split up into several sub-queries, e.g. for individual
restricted key figures (restricted key figures "sales 2001" and "sales 2002"). Each
sub-query can use one aggregate; hence, one query can involve several
aggregates.
d) If an aggregate has fewer than 15 components, BW 3.x puts each component
automatically into a separate dimension that is marked as a line item (except for
the package and unit dimensions); these aggregates are called flat aggregates.
Hence, dimension tables are omitted and SID tables are referenced directly. Flat
aggregates can be rolled up on the DB server (i.e., without loading data into the
application server). This accelerates the roll-up (and hence the upload) process.
Advantage - The purpose of aggregates is to accelerate the response time of queries by
reducing the amount of data that must be read in the database for a given query
navigation step, hence improving the query performance.
Indicator (Range, Limits etc) - Summarization ratio, i.e. more than 10 times more
records read than displayed, and percentage of DB time, i.e. the time spent on the
database is a substantial part of the query runtime (> 30%).
Tool Needed - Transaction ST03, RSRT, table RSDDSTAT.
Action -
a) Define aggregates for queries that have high database read times and return far
fewer records than read.
b) SAP BW can propose aggregates on the basis of queries that have been
executed. Do not activate all automatically proposed aggregates without taking
into account the costs of roll-up/change run.
c) There are some restrictions for aggregates on InfoCubes containing key figures
with exception aggregation. The characteristic for the exception aggregation
must be contained in all aggregates.
d) Try to create aggregates that are relatively small compared to the parent
InfoCube.
e) Aim for a summarization ratio of 10 or higher.
f) Build aggregates on selected hierarchy levels, not on all levels.
g) Aggregates should be neither too specific nor too general, i.e. they should serve
many different query navigations.
h) Check the validity of existing aggregates regularly; drop the ones that are not
being used and create relevant ones.
Concern -
a) The more aggregates exist, the more time-consuming the roll-up process and
thus the data loading process becomes; the change run is also affected.
b) Adjustment of aggregates on time-dependent master data can be very
expensive.
c) If you use elimination of internal business volume in SAP BW 3.x, aggregates
must contain the two depending characteristics (sender and receiver) in order to
be used in these queries.
d) InfoCubes containing non-cumulative key figures with exception aggregation
(MIN, MAX) cannot use the delta change run: all aggregates are rebuilt. Please
see the section Change Run as well.
Relevant SAP Notes - 544521, 125681, 166433
8. Aggregate Block Size
Point - Data of large InfoCubes is read in several blocks (rather than in one large block
that needs more memory / temporary tablespace than available). The block size is
defined system-wide in the customizing; blocks are distinguished by characteristic values.
The aggregate block size has nothing to do with the DB block size; instead it acts like a
periodic commit in the aggregate build process. This allows the aggregate build process
to complete under memory constraints.
Advantage - When building aggregates from large InfoCubes, the block size limits the
resource consumption and thus prevents resource bottlenecks.
Indicator (Range, Limits etc) -
Tool Needed - The setting can be made at BW Customizing Implementation Guide >
Business Information Warehouse > General BW Settings > Parameters for aggregates.
Action - Define the required block size with the program RSDDK_BLOCKSIZE_SET. In
this case, the block size is the number of records that you want to process at the same
time. On the one hand, the number of records should not be too low, so that too many
read accesses are not executed; otherwise the runtime is extended. On the other hand,
the number of records must not be too high, otherwise the resource limits will quickly be
reached. A block size of 10,000,000 records is preset.
Concern - While building aggregates from large InfoCubes, use the block size in order to
limit the resource consumption and, thus, to prevent resource bottlenecks. But keep in
mind that this may be detrimental to the overall build time.
Relevant SAP Notes - 484536
9. OLAP Engine - ODS Objects
Point - Reporting on ODS objects is usually used in drill-down paths to retrieve a few
detailed records.
Advantage - Accessing selective records usually requires (secondary) indices to avoid
full table scans, i.e. to enhance query performance. Secondary indices accelerate
selective reading from an ODS object. This also improves the update from the ODS
object.
Indicator (Range, Limits etc) -
Tool Needed - SQL Trace
Action -
a) Selection criteria should be used for queries on ODS objects. The existing
primary index is used if the key fields are specified. Therefore, the characteristic
that is accessed most frequently should be left-justified (first in the key).
b) If the key fields are only partially specified in the selection criteria (recognizable
in the SQL trace), the query runtime may be optimized by creating additional
indexes. These secondary indexes can be created in the ODS object
maintenance.
Concern - Indexing speeds up querying but slows down data activation.
Relevant SAP Notes - 384023, 565725
10. OLAP Engine - MultiProviders
Point - MultiProviders are used transparently for reporting and serve as a means of
logical partitioning. Queries on MultiProviders are by default split into parallel (sub-)
queries on the basis InfoProviders and united at a defined synchronization point. One can
manually switch all queries for a MultiProvider to serial processing.
Advantage - The advantages of MultiProviders are:
a) Local queries (on each InfoProvider) vs. global queries (on the MultiProvider,
with parallel execution).
b) Independent (and parallel) data loads into the individual InfoProviders.
c) Smaller total data volumes: less redundancy, less sparsely filled and less
complex.
Indicator (Range, Limits etc) -
Tool Needed -
Action -
a) If, in a very rare case, a (parallel) query on a MultiProvider takes up too many
memory resources on the application server, switch to serial query processing. In
BW 3.x, the BW system determines whether a query should be executed in
parallel or sequentially (SAP Note 607164).
b) If you use MultiProviders for logical partitioning of homogeneous InfoProviders,
be sure to define constants in the individual InfoProviders to avoid unnecessary
accesses. As a general rule of thumb, the recommendation is up to 10 sub-
InfoProviders assigned to a MultiProvider and accessed simultaneously for one
query; with more than this, the overhead of combining the results might get too
big. (Of course, depending on system resources and configuration, more than 10
sub-InfoProviders can still give good performance.)
c) Please check the Parallel Processing section as well.
Concern -
a) In BW 2.x, parallel queries on MultiProviders can only use one aggregate for the
whole MultiProvider query.
b) If a MultiProvider contains at least one basis InfoProvider with non-cumulative
key figures, all queries are processed sequentially.
Relevant SAP Notes - 607164, 622841, 449477, 327876, 629541
11. OLAP Cache
Point -
a) The OLAP Cache can help with most query performance issues. For frequently
used queries, the first access fills the OLAP Cache and all subsequent calls hit
the OLAP Cache and do not have to read the database tables. In addition to this
pure caching functionality, the Cache can also be used to optimize specific
queries and drill-down paths by warming up the Cache; this fills the Cache in
batch to improve all accesses to this query data substantially.
b) In general, the OLAP Cache can buffer results from queries and can provide
them again for different users and similar queries. It can re-use the cached data
for the same query call with the same selection parameters or real subsets of
them. The subset principle only works if the respective characteristics are part of
the drill-down.
c) The OLAP Cache stores the query results with their navigation statuses in the
memory of the application server; alternatively, the data can also be stored in
database tables and files. When the buffer (export/import shared memory)
overruns, it stores the displaced data - depending on the persistence mode - on
the database server. There are five Cache modes:
i. Cache inactive
ii. Memory Cache without swapping
iii. Memory Cache with swapping
iv. Cluster/flat file Cache per application server
v. Cluster/flat file Cache cross-application server
d) With the last option, dedicated queries can be stored on the database server
(table or file), i.e. independent of an application server. Note that the storage
format of Cache entries is highly compressed.
e) One can define the size of the OLAP Cache in memory (only for the two memory
Cache modes) and the persistence mode (i.e. displaced page entries are sourced
out to a file/DB table), and one can turn the Cache on/off globally, per InfoProvider
and per query; the InfoProvider setting defines the default value for new queries;
after switching it off, all existing queries still use the Cache. Note: the actual
memory requirements are usually lower than the total size of (uncompressed)
runtime objects due to compression.
f) The OLAP Cache for queries on a specific InfoCube is invalidated when new
data is uploaded into the respective InfoProvider (invalidations also occur for
master data/hierarchy change runs, currency conversion and more). Hence, the
data displayed is always up-to-date and consistent with the original data in the
InfoProvider. The OLAP Cache for a specific query is also invalidated when the
query is re-activated.
g) The OLAP Cache is also used for queries on transactional InfoCubes and for
pre-calculated web templates.
Advantage - Optimized query performance.
The figure below, "Query Execution - Order of the Data Availability Check", explains how
and in which order data availability is checked during query execution.
a) The first thing when running a query or a navigation step is that the system
checks whether the data is still available in the local cache. For example, if a
very large query is then restricted to a fixed value within that query, the local
cache is used because that data is already available, and the system just has to
restrict that data to produce the sub-selection for the new navigation step. That
data is taken from the local OLAP cache.
b) If the data is not available there, the next step is to check whether it is available
in the global OLAP cache: has this query already been run by some other user
or in some other session with those selection criteria? If yes, the data is retrieved
from there.
c) If not, then - in the case of an InfoCube as the InfoProvider - the system checks
whether aggregates are available for this query. If not, it reads from the
InfoProvider itself on the database, in this example a basic InfoCube.
[Figure: Query Execution - Order of the Data Availability Check. Front-end: Query
Definition (BEx) -> Analyzer Display and Manipulation -> Current Query View -> Local
OLAP Cache (step 1). BW Server: OLAP Processor -> Global OLAP Cache (step 2) ->
Aggregates on Database, if applicable (step 3) -> InfoProvider on Database (step 4).]
Indicator (Range, Limits etc) - Frequency of query execution, DB time < 0.10 sec,
frequency of data loads, frequency of query changes / creation.
Tool Needed - Transaction RSRT, RSRTRACE.
Action -
a) The OLAP Cache should be used whenever possible. However, for queries that
are executed only once (or very rarely) or for very small queries (e.g., DB time <
0.10 sec), the cache overhead can lead to slightly worse performance.
b) Check whether the shared memory size (rsdb/esm/buffersize_kb) is bigger than
the OLAP Cache size. By default, the shared memory size is very small. Start
with an initial size of 100 MB.
c) The Reporting Agent or the OLAP trace can be used to warm up the OLAP
Cache, even though this is not the primary purpose of the Reporting Agent.
d) To use the OLAP cache with virtual characteristics/key figures, first make sure
that the data is ALWAYS consistent, i.e. that the customer exit is not using
frequently updated tables. Please note that the Cache is not invalidated if the
data in these tables changes. Please refer to SAP Note 623768 (Extended
Cache Functionality).
e) To start with, use the cache in memory (global cache 200 MB, export/import
shared memory 100 MB) with persistence (swapping) to file or cluster tables.
Keep reviewing the use of the cache and make adjustments later.
f) Follow the table below for the cache setting dependencies:
[Table: Cache Setting vs. Aspect - rates the five cache settings (Cache inactive; Memory
cache without swap; Memory cache with swap; Cluster/file cache for each application
server; Cluster/file cache across application servers) on a scale from "- - -" to "+ + +"
against the aspects: Frequent changes of data; High amount of active users and query
navigations; Low query performance (without caching); User groups assigned to
application servers; (Many) Ad-hoc queries; High load on InfoProvider database tables;
High amount of different queries; Large result set in queries; Slow I/O.]
Concern -
a) If all front-end users are authorized to create queries, note that the OLAP Cache
for a particular query is invalidated each time the query is changed and re-
activated, and it might eventually be useless.
b) Queries containing virtual characteristics/key figures do not use the OLAP Cache
by default. As the OLAP Cache cannot control the other database tables that
might be read within the customer exits - and hence cannot invalidate the cache
if there are any changes in those tables - this default setting guarantees data
consistency.
Relevant SAP Notes - 623768, 456068
12. Hierarchies
Point The Hierarchy Table Buffer buffers hierarchy node calculations that have already
been calculated. A separate table is created for each hierarchy node, and is managed by
an entry in buffer table RSDRHLRUBUFFER. If queries are used in a system on many
different hierarchies or very large hierarchies with different nodes, entries in the hierarchy
buffer may be frequently displaced. As a result, hierarchy nodes frequently have to be
recalculated.
Advantage Queries with several or large hierarchies might be optimized by reducing
hierarchy node recalculation.
Indicator (Range, Limits etc)
a) Many existing buffer entries are in constant use. Check this as follows (a small
ABAP sketch follows this list):
i. In transaction SE16, display table RSDRHLRUBUFFER without
restricting the number of hits.
ii. Sort the entries by timestamp.
iii. If more than 90% of the entries show a timestamp with today's or
yesterday's date, and the table contains more than 170 entries, it may
be useful to increase the buffer size.
b) Entries with long queue wait times appear in the system log within a short period
of time. These entries specify the total length of time that a work process has
waited for queue locks since it was started, so take note of when the system
was last started. Wait times of 3600 s or more are not a cause for concern in
systems that have been running for months.
c) Hierarchies are continually being recalculated.
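A minimal ABAP sketch of the check in a). The report name and the timestamp field
name TIMESTMP are assumptions - verify the actual fields of RSDRHLRUBUFFER in
SE11 before use:

* Sketch: does the hierarchy buffer qualify for a size increase?
* ASSUMPTION: RSDRHLRUBUFFER carries a timestamp field TIMESTMP.
REPORT z_hier_buffer_check.

DATA: lv_now    TYPE timestamp,
      lv_cutoff TYPE timestamp,
      lv_total  TYPE i,
      lv_recent TYPE i,
      lv_pct    TYPE i.

GET TIME STAMP FIELD lv_now.
* Entries from today or yesterday ~ younger than 48 hours.
lv_cutoff = cl_abap_tstmp=>subtractsecs( tstmp = lv_now
                                         secs  = 172800 ).

SELECT COUNT( * ) FROM rsdrhlrubuffer INTO lv_total.
SELECT COUNT( * ) FROM rsdrhlrubuffer INTO lv_recent
  WHERE timestmp >= lv_cutoff.

IF lv_total > 0.
  lv_pct = lv_recent * 100 / lv_total.
ENDIF.

* Rule of thumb from indicator a): > 170 entries, > 90% in recent use.
IF lv_total > 170 AND lv_pct > 90.
  WRITE: / 'Consider increasing the hierarchy buffer (SAP Note 584216).'.
ELSE.
  WRITE: / 'Hierarchy buffer size appears sufficient.'.
ENDIF.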
Tool Needed Access to table RSDRHLRUBUFFER & relevant logs.
Action Change the size of the hierarchy buffer.
Concern
Relevant SAP Notes 584216
13. Reporting Authorizations
Point The reporting authorizations can restrict critical data to certain users/user groups.
On a high level, InfoCubes and queries can be restricted; for more detailed
authorizations, also characteristics, characteristic values, hierarchy levels and key figures
can be restricted. From a performance point of view, authorizations either restrict the
data that has to be read or cause unauthorized requests to be aborted directly.
Advantage A deliberate tradeoff between performance and security.
Indicator (Range, Limits etc)
Tool Needed
Action
a) In terms of performance, it is better to use higher level authorizations. In some
cases, it could be helpful to define separate queries or InfoCubes for different
authorization groups.
b) Strike a proper balance between performance and security (see the pointer below).
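As a pointer for maintaining these authorizations (transaction code as commonly
documented for BW 3.x - verify on your release): reporting authorization objects and
their checks are maintained with transaction RSSM.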
Concern
a) Complex authorizations, especially on hierarchy levels, can slow down query
processing, as large amounts of information must be read and checked.
b) In complex HR scenarios, or in a multidivisional international corporation where only
some users may see global views and others cannot, authorizations are inevitable
- but keep in mind that the level of authorization detail can impact performance.
Relevant SAP Notes
E. WEB APPLICATION PERFORMANCE
1. Reporting Agent - Pre-Calculated Web Templates
Point The Reporting Agent allows, among other features, the pre-calculation of web
templates. Pre-calculation is a set of techniques to shift the workload of running the
report to off-peak hours and to have the report result set ready for very fast access to the
data.
a) The following output formats are available:
Data - only the data is pre-calculated.
HTML for Web Browser / HTML for Pocket IE.
Excel.
b) The following access modes are available (see the URL sketch after this list):
NEW (default) - uses current data.
STORED - uses pre-calculated data (if no pre-calculated data is available,
this results in an error).
HYBRID - uses pre-calculated data if available; if not, it uses current
data.
STATIC - uses pre-calculated HTML pages (if no pre-calculated pages
are available, this results in an error).
STATIC_HYBRID - uses pre-calculated HTML pages if available; if not,
it falls back to HYBRID access mode.
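For illustration, the access mode can be selected per call via the DATA_MODE
parameter of the BW 3.x web application URL; server, port and the template name
ZSALES_OVERVIEW below are placeholders:

http://<bwserver>:<port>/sap/bw/BEx?CMD=LDOC&TEMPLATE_ID=ZSALES_OVERVIEW&DATA_MODE=STATIC_HYBRID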
Advantage Pre-calculation of web templates reduces server load significantly and
provides faster data access; data that is used by many web applications is re-used.
Indicator (Range, Limits etc)
Tool Needed
Action
a) If some of the end users are pure information consumers, i.e., require only static
information, providing pre-calculated web templates to them is the right solution.
This skips the complete query processing time. Rendering time will still be
needed unless templates are downloaded.
b) Pre-calculated web templates can also be useful for:
Web reports accessed by many users.
Web reports that are static and involve limited navigation.
Web reports that should be made available offline.
Concern
a) The only navigation that is possible within pre-calculated web templates is
filtering through drop-down list boxes. In this case, pre-calculation needs to take
place via control queries.
b) Navigation such as drill-down and slice-and-dice within pre-calculated web
templates is not possible by default. Navigation has to be activated by loading
up-to-date data; this can be done by linking the URL of the active query to a
self-defined button.
Relevant SAP Notes 594372, 510931
2. Web Application Definition - Web Items
Point Web items define the display and navigation tools that are provided within the
web application. They obtain data from the InfoProviders and make it available as
HTML. The number and type of web items can influence the performance of web
applications.
Advantage Proper selection of web items and their relevant settings will lead to better
performance.
Indicator (Range, Limits etc)
Tool Needed
Action
a) For web items with filtering functionality (dropdown boxes, hierarchical dropdown
menus, radio buttons), there are three read modes that affect performance:
Dimension tables (all data that is in the InfoCube),
Master data tables,
Booked values (including authority checks).
b) For web items with filtering functionality, the read mode dimension tables is usually
the best option with respect to performance. Check that this setting fits the functional
requirements before deciding; a sketch of the template definition follows below.
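As a sketch of where such a setting lives in a BW 3.x web template. The item name,
InfoObject and the BOOKED_VALUES parameter value are assumptions - check the
dropdown item's attribute list in the Web Application Designer for the exact read-mode
parameter:

<object>
  <param name="OWNER" value="SAP_BW"/>
  <param name="CMD" value="GET_ITEM"/>
  <param name="NAME" value="DROPDOWNBOX_1"/>
  <param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_FILTER_DDOWN"/>
  <param name="DATA_PROVIDER" value="DATAPROVIDER_1"/>
  <param name="IOBJNM" value="0MATERIAL"/>
  <!-- ASSUMPTION: read-mode parameter name/value; verify in WAD -->
  <param name="BOOKED_VALUES" value="X"/>
  ITEM: DROPDOWNBOX_1
</object>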
Concern
Relevant SAP Notes
3. Web Application Definition - Stateless / Stateful Connection
Point The web templates properties define the connection type of the web applications
to the BW server. The connection can be established for the entire period of navigation
(Stateful) or it can be built up every time a new navigation step takes place (Stateless). In
the latter case, the status information of the current navigation step must be transferred.
Advantage Stateful connections are faster with respect to navigation performance, but
they must allocate working memory for every connection for the whole navigation period.
Stateless connections, on the other hand, can reduce hardware requirements with the
tradeoff of slower navigation performance. Selecting the proper connection type leads to
optimized hardware usage and web application performance.
Indicator (Range, Limits etc)
Tool Needed
Action Select the proper connection type, keeping in mind the web application's
navigation requirements and usage type.
Concern
Relevant SAP Notes
4. Web Application Definition - HTTP / HTTPS
Point Transferred data can be encrypted to provide higher security from unintended
data access.
Advantage Measurements have shown that there is no measurable performance
difference between the two protocols (HTTP / HTTPS).
Indicator (Range, Limits etc)
Tool Needed
Action For secure data transfer, use the HTTPS protocol rather than HTTP.
Concern
Relevant SAP Notes
5. Caching / Compression - Portal iView Cache
Point When running in SAP Enterprise Portal, the Portal iView Cache can be used to
cache certain iViews on the Portal server.
Advantage The iView Cache is another layer in addition to aggregates, OLAP Cache
and pre-calculation; it brings cached data nearer to the front-end and accelerates
response times significantly.
Indicator (Range, Limits etc)
Tool Needed
Action While activating Portal iView Cache consider that the use case for the Portal
Cache is typically the information consumer who wants to browse over several pages
very quickly and expects them to be pre-retrieved in the cache.
Concern
a) Portal iView cache invalidation can be defined for a time period (e.g. every x
hours). If new data is loaded into BW, the portal cache is not invalidated. Be
sure that the portal pages stored in the iView cache contain exactly the data you
are expecting.
b) Be careful when using a shared cache together with personalization: data in the
cache can be accessed regardless of authorizations and personalization.
Relevant SAP Notes 599270, 567746
6. Caching / Compression - Compressing Web Applications and using Browser Cache
Point The transferred data (including MIME types like images, Cascaded Style Sheets
and Java Scripts) can be compressed to reduce the network load, and the Browser
Cache (on the front-end PC) can be used to cache image/gif files.
Advantage
a) The number of transferred bytes is reduced and, particularly in WANs, the overall
query performance can be significantly improved.
b) The number of protocol roundtrips can be reduced to the number necessary to
transfer the data itself.
Indicator (Range, Limits etc)
Tool Needed
Action
a) The Browser Cache is automatically active. The Cache will be noticeable after
the first query execution; then non-dynamic data like images are in the cache and
then another call of a query deploys the cache (on the same front-end PC).
b) Profile parameter icm/HTTP/server_cache_0/expiration defines how long the
information resides in the browser cache.
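A minimal instance profile sketch (the value below is illustrative and given in seconds):

# browser/ICM cache lifetime for BW web content (illustrative value)
icm/HTTP/server_cache_0/expiration = 86400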
Concern
a) HTTP compression should be switched on automatically as of BW 3.0B SP9.
b) Check that HTTP 1.1 is activated in the browser settings.
Relevant SAP Notes 550669, 553084, 561792
7. Network - Front-End implications on Network Load
Point The communication between application server and front-end can be expensive,
especially in WAN environments.
Advantage The Excel-based BEx Analyzer and the web front-end cause different
network traffic in terms of transferred data and protocol roundtrips. The number of
roundtrips in browser-based reporting using HTTP is significantly lower than with the
BEx Analyzer, resulting in better query performance.
Indicator (Range, Limits etc)
Tool Needed
Action
a) Particularly in a WAN, use the web front-end instead of the Excel-based BEx
Analyzer, as the number of roundtrips is significantly reduced.
b) Several compression and caching features are only available for the web
front-end.
c) While using the Excel-based BEx Analyzer in a WAN, use a Windows Terminal
Server in the LAN of the BW server to reduce the protocol overhead to the
clients in the WAN.
Concern
Relevant SAP Notes
F. DATABASE SPECIFIC PERFORMANCE
1. Table Partitioning
2. DB Statistics
3. Disk Layout
4. Raw Device / File System
G. TOOLS TO BE USED
1. Application Tools
Upload Monitor - transaction RSMO
SAP Statistics - transaction ST03N, table RSDDSTAT and function module
RSDDCVER_RFC_BW_STATISTICS
Query Monitor - transaction RSRT
Query Trace Tool - transaction RSRTRACE
Analysis & Repair of BW Objects - transaction RSRV
2. System Tools
Process Overview - transactions SM50 / SM51
Work Load Monitor - transaction ST03
SQL Trace - transaction ST05
ABAP Runtime Analysis - transaction SE30
OS Memory & Buffer Monitor - transactions ST06 / ST02
Database Monitor and Table / Index Overview - transactions ST04 / DB02
Performance Analysis of Web Applications - transaction RSRTRACE