
http://wiki.sdn.sap.com/wiki/display/ABAP/ABAP+Performance+and+Tuning
https://wiki.sdn.sap.com/wiki/display/ABAP/Advanced+Performance+Optimization+Techniques
http://forums.sdn.sap.com/forum.jspa?forumID=234&start=0

ABAP Performance Guidelines

Topic Objectives
a) Describe Efficient Programming
b) Describe the Factors Impacting Application Performance
c) Describe the General Performance Guidelines

All ABAP programs should be developed using the most efficient means possible.
Program efficiency guidelines are included in the EBS ABAP Programming Standards.
Developing efficient programming involves the efficient usage of programming
standards, SAP ABAP Workbench tools, and ABAP performance monitoring tools.

What is "Efficient Programming"?


Efficient programming involves solving a problem as fast as possible while using
system resources as sparingly as possible.

Factors Impacting Application Performance


a) Program structure.
b) Programming language commands / syntax.
c) SQL statements.
d) Input/Output operations.
e) Database organization.
f) Data volumes / archiving strategy.
g) System sizing / configuration.
h) System configuration / load balancing.

Of the factors listed above, the first four are of particular concern to the ABAP
programmer.
Program Structure - The degree of modularization (i.e. use of subroutines, function
modules, etc.) and the degree of nesting (i.e. nested loops and control statements)
impacts program performance.
Programming Language Commands/Syntax - Certain ABAP statements are more
expensive than others. Use of an inefficient statement when a more efficient
alternative is available affects program performance.
SQL Statements - The efficiency of SQL statements can dramatically affect program
performance. Care should be taken to code the most efficient SQL possible.
Input/Output Operations - The amount of reading and writing to datasets within a
program will impact program performance. Care should be taken to eliminate any
unnecessary input/output operations.

The remaining factors tend to be the purview of the BASIS team and are typically not
addressed by an ABAP programmer.
Database Organization - The way that the database is organized on the physical disk
affects performance.
Data Volumes / Archiving Strategy - The amount of data stored within each database
table affects program performance.
System Sizing / Configuration - The hardware used needs to be sized appropriately
to support the number of users and the expected data volumes, or performance will
be significantly impacted.
System Configuration / Load Balancing - The appropriate number of application
servers should exist to support processing needs. Users and background tasks
should be appropriately distributed across servers.

General Performance Guidelines


1.Keep the amount of data transferred between database and application small.
2.Keep the number of transfers between database and application small.
3.Keep the data to be searched small.
4.Take the load off of the database where possible.

Keep the Amount of Data Transferred Small


Use data selectively.
a) No single line updates.
b) Use aggregate functions.
c) SELECT with DISTINCT.

Using data selectively means that only the data of interest should be selected.

Selecting extraneous data unnecessarily increases the amount of data that must be
transferred between the database and the application.

One way to appropriately limit the amount of data selected is to select individual
fields of interest as opposed to whole rows.

The WHERE clause limits the amount of data selected. Unless all data within a table
is of interest, a WHERE clause should be specified. A WHERE clause should be
structured to take advantage of table INDEXES (i.e. primary key fields should be
specified, in order, to the extent possible).

It is preferable to update the database with the syntax UPDATE ... WHERE as
opposed to performing a single-line update (SELECT...UPDATE...ENDSELECT).

Aggregate functions (SUM, MIN, MAX, AVG, etc.) are the preferred method for
performing mathematical and statistical functions on database records.

Performing these functions programmatically would require transferring too much
data from the database to the application. Aggregate functions limit the amount of
data transferred between the database and the application.

The SELECT DISTINCT statement requests that the result set be unique and can
therefore considerably reduce the amount of data transported from the database to
the application server.

SELECT field1 field2
  INTO (ifield1, ifield2) FROM dbtable
  WHERE condition.
  ...
ENDSELECT.

Use Data Selectively


1.Use WHERE clause
2.Use SELECT SINGLE
3.Use Field Lists

SELECT * FROM dbtable
  WHERE field1 = value1
  AND field2 >= value2.
  ...
ENDSELECT.
SELECT SINGLE * FROM dbtable
  WHERE condition.
SELECT field1 field2
  INTO (ifield1, ifield2) FROM dbtable
  WHERE condition.
  ...
ENDSELECT.

Note on use of WHERE clause - be aware of the table type for the database table
being accessed when constructing a WHERE clause. (To find out the table type use
transaction SE12).

For TRANSP (transparent) and POOL tables, the SELECT statement should be as fully
qualified as possible using the WHERE clause. This includes data fields that may not
be part of the key in addition to the key fields. This allows the database to evaluate
the records and return only those records matching the selection criteria.

For CLUSTER tables, the opposite is true. When working with CLUSTER tables, only
fields that are part of the key should be used to qualify SELECT statements. For
CLUSTER tables, use the CHECK statement to eliminate records after the selection
has been narrowed via the WHERE clause for key fields. This is because CLUSTER
tables cannot be processed in the database directly, as can TRANSP and POOL tables.
Forcing the database to unpack and check fields (as with SELECT statements
containing non-key fields in the WHERE clauses) is less efficient, in most cases, than
qualifying only with key fields and letting ABAP check non-key fields once the data is
returned.

It is always more efficient to specify as many of the table's key fields as possible
when reading a database table. The SELECT SINGLE statement can be used to return
a single record, but only if the full table key is specified.
Select statements that query only the necessary columns from a table are more
efficient than SELECT *.
Only the fields that are needed should be selected and stored for later use. Single-
line updates (i.e. selecting data and updating that data within a loop) are inefficient
and should be avoided.
No Single-line Updates
SELECT * FROM dbtable
  WHERE condition.
  dbtable-field = dbtable-field + delta.
  UPDATE dbtable.
ENDSELECT.
UPDATE dbtable
  SET field = field + delta
  WHERE condition.
UPDATE dbtable FROM TABLE itable.

Use of aggregate functions is much faster than computing the aggregates within the
program. Network traffic is also significantly reduced by using aggregates.
Use Aggregate Functions
total = 0.
SELECT * FROM dbtable
  WHERE condition.
  total = total + dbtable-amount.
ENDSELECT.
SELECT SUM( amount )
  INTO total FROM dbtable
  WHERE condition.

Use SELECT DISTINCT
SELECT DISTINCT field1
INTO ifield1 FROM dbtable
WHERE field2>value2.
....
ENDSELECT.

Database calls can be saved by performing array processing for SELECT, INSERT,
UPDATE and DELETE using internal tables, or with UPDATE/DELETE statements that
select a set of records using a condition. The simplest, and unfortunately most
widespread, way to read data from logically linked tables is with a nested SELECT
statement. This solution has the great disadvantage that a SELECT is executed in the
database, and possibly also on the network, for each record processed in the outer
SELECT loop. If the hit list defined by the outer selection is very large, there will be
extremely high communication costs between the application and database servers.

The SELECT...FOR ALL ENTRIES syntax permits array-oriented processing on the
database: more than one record can be read from the database at one time. This
provides a distinct advantage over the nested SELECT approach.
With a subquery, the dataset is filtered within the database instead of within the
ABAP program, thereby limiting the amount of data transferred between the database
and the application.

Keep the Number of Data Transfers Small


1.Use array operations.
2.Avoid nested SELECTs.
3.Use FOR ALL ENTRIES
4.Use subquery
5.Use Joins
6.Use VIEWs

A join has the advantage that only one statement is executed in the database.
When defined properly (i.e. joined on key fields only, using an index, including only
the necessary fields, limiting the number of tables) a join can offer a performance
increase. Two types of joins are provided by SAP: INNER JOIN and LEFT OUTER JOIN.

Views are created in the ABAP dictionary. Essentially, they are reusable INNER JOIN
definitions that can be buffered in a manner similar to internal tables.

Array operations (i.e. specifying data to be inserted or deleted in an internal table
and then processing the INSERT or DELETE with a single SQL statement) are much
more efficient than embedding the INSERT or DELETE within a loop.

Use Array Operations


LOOP AT itable.
  INSERT INTO dbtable VALUES itable.
ENDLOOP.
INSERT dbtable FROM TABLE itable
  ACCEPTING DUPLICATE KEYS.
* Array operations are also possible with SELECT, UPDATE and DELETE.

Nested SELECT statements should be avoided because they are very slow. Several
cursors in the database have to be maintained. Additionally, they are harder to
debug because the cursor tends to be lost easily during debug mode, causing the
program to abend. A common method for avoiding nested SELECT statements is to
read data directly into an internal table and then use LOOP...ENDLOOP to process the
internal table. However, this is not always the answer. The correct method (nested
SELECT statements v. internal tables) can only be chosen after performing runtime
analysis on both options. In many cases, a combination of the two methods will work
best. The key benefit to using SELECT...INTO an internal table v. using nested SELECT
statements is that the number of accesses to the database table is significantly
reduced. However, if the data being selected will only be used once and it is a small
amount of data, the use of an internal table may be overkill.

Avoid Nested Selects


SELECT * FROM dbtable1
WHERE condition.
SELECT * FROM dbtable2
WHERE condition.
....
ENDSELECT.
....
ENDSELECT.

NOTE: When using the syntax FOR ALL ENTRIES, it is important to first ensure that
the internal table is not initial. If there are no entries in the internal table, the
entire WHERE clause is ignored and all records in the database table are selected.
Use FOR ALL ENTRIES and Subqueries
SELECT field1 FROM dbtable1
  INTO TABLE itable
  WHERE condition.
IF NOT itable IS INITIAL.
  SELECT * FROM dbtable2
    FOR ALL ENTRIES IN itable
    WHERE field2 = itable-field1.
    ...
  ENDSELECT.
ENDIF.
SELECT * FROM dbtable2
  WHERE field2 IN
    ( SELECT field1 FROM dbtable1
      WHERE condition ).
  ...
ENDSELECT.

A view should be created when appropriate to eliminate several nested select


statements. A view is a virtual table that is not stored on disk but is instead derived
from one or more tables. JOINS can also be used when appropriate to eliminate
several nested selects.

Use Joins and Views


SELECT dbtable1~field1 dbtable2~fieldn
FROM dbtable1 INNER JOIN dbtable2
ON (dbtable1~field1 = dbtable2~field1)
WHERE condition.
...
ENDSELECT.
SELECT * FROM v_dbtable1_2
WHERE condition.
...
ENDSELECT.

All database systems cope best with EQ conditions linked with AND. Use as many EQ
conditions as possible in each SQL statement in order to limit the amount of data to
be searched. The OR condition and the use of range tables can adversely impact
performance. The NOT condition should be used very carefully; there is no index
support for the NOT condition. Indexes should be designed to have the greatest
impact on performance: table design should equal index design, the most selective
fields should be listed first, indexes should be kept small, and not too many indexes
should be created!
Keep the Data to be Searched Small
a) Use as many EQ in WHERE clause as possible.
b) Be careful with OR and range tables.
c) Be careful with NOT.
d) Index design.

When selecting data from cluster and pool tables, the WHERE clause should be
limited to key fields, in sequence. For example, if dbtable1 is keyed by f1, f2, f3 and
f4, and only the values of f1, f2 and f4 are available, it is better to ignore f4 and
construct the WHERE clause using f1 and f2 only. When selecting data from a cluster
or pool table where the key field values are not available, it is better to select the
entire table into an internal table within the ABAP program.

Use as many EQ in WHERE clause as possible


SELECT * FROM dbtable1
  WHERE field1 = value1
  AND field3 = value3.
  ...
ENDSELECT.
SELECT field2 FROM dbtable2
  INTO TABLE itable2
  WHERE field1 = value1.
SELECT * FROM dbtable1
  FOR ALL ENTRIES IN itable2
  WHERE field1 = value1
  AND field2 = itable2-field2
  AND field3 = value3.
  ...
ENDSELECT.

Avoid the use of ranges in selects. While this is unrealistic in many cases, an 8K
limit on Oracle SELECT statements exists, and large ranges will cause ABAP programs
to abend in production. (As time goes on, the range values tend to grow, and larger
and larger SQL statements are created.) Avoid the use of NOT, as index support is
not provided.
Be careful with OR, Range Tables and NOT
SELECT * FROM dbtable1
WHERE field1 = value1
AND field2 IN itable
AND field3 = value3
...
ENDSELECT
SELECT * FROM dbtable
WHERE field1 = value1
AND field2 = value2
AND NOT (field3 IN value3a, value3b).
...
ENDSELECT

An index is a search help to find data in the database. Every database table has a
primary index consisting of all key fields. A database table can also have secondary
indexes, which are defined in the ABAP Dictionary. Up to 16 indexes can be created
per table - be careful!
Create a secondary index when: fields are selected without index support; only a
small part of the table (<5%) is selected; the WHERE clause is simple (ANDs only);
or SORTs are often done without index support.
Index Design
a) Index must be selective.
b) Keep the number of fields small.
c) Most selective fields first.
d) Create the index for the main case.

In order to get the best usage of an index, specify all index fields within the WHERE
clause "without gaps" (i.e. if the 1st and 3rd fields of an index are specified, only the
1st one will be used; if the 1st and 2nd fields are specified, both fields will be used).
The columns specified in the WHERE clause should also be in the same order as they
are specified in the index. This allows the ABAP SQL interpreter to more quickly
convert a SELECT into an Oracle statement and choose the correct index.

Index Usage Guidelines


1.Specify field values without gaps.
2.Do not use an OR operator for a field which is relevant for the index.
3.Use FOR ALL ENTRIES IN instead of IN, as IN is interpreted like OR.
4.Construct VIEWs with index usage in mind.
5.Understand how NULL values are handled for the specific DBMS in use.

ORDER BY v. SORT: Sorts should be done in ABAP and not in the database, since
sorts on the database server affect all users. The exception is huge sets of data (e.g.
> 10MB); for large data sets, the DBMS can use an index and process more
efficiently. Also, if an index can be used for the sorting, there is hardly any additional
cost for sorting on the database server.

DISTINCT v. ABAP SORT + DELETE ADJACENT DUPLICATES: Use of DISTINCT requires
sorting on the database server and is relatively expensive if no index can be used.
Therefore the DISTINCT specification only makes sense if the full key is not specified
or when accessing via views.
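The ABAP-side alternative can be sketched as follows (table and field names are illustrative):

```abap
* Remove duplicates on the application server instead of using DISTINCT.
SELECT field1 FROM dbtable
  INTO TABLE itable
  WHERE condition.
SORT itable BY field1.
DELETE ADJACENT DUPLICATES FROM itable COMPARING field1.
```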

Take load off database where possible


a) Table buffering.
b) Avoid repeated reading of data.
c) ORDER BY v. SORT
d) DISTINCT v. DELETE ADJACENT
e) Logical Databases

To avoid expensive type conversions, always specify a type for formal parameters in
subroutines.
ABAP "Expensive" Things to Avoid
a) Nested loops.
b) Deeply nested internal tables.
c) Huge internal tables.
d) SORT without using BY.
e) APPEND/COLLECT...SORTED BY.
f) READ TABLE t WITH KEY k (without BINARY SEARCH).
g) Inserting reports dynamically.
h) Type conversions.
i) Fields with type P.

A binary search breaks a table into logical units based on the key defined. This allows
the READ to jump through the table to find the required entry instead of sequentially
reading each table row until the required entry is found. (Note: the internal table
must be sorted in order to use the BINARY SEARCH qualifier.)

Use CONTINUE, CHECK and EXIT to terminate loops and loop passes appropriately
and to eliminate any unnecessary processing. Use work areas instead of header lines
(implicit work areas) when working with internal tables.
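These loop-control statements and an explicit work area can be sketched as follows (types, names and fields are illustrative):

```abap
DATA: wa TYPE ty_line.            "explicit work area instead of a header line
LOOP AT itab INTO wa.
  CHECK wa-relevant = 'X'.        "skip the rest of this loop pass
  IF wa-amount > gv_limit.
    EXIT.                         "leave the loop entirely
  ENDIF.
  " ... process wa ...
ENDLOOP.
```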

Guidelines for Internal Tables


a) Read with index or binary search
b) No nested loop
c) EXIT from loop
d) Fill table sorted instead of using SORT
e) Choose suitable INITIAL SIZE parameter
f) Use explicit work areas.
g) DELETE explicit lines instead of looping.
h) APPENDING/copying instead of looping.
i) MODIFY...TRANSPORTING instead of looping.
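Guidelines g) through i) can be sketched as set-oriented statements (all names are illustrative):

```abap
* g) Delete matching lines directly instead of LOOP + DELETE:
DELETE itab WHERE status = 'D'.
* h) Copy a whole block of lines instead of looping and appending:
APPEND LINES OF itab2 TO itab.
* i) Change one component of matching lines instead of LOOP + MODIFY:
MODIFY itab FROM wa TRANSPORTING price
       WHERE matnr = wa-matnr.
```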

Buffering Overview
There are three types of buffering that can be used with SAP database tables:
Resident/Full Buffering - With resident buffering, the entire table is stored in the
buffer. As soon as a read access is made to the table, all records are transferred to
the buffer. Tables best suited to this type of buffering are small, frequently read and
rarely updated.
Generic Buffering - With generic buffering, only records containing a key that
corresponds to a record that has already been read are loaded into the buffer. As
soon as a read access is made to the table, all records whose key values match the
specified key values of the record being read are loaded into the buffer. A table
should be buffered generically if usually only certain areas of the table are required.
The individual generic areas are treated like independent tables which are fully
buffered.
Partial Buffering / Single Record Buffering - With this kind of buffering, only the
records of a table which are actually accessed are loaded into the buffer.

The following statements do not take advantage of any SAP table buffering. This
means that even if a table is buffered (resident/full, generic, or partial), executing
one of these statements against that table will bypass the buffer and go directly to
the version of the table stored in the database in order to resolve the statement.
SQL Statements Bypassing Buffer
1. SELECT ...BYPASSING BUFFER
2. Any SELECT from a VIEW (except a projection view).
3. SELECT FOR UPDATE...
4. Aggregate functions (e.g. COUNT, MIN, MAX, SUM, AVG).
5. SELECT DISTINCT...
6. WHERE...IS NOT NULL
7. ORDER BY (other than primary key)
8. GROUP BY or HAVING
9. Sub queries
10. Joins
11.Any native SQL Statement (EXEC SQL)

ABAP Performance and Tuning


• Added by Rich Heilman, last edited by Moshe Naveh on Jul 27, 2010
• What tools can be used to help with performance tuning?
• What are the steps to optimise the ABAP Code?
• What is the difference between SELECT SINGLE and SELECT ... UP TO 1 ROWS?
• Which is the better - JOINS or SELECT... FOR ALL ENTRIES...?
• Does SAP publish guides and cookbooks on performance monitoring and
testing?
• Avoid use of nested loops
What tools can be used to help with performance tuning?

ST05 is the performance trace. It contains the SQL trace plus RFC, enqueue and
buffer traces. Mainly the SQL trace is used to measure the performance of the
SELECT statements in the program.

SE30 is the Runtime Analysis transaction and can be used to measure the application
performance.
One of the best tools for static performance analysis is the Code Inspector (SCI).
There are many options for finding common mistakes and possible performance
bottlenecks.

What are the steps to optimize the ABAP Code?


1. DATABASE
a. Use a WHERE clause in your SELECT statement to restrict the volume of
data retrieved. Very important!
b. Design your query to use as many index fields as possible, from left to
right, in your WHERE statement.
c. Use FOR ALL ENTRIES in your SELECT statement to retrieve the
matching records in one shot.
d. Avoid using nested SELECT statements and SELECTs within LOOPs;
better to use JOINs or FOR ALL ENTRIES. Use FOR ALL ENTRIES when the
internal table is already there or is the result of earlier processing. Try
JOINs if the SELECTs are right behind each other.
e. Avoid using SELECT * and select only the required fields from the
table (small effect).
f. For existence checks, use SELECT UP TO 1 ROWS; do not use
SELECT COUNT, and also not the combination of COUNT plus UP TO
1 ROWS.
g. Avoid using ORDER BY in SELECT statements if it differs from the
index used (instead, sort the resulting internal table), because this may
add additional work to the database system, which is unique, while
there may be many ABAP servers.
h. INDEX: Creating an index to improve performance should not be
done without thought. An index speeds up reads but adds two
overheads, namely memory and insert/append performance. When an
index is created, memory is used up for storing it, and index sizes can
be quite big on large transaction tables! When a new entry is inserted
in the table, all the indexes are updated: the more indexes, the more
time; the more data, the bigger the indexes and the longer it takes to
update them all.
i. Avoid executing an identical SELECT (same SELECT, same
parameters) multiple times in the program.
j. Where adequate standard views exist, they can be used instead of
coding joins (there is no performance impact either way).
2. TABLE BUFFER:
a. Defining a table as buffered (SE11) can help improve performance,
but this has to be used with caution. Buffering of tables leads to data
being read from the buffer rather than from the table. Buffer
synchronization with the table happens periodically, and only if
something changes, which happens rarely. If the table is a transaction
table, chances are that the data is changing for a particular selection
criteria; application tables are therefore usually not suited for table
buffering, and using table buffering in such cases is not recommended.
Use table buffering for configuration data and sometimes for master
data. Also, when using a buffered table, ensure that the general criteria
used for buffering are also used in your code. If the buffering criteria
are not the same as the ones used in your code, buffering has no effect.
b. Avoid using complex SELECTs on buffered tables, because SAP may not
be able to interpret the request and may transmit it to the database.
The Code Inspector tells you which commands bypass the buffer.
3. Internal tables
a. Avoiding nested loops is often not practical; use sorted tables for the
inner operation and your nested loop is fine.
b. Use ASSIGN (field symbols) instead of INTO in LOOPs for table types
with large work areas.
c. When in doubt, call transaction SE30 and check your code. Check
YOUR code, not the examples!
d. Use READ TABLE BINARY SEARCH with large standard tables to
speed up the search. Be sure to sort the internal table before the
binary search. As a general rule of thumb, if you are sure that the
internal table holds fewer than 50 entries, you need not SORT and use
BINARY SEARCH (no overhead, just no gain).
e. See b.
4. Miscellaneous
a. PERFORM: When writing a subroutine, always provide a type for all
the parameters. This reduces the overhead incurred when the system
determines each type on its own from the formal parameters that are
passed.

The following may be followed, though they are not very much faster and they may
make the program less readable (note that IF/ENDIF offers checks which cannot be
written as CASE).

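The typed-subroutine guideline above can be sketched as follows (names and types are illustrative):

```abap
* Typed formal parameters avoid runtime type determination and conversion.
FORM add_tax USING    iv_rate  TYPE p
             CHANGING cv_price TYPE p.
  cv_price = cv_price * ( 1 + iv_rate ).
ENDFORM.
```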
What is the difference between SELECT SINGLE and SELECT ... UP TO 1
ROWS?
• SELECT SINGLE and SELECT UP TO n ROWS both return the first matching
row/rows for the given condition. The result may not be unique if there are
several matching rows for the given condition.

• To check for the existence of a record, it is better to use SELECT SINGLE
than SELECT ... UP TO 1 ROWS, since it uses less memory and has better
performance.
• With the ORACLE database system, SELECT SINGLE is converted into SELECT ...
UP TO 1 ROWS, so they are exactly the same in that case. The only
difference is that the ABAP syntax prevents you from using ORDER BY with
SELECT SINGLE, but it is allowed with SELECT ... UP TO 1 ROWS. Thus, if several
records may be returned and we want to get the highest record for example,
SELECT SINGLE cannot be used, but SELECT ... UP TO 1 ROWS WHERE ...
ORDER BY ... may be used.
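The two cases can be sketched as follows (table names and literal values are illustrative):

```abap
DATA: lv_matnr TYPE mara-matnr,
      lv_belnr TYPE bkpf-belnr.

* Existence check with the full key: SELECT SINGLE is appropriate.
SELECT SINGLE matnr FROM mara INTO lv_matnr
  WHERE matnr = 'MAT001'.

* Highest value for a partial key: only UP TO 1 ROWS allows ORDER BY.
SELECT belnr FROM bkpf INTO lv_belnr UP TO 1 ROWS
  WHERE bukrs = '1000'
  ORDER BY belnr DESCENDING.
ENDSELECT.
```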
Which is the better - JOINS or SELECT... FOR ALL ENTRIES...?

The effect of FOR ALL ENTRIES needs to be observed first by running a test program
and analyzing the SQL trace. Certain options set by BASIS can cause FOR ALL
ENTRIES to execute as an 'OR' condition: if the table used in FOR ALL ENTRIES has
3 records, the SQL trace will show 3 SQL statements getting executed. In such a case
using FOR ALL ENTRIES is useless. However, if the SQL trace shows 1 SQL statement,
it is beneficial, since in this case FOR ALL ENTRIES is actually executed as an IN
list.

JOINs are recommended for up to 5 tables. If the JOIN is made on fields which are
key fields in both tables, it reduces program overhead and increases performance.
So, if the JOIN is between two tables where the joining keys are key fields, a JOIN is
recommended over FOR ALL ENTRIES.

You can use for all entries to reduce the database hits, and use non-key fields.

Here is a code example with a join:

SELECT a~vbeln a~kunnr a~kunag b~name1
  INTO TABLE i_likp
  FROM likp AS a
  INNER JOIN kna1 AS b
  ON a~kunnr = b~kunnr.
* The same data with limited transfers using FOR ALL ENTRIES:
* Minimize entries in i_likp2 by deleting duplicate kunnr values.
LOOP AT i_likp INTO w_likp.
  w_likp2-kunnr = w_likp-kunnr.
  APPEND w_likp2 TO i_likp2.
ENDLOOP.
SORT i_likp2 BY kunnr.
DELETE ADJACENT DUPLICATES FROM i_likp2 COMPARING kunnr.
* Get data from kna1.
IF NOT i_likp2[] IS INITIAL.
  SELECT kunnr name1
    INTO TABLE i_kna1
    FROM kna1
    FOR ALL ENTRIES IN i_likp2
    WHERE kunnr = i_likp2-kunnr.
ENDIF.
Use the COLLECT statement to do sums in internal tables.

Instead of using your own logic to do summation, use the COLLECT statement.
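A minimal sketch of COLLECT (line types and names are illustrative; it_sums has the same line type as wa_sum, with a character-like key field and a numeric field):

```abap
LOOP AT it_items INTO wa_item.
  CLEAR wa_sum.
  wa_sum-matnr = wa_item-matnr.   "key part (character-like)
  wa_sum-menge = wa_item-menge.   "numeric part to be summed
  COLLECT wa_sum INTO it_sums.    "adds menge if matnr already exists in it_sums
ENDLOOP.
```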


Avoid use of nested loops
For example, consider a loop like this (note the WHERE condition; without it there
is nothing to optimize):
loop at itab1.
  loop at itab2 where f1 = itab1-f1.
    ....
  endloop.
endloop.

In the production environment it may be possible that such a loop takes a lot of time
and dumps.

Instead, we can use READ TABLE ... BINARY SEARCH (the BINARY SEARCH addition
is essential; without it there is no improvement):


sort itab2 by f1.
loop at itab1.
  read table itab2 with key f1 = itab1-f1 binary search. "f1 is any field of itab1
  if sy-subrc = 0.
    idx = sy-tabix.
    loop at itab2 from idx.
      if itab2-f1 <> itab1-f1.
        exit.
      endif.
      ....
    endloop.
  endif.
endloop.

If you have a sorted table, the internal table can be read like this:
types: begin of itab,
         f1 type mara-matnr,
         ....                    "not only the key field!
       end of itab.
data: itab2 type sorted table of itab with unique key f1.
loop at itab1.
  loop at itab2 where f1 = itab1-f1.
    ....
  endloop.
endloop.

Introduction
Performance tuning focuses on improving the execution time of a program without changing the
overall functionality. This page presents some advanced performance improvement techniques.
Performance Tuning Techniques

a) Use of Hash Tables

Hash tables are useful when there is a requirement for READ operations on a large
dataset with the full table key. The response time for a key access is constant for
hashed tables and is independent of the number of entries in the table.
Prerequisites: The entries in the table must be unique, and the key is to be defined
during the declaration of the internal table.
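A minimal sketch of a hashed-table key access (the table MARA and the literal value are illustrative):

```abap
DATA: it_mara TYPE HASHED TABLE OF mara WITH UNIQUE KEY matnr,
      wa_mara TYPE mara.

* Key access cost is constant, independent of the number of entries;
* the full unique key must be specified with WITH TABLE KEY.
READ TABLE it_mara INTO wa_mara
     WITH TABLE KEY matnr = 'MAT001'.
```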
b) Buffered Reading (Using special Function Modules)
SAP recommends use of Buffered Reading (SAP Note - 332856) in cases where a program
contains several accesses to a Master table with a fully specified primary key. SAP has provided
special Function Modules which store the result of the last database request in the memory. You can
use this stored value again without needing to access the database.
Note 332856 - Reading buffered master data for cust.-spec. enhancements
c) Using Oracle hints
In some scenarios, the Oracle Optimizer is not able to choose the correct access path (index) for
retrieval of the data from the database for your query. In such cases, we can force the Optimizer to
use the required index, by providing the details in the query itself, using "Oracle Hints". (SAP Note -
772497)
Note 772497 - FAQ: Oracle Hints
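In Open SQL, a database hint can be attached with the %_HINTS addition; the index name below is purely illustrative, and the exact Oracle hint syntax should be taken from Note 772497:

```abap
SELECT * FROM mara INTO TABLE it_mara
  WHERE matnr IN s_matnr
  %_HINTS ORACLE 'INDEX("MARA" "MARA~0")'.
```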
d) Avoiding high buffer reads
To find inefficient SQL statements using excessive CPU or doing excessive I/O, use
transaction ST04. A query is inefficient if both Buffer Gets/Row and Buffer Gets/Exec
are high. A Buffer Get occurs when Oracle references a page of memory in the Oracle
database memory. A high ratio means that Oracle is searching a large amount of
data to find the results. (SAP Note - 766349)

Note 766349 - FAQ: Oracle SQL optimization


e) Activating Buffering for tables

In case of small master data tables which are accessed frequently and updated rarely, it is advisable
to activate Buffering.
Fully Buffered - Very small tables with many different accesses
Single Record Buffer - For tables where there are frequent single-record accesses (with SELECT
SINGLE ...).

f) Complete Specification of the Primary Key Fields

If we take an Exmple of some of the Tables such as VBFA or BSEG or MSEG, then there are more
than 1 Primary Key fields present in the respective Tables. In order for the SELECT Query to Run
faster, it is important to include all the Primary Key Fields (As many as Possible - Recommended to
Use All the Fields) in the WHERE Clause so that the Query runs Faster and thus improving the
Performance.

Also, adhering to the ground Rule of Specify the Field List after the SELECT Statement in the Order
that they are existing in the Actual Table would be much faster than just throwing a Zigsaw Puzzle in
the SELECT Query with the Fields not in the Order.

Check the following links

https://forums.sdn.sap.com/thread.jspa?messageID=1591512&tstart=0#1591512
https://forums.sdn.sap.com/thread.jspa?messageID=1429297&tstart=0#1429297
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
check the below link
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm

Check also http://service.sap.com/performance


and

books like
http://www.sap-press.com/product.cfm?account=&product=H951
http://www.sap-press.com/product.cfm?account=&product=H973
http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm

Performance tuning for Data Selection Statement

http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm

Run Time Analyser


http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm

SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm

CATT - Computer Aided Testing Tool


http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm

Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm

Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm

Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm

Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm

eCATT - Extended Computer Aided Testing Tool


http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Just refer to these links...
https://forums.sdn.sap.com/thread.jspa?threadID=84514
https://forums.sdn.sap.com/thread.jspa?threadID=23912
https://forums.sdn.sap.com/thread.jspa?threadID=142272
https://forums.sdn.sap.com/thread.jspa?threadID=131727
https://forums.sdn.sap.com/thread.jspa?threadID=84583
https://forums.sdn.sap.com/thread.jspa?threadID=145177
https://forums.sdn.sap.com/thread.jspa?threadID=148874
https://forums.sdn.sap.com/thread.jspa?threadID=151144
You can go to transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, the SAP Code Inspector.

How to improve the performance of an ABAP program - Tools for Performance Analysis

Run time analysis transaction SE30


SQL Trace transaction ST05
Extended Program Check (SLIN)
Code Inspector (SCI)

Run time analysis transaction SE30

In transaction SE30, fill in the name of the transaction or program that needs to be
analyzed for performance tuning.
For our case, let this be “ZABAP_PERF_TUNING”.

After giving the required inputs to the program, execute it. Once the final output list has been
displayed, press the “BACK” button.

Back on the initial SE30 screen, click the “ANALYZE” button.

The percentage shown for each of the areas ABAP / Database / System is the share of total
runtime spent in that area while running the program. The lower the database load, the faster
the program runs.
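One of the most common hot spots that a runtime analysis exposes is a SELECT inside a LOOP. As a sketch (the internal table names are hypothetical), the fix is usually a single array fetch before the loop:

```abap
DATA: lt_vbak TYPE STANDARD TABLE OF vbak,
      ls_vbak TYPE vbak,
      lt_vbap TYPE STANDARD TABLE OF vbap,
      ls_vbap TYPE vbap.

* Inefficient - one database round trip per loop pass:
LOOP AT lt_vbak INTO ls_vbak.
  SELECT SINGLE * FROM vbap INTO ls_vbap
         WHERE vbeln = ls_vbak-vbeln.
ENDLOOP.

* Better - one array fetch before processing:
IF lt_vbak IS NOT INITIAL.     "FOR ALL ENTRIES needs a non-empty table
  SELECT vbeln posnr matnr FROM vbap
         INTO CORRESPONDING FIELDS OF TABLE lt_vbap
         FOR ALL ENTRIES IN lt_vbak
         WHERE vbeln = lt_vbak-vbeln.
ENDIF.
```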

SQL Trace – ST05

Starting the Trace:

To start the trace, do the following:
...
Choose the menu path Test → Performance Trace in the ABAP Workbench, or go to Transaction ST05.
The initial screen of the test tool appears. In the lower part of the screen, the status of the
Performance Trace is displayed. This provides you with information as to whether any of the
Performance Traces are switched on, the users for whom they are enabled, and which user
switched the trace on.
Using the selection buttons provided, set which trace functions you wish to have switched on
(SQL trace, enqueue trace, RFC trace, table buffer trace).
If you want to switch on the trace under your user name, choose Trace on.
If you want to pass on values for one or several filter criteria, choose Trace with Filter.
Typical filter criteria are: the name of the user, transaction name, process name, and program
name.
Now run the program to be analyzed.
Stopping the Trace:

To deactivate the trace:
...
Choose Test → Performance Trace in the ABAP Workbench.
The initial screen of the test tool appears. It contains a status line displaying the traces that are
active, the users for whom they are active, and the user who activated them.
Select the trace functions that you want to switch off.
Choose Deactivate Trace.
If you started the trace yourself, you can switch it off immediately. If the performance trace
was started by a different user, a confirmation prompt appears before deactivation.

Analyzing sample trace data:


PREPARE: Prepares the OPEN statement for use and determines the access method.
OPEN: Opens the cursor and specifies the selection result by filling the selection fields with
concrete values.
FETCH: Moves the cursor through the dataset created by the OPEN operation. The array size
displayed beside the FETCH data means that the system can transfer packages of up to
392 records at a time into the buffered area.
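The same packaging idea can be made explicit in ABAP with PACKAGE SIZE, which fetches the result set in chunks and keeps memory usage bounded for large tables. A minimal sketch (the package size of 392 simply mirrors the array size mentioned above; the year is a placeholder):

```abap
DATA lt_mseg TYPE STANDARD TABLE OF mseg.

SELECT * FROM mseg
       INTO TABLE lt_mseg
       PACKAGE SIZE 392            "rows per fetch package
       WHERE mjahr = '2024'.
  " lt_mseg holds the current package of up to 392 rows here
ENDSELECT.
```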

Extended Program Check (SLIN)


Code Inspector (SCI)

For the above two tools, you can run the check directly on your program to list all the errors and warnings.
Try to bring all the reported findings down to zero; the performance will then be very good.
