
OPTIMIZATION IN ESSBASE:

Application performance optimization can be done by the following techniques:
1. Designing the Outline Using the Hourglass Model
2. Defragmentation
3. Restructuring
4. Compression Techniques
5. Cache Settings
6. Intelligent Calculation
7. Uncommitted Access
8. Data Load Optimization

Designing the Outline Using the Hourglass Model: The outline should be designed so that dimensions are placed in the following order: largest dense to smallest dense, then smallest sparse to largest sparse, followed by attribute dimensions. Using the hourglass model improves the calculation performance of the cube by roughly 10%.
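For illustration, a minimal sketch of hourglass ordering, assuming a Sample.Basic-style outline (the member counts here are hypothetical):

    Measures    (Dense, 40 stored members)    <- largest dense first
    Time        (Dense, 17 stored members)    <- smallest dense
    Scenario    (Sparse, 4 stored members)    <- smallest sparse
    Product     (Sparse, 19 stored members)
    Market      (Sparse, 25 stored members)   <- largest sparse
    Caffeinated (Attribute)                   <- attribute dimensions last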
Defragmentation: Fragmentation is caused by the following:
1. Frequent data loads
2. Frequent retrievals
3. Frequent calculations

We can check whether the cube is fragmented by looking at its average clustering ratio in the database properties. The optimum clustering value is 1; if the average clustering ratio is less than 1, the cube is fragmented, which degrades the performance of the cube.
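The same statistic can also be read from MaxL; a minimal sketch, assuming the application and database are named Sample and Basic:

    /* Displays data block statistics, including the average clustering ratio */
    query database Sample.Basic get dbstats data_block;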
There are 3 ways of doing defragmentation:
1. Export the data of the application into a text file, clear the data, and reload the data from the text file without using a rules file (see the MaxL sketch below).
2. Using the MaxL command:
   alter database Appname.Dbname force restructure;
3. Add and delete a dummy member in a dense dimension.
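A minimal MaxL sketch of the export-and-reload approach (option 1), assuming a database named Sample.Basic and an export file name chosen for illustration:

    /* Export all data, clear the database, then reload without a rules file */
    export database Sample.Basic all data to data_file 'exp.txt';
    alter database Sample.Basic reset data;
    import database Sample.Basic data from server text data_file 'exp.txt' on error abort;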

Restructuring: There are 3 types of restructure:
1. Outline restructure
2. Sparse restructure
3. Dense restructure (full restructure)

Outline Restructure: When we rename a member or add an alias to a member, an outline restructure happens.
The .OTL file is converted to .OTN, which in turn is converted back to .OTL.
The .OTN file is a temporary file that is deleted by default after the restructure.
Dense Restructure (Full Restructure): If a member of a dense dimension is
moved, deleted, or added, Essbase restructures the blocks in the data files and
creates new data files. When Essbase restructures the data blocks, it regenerates
the index automatically so that index entries point to the new data blocks. Empty
blocks are not removed. Essbase marks all restructured blocks as dirty, so after a
dense restructure you must recalculate the database. Dense Restructuring, the
most time-consuming of the restructures, can take a long time to complete for large
databases.

Sparse Restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.
Compression Techniques: There are 4 types of compression. They are:
1. Bitmap compression
2. RLE (Run-Length Encoding)
3. zlib
4. No compression
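A minimal MaxL sketch for choosing a compression type, assuming a Sample.Basic database; treat the exact keyword form as an assumption to verify against the MaxL reference for your Essbase version:

    /* Switch block compression to RLE; applies to blocks written afterwards */
    alter database Sample.Basic set compression rle;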

Caches: There are 5 types of caches:
1. Index cache
2. Data cache
3. Data file cache
4. Calculator cache
5. Dynamic calculator cache

Index Cache: The index cache is a buffer in memory that holds index files (.IND). The index cache should be set equal to the size of the index file.
Note: Restart the database for new cache settings to take effect.
Data Cache: The data cache is a buffer in memory that holds uncompressed data blocks.
The data cache should be about 12.5% of the .PAG file size; by default it is set to 3 MB.
Data File Cache: The data file cache is a buffer in memory that holds compressed data blocks.
The data file cache should be the size of the .PAG files; it is set to 32 MB by default, and its maximum size is 2 GB.
Only one of the data cache and data file cache is effectively in use at a time (the data file cache applies only when direct I/O is used); most developers prefer the data cache in practice.
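A minimal MaxL sketch for setting the index and data cache sizes, assuming a Sample.Basic database and illustrative sizes:

    /* Size the caches; restart the database for the new values to take effect */
    alter database Sample.Basic set index_cache_size 100mb;
    alter database Sample.Basic set data_cache_size 300mb;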
Calculator Cache: It is used to improve the performance of calculations. We set the calculator cache in calculation scripts:
SET CACHE HIGH|LOW|OFF; ----- command used in calc scripts to set the cache.
The values that HIGH and LOW correspond to are defined in the essbase.cfg file.
We need to restart the server for changes to the calculator cache settings in the config file to take effect.
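A minimal sketch, assuming the essbase.cfg settings CALCCACHEHIGH and CALCCACHELOW define the sizes (the byte values are illustrative; the server must be restarted after editing the file):

    CALCCACHEHIGH 2000000
    CALCCACHELOW 300000

A calc script then selects one of the configured levels:

    SET CACHE HIGH;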
Dynamic Calculator Cache: The dynamic calculator cache is a buffer in memory
that Essbase uses to store all of the blocks needed for a calculation of a Dynamic
Calc member in a dense dimension (for example, for a query).
Intelligent Calculation: Whenever a block is created for the first time, Essbase treats it as a dirty block. When we run CALC ALL/CALC DIM, Essbase calculates all blocks and marks them as clean. Subsequently, when we change a value in any block, that block is marked dirty again. When we run the calc script again, only the dirty blocks are calculated; this is known as intelligent calculation.
By default, intelligent calculation is ON. To turn it off, use the command SET UPDATECALC OFF; in calc scripts.
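A minimal calc script sketch (not tied to any particular outline); turning intelligent calculation off forces a full recalculation regardless of block status:

    /* Turn intelligent calculation off so every block is recalculated */
    SET UPDATECALC OFF;
    CALC ALL;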

Uncommitted Access: Under uncommitted access, Essbase locks blocks for write access only until it finishes updating the block. Under committed access, Essbase holds locks until the transaction completes. With uncommitted access, blocks are therefore released more frequently than with committed access, and Essbase performance is better. In addition, parallel calculation works only with uncommitted access.
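A minimal MaxL sketch for switching a database to uncommitted access, assuming Sample.Basic (disabling committed mode leaves the database in uncommitted access):

    /* Disable committed access so the database uses uncommitted access */
    alter database Sample.Basic disable committed_mode;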

Data Load Optimization: Data load optimization can be achieved by the following:
1. Always load the data from the server rather than from the file system.
2. The data field should come last, after the member combinations.
3. Use #MI instead of 0s; a 0 still consumes 8 bytes of memory for each cell.
4. Restrict decimals to a maximum of 3 places, e.g. 1.234.
5. Data should be loaded in inverted-hourglass order (largest sparse to smallest sparse, followed by smallest dense to largest dense).
6. Always pre-aggregate data before loading it into the database.

DLTHREADSWRITE (4/8): Used for parallel data loads. It processes 4 records at a time on a 32-bit system and 8 records at a time on a 64-bit system.
By default, Essbase loads data record by record, which consumes far more time for large data loads.
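A minimal essbase.cfg sketch, assuming an application and database named Sample and Basic (the thread counts are illustrative, and the companion setting DLTHREADSPREPARE controls the preparation threads; restart the server after editing the file):

    DLTHREADSPREPARE Sample Basic 4
    DLTHREADSWRITE Sample Basic 8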

Optimization Techniques in Essbase

The best technique to make large data loads faster is to have the optimal order of dimensions in the source file. To sort it optimally, order the fields in your source file (or SQL statement) in hourglass dimension order read from the bottom upwards: the data file should list the largest sparse dimension first and work up through the sparse dimensions, with the dense dimension fields last; if you have multiple data columns, these should be dense dimension members. This will cause blocks to be created and filled with data in sequence, making the data load faster and the cube less fragmented.
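For illustration, a hypothetical source file sorted this way, assuming Sample.Basic-style dimensions (Market and Product sparse, Year dense, with dense Measures members Sales and COGS as data columns):

    Market  Product      Year  Sales  COGS
    East    Cola         Jan   120    50
    East    Cola         Feb   135    56
    East    "Diet Cola"  Jan   95     41
    West    Cola         Jan   210    84

Records for the same sparse combination (e.g. East, Cola) are adjacent, so each block is created once and filled before Essbase moves on to the next block.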
As a part of optimization we need to re-order the dimensions as follows:
1. Dense dimension with the largest number of members
2. Dense dimension with the smallest number of members
3. Sparse dimension with the smallest number of members
4. Sparse dimension with the largest number of members
5. Attribute dimensions

Calculation order of the dimensions:
1. Dimension tagged Accounts, if it is dense
2. Dense dimensions, in outline order or CALC DIM statement order
3. Dimension tagged Accounts, if it is sparse
4. Sparse dimensions, in outline order or CALC DIM statement order
5. Two-pass calculations on members in the dimension tagged Accounts


Here are some more optimization techniques used in Essbase.
For data loading:
1. Grouping sparse member combinations
2. Positioning data in the same order as the outline
3. Loading from the Essbase OLAP Server
4. Making the data source as small as possible
5. Making source fields as small as possible
6. Managing parallel data load processing

For calculation:
1. Using parallel calculation
2. Using formulas
3. Managing caches to improve performance
4. Using two-pass calculation
5. Aggregating #MISSING values
6. Removing #MISSING blocks
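A minimal calc script sketch combining parallel calculation with #MISSING aggregation (the thread count is illustrative; as noted above, parallel calculation requires uncommitted access):

    SET CALCPARALLEL 4;   /* use up to 4 threads for this calculation */
    SET AGGMISSG ON;      /* consolidate #MISSING values for faster aggregation */
    CALC ALL;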

A Few Optimization Techniques in Essbase


With the essential features available in Essbase you can load huge volumes of data into Essbase cubes, run reports, and perform complex calculations. As you keep adding different features to your application, performance will degrade. As noted, Essbase provides various performance tuning techniques along with its features, which keep the application well optimized.
Optimization can be done in many places, such as:
Outline Optimization:
1) Arrange the dimensions in the "Hour Glass Model".
The outline should start with the dense dimension with the most stored members and continue down to the dense dimension with the fewest stored members; it then starts with the sparse dimension with the fewest stored members and continues to the sparse dimension with the most stored members.
2) Use the member storage properties efficiently.
If a dimension exists just to host different types of data, such as scenarios, there is no point in rolling up the lower values to a higher level; in this situation you can tag the dimension as "Label Only" and assign the no-consolidation operator to the members under it.
Some calculation results really do not need to be stored in the database; in that case, tag the members concerned with the "Dynamic Calc" property.
Data Load Optimization:
1) In the data file, the fields should start with sparse dimension members, then dense dimension members, and then the data field.
2) If the same field repeats in all the records in the data file, ignore that field in the load and keep that member in the "Header Definition"; this saves buffer memory and speeds up the data load.
Report Script Optimization:
1) In the report script, specify the sparse dimensions first and then the dense dimensions. Sparse dimensions create the data blocks within which the data cells are available, so specifying dense first does not make sense; to speed up the process, specify the data blocks first (sparse dimensions) and then the data cells (dense dimensions).
2) Put the dimensions that are not required to be displayed in the report in the page.
3) Use the special commands to increase report performance:
SUPMISSINGROWS: suppresses the rows with missing data.
SUPHEADING: suppresses the headings.
SUPBRACKETS: suppresses the brackets around negative values.
SUPEMPTYROWS: suppresses the empty rows.
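A minimal report script sketch, assuming Sample.Basic-style dimension names (sparse Scenario on the page, sparse Market and Product on rows, dense Year on columns):

    <PAGE (Scenario)
    <ROW (Market, Product)
    <COLUMN (Year)
    {SUPMISSINGROWS}
    {SUPBRACKETS}
    <ICHILDREN Market
    <CHILDREN Product
    !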
Calculation Script Optimization:
1) Use the SET commands to increase calculation performance:
SET MSG SUMMARY: sets the message level to summary.
SET AGGMISSG ON: consolidates #MISSING values, which speeds up aggregations.
SET CACHE HIGH: increases the calculator cache size.
SET NOTICE LOW: sets the completion notices to low.
2) Perform calculations on only the required part of the database using the FIX command.
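A minimal calc script sketch combining these SET commands with a FIX, assuming Sample.Basic-style members (Actual, Jan, Measures, and Product are illustrative):

    SET MSG SUMMARY;
    SET NOTICE LOW;
    SET AGGMISSG ON;
    SET CACHE HIGH;
    /* Calculate only the Actual scenario for January */
    FIX ("Actual", "Jan")
        CALC DIM ("Measures", "Product");
    ENDFIX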