
Essbase Optimization

In the last six years I have worked on direct Essbase cubes and Planning Essbase cubes, but
I still could not find a definition of "Essbase optimization" — so let me put my simple
version of Essbase tuning and optimization in plain terms.



Moving on: Essbase cube performance is based mainly on the design of the cube —
dimensions, hierarchies, and member storage types (stored vs. dynamic members). Compare this with
relational databases, where the various optimization techniques have very little to do with
how much data you have. In Essbase it is completely the opposite: the amount of
data decides the fate of the cube. Essbase stores data as blocks and indexes, so
every cube has its own performance profile.
So it would be naive to say that simply increasing CPU/RAM on the Essbase server will improve
cube performance.
SOME THEORY BEFORE WE GET INTO THE REAL BUSINESS......
1) Minimize the number of dimensions by avoiding dimensions that do not offer descriptive
data points. This reduces the complexity and size of the database.
2) Consider using attribute dimensions: they add dimensionality without increasing the
size of the database, they give you multiple calculated views of the data, and they can be
referenced in calculations and member formulas.
3) Sequence of dimension ordering
i) Dense dimensions, largest to smallest — these define the data block and must reside at the top
of the outline.
ii) Aggregating sparse dimensions, smallest to largest — these should reside directly below the
last dense dimension in the outline, which gives them an ideal location within the database for
optimized calculation performance.
iii) Non-aggregating sparse dimensions — I would say these dimensions organize the data into
logical slices, like Scenario, Version, and Period. These dimensions stay out of scope of the
calculation, as I typically freeze them in a FIX statement, and moreover their data is often more
dispersed within the database.
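To see why the dense-dimension choice matters so much, here is a small Python sketch of the block-size arithmetic: Essbase creates one 8-byte cell for every combination of stored members across the dense dimensions. The dimension names and member counts below are made-up assumptions for illustration, not from any real outline.

```python
# Sketch: estimate the uncompressed Essbase block size from the dense
# dimensions. Essbase allocates one 8-byte cell per combination of
# stored members across all dense dimensions, so the block grows
# multiplicatively with every dense member you add.

CELL_BYTES = 8  # each cell is stored as an 8-byte double


def block_size_bytes(dense_stored_members):
    """Product of stored members in each dense dimension x 8 bytes."""
    size = CELL_BYTES
    for members in dense_stored_members.values():
        size *= members
    return size


# Hypothetical outline: Accounts and Period dense, everything else sparse.
dense = {"Accounts": 400, "Period": 13}
print(block_size_bytes(dense))  # 400 * 13 * 8 = 41600 bytes (~41 KB)
```

Doubling the stored members in any one dense dimension doubles every block in the cube, which is why re-tagging a dimension dense/sparse can change performance far more than any hardware upgrade.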
ENOUGH of theory, let's now talk SERIOUS ESSBASE BUSINESS..... One thing, guys:
OPTIMIZATION is a complicated business, but mind you, it complies with "NEWTON'S THIRD
LAW" — every tuning action has a reaction somewhere else in the cube.
1) Basic, simple, yet powerful..........
i) Periodically reset a database: alter database appname.dbname reset;
ii) Explicit restructure: alter database appname.dbname force restructure;
iii) Delayed free-space recovery: alter database appname.dbname recover freespace;
2) Compression — each block uses one of the compression types below:
None
zLib — good for sparse data
Index Value Pair — cannot be assigned directly, but good for large blocks with sparse data
Bitmap — good for non-repeating data
RLE (Run-Length Encoding) — good for data with many zeros and data that repeats
Now you can see how to tune compression: re-organize dimensions and set
compression to RLE to reduce database size. Consider using RLE, because it allows Essbase
to pick RLE, Bitmap, or Index Value Pair per block as needed.
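To build some intuition for why RLE suits zero-filled and repeating data, here is a toy run-length encoder in Python. This is the general technique only, not Essbase's internal implementation:

```python
def rle_encode(cells):
    """Toy run-length encoding: collapse each run of equal consecutive
    values into a [value, run_length] pair. Essbase's real RLE is
    internal and more sophisticated; this just illustrates the idea."""
    runs = []
    for value in cells:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([value, 1])  # start a new run
    return runs


# A block slice dominated by zeros collapses to just three runs...
sparse_like = [0.0] * 10 + [42.0] + [0.0] * 5
print(rle_encode(sparse_like))  # [[0.0, 10], [42.0, 1], [0.0, 5]]

# ...while non-repeating data gains nothing (bitmap fits better there).
varied = [1.0, 2.0, 3.0, 4.0]
print(rle_encode(varied))  # [[1.0, 1], [2.0, 1], [3.0, 1], [4.0, 1]]
```

Sixteen cells shrink to three pairs in the first case but expand to four pairs of two entries each in the second — which is exactly why the compression type should match the shape of your data.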
3) Caches
Index Cache
Holds the most recently used index pages in RAM; the oldest pages are dropped as the cache fills.
Default is 1024 KB.
Generally, set it large enough to hold the entire index in RAM.
The cache can be too big if the index is huge.
Data Cache
Holds the most recently used blocks in RAM; the oldest blocks are dropped as the cache fills.
Default is 3072 KB.
The cache can be too big.
Blocks are held uncompressed in RAM, so they use more data cache than disk space.
Factors affecting cache sizing: database size, block size, index size, available
memory, data distribution, sparse/dense configuration, and database necessities such as the
complexity of calculations.
4) Priority for memory allocation
1. Index Cache
Default is 1024 KB (1,048,576 bytes) for buffered I/O and 10240 KB (10,485,760 bytes)
for direct I/O. From the optimization standpoint, set it to the combined size of all essn.ind files, if
possible; otherwise, as large as possible. Do not set this cache size higher than the total
index size, as no performance improvement results.
2. Data File Cache
Default is 32768 KB (33,554,432 bytes) for direct I/O. To optimize, set it to the combined size
of all essn.pag files, if possible; otherwise, as large as possible.
3. Data Cache
Default is 3072 KB (3,145,728 bytes). To optimize, set it to 0.125 × the combined size of all
essn.pag files, if possible; otherwise, as large as possible.
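As a quick sanity check, the sizing rules of thumb above can be wired into a small Python helper. The file sizes here are made-up examples; on a real server you would sum the actual essn.ind and essn.pag file sizes, and you would still cap the results at available memory:

```python
# Sketch of the cache sizing rules of thumb above:
#   index cache      -> combined size of all essn.ind files
#   data file cache  -> combined size of all essn.pag files (direct I/O)
#   data cache       -> 0.125 x combined size of all essn.pag files
# The "otherwise, as large as possible" fallback is ignored here.

KB = 1024


def recommended_caches(ind_bytes_total, pag_bytes_total):
    """Return recommended cache sizes in KB from total .ind/.pag bytes."""
    return {
        "index_cache_kb": ind_bytes_total // KB,
        "data_file_cache_kb": pag_bytes_total // KB,
        "data_cache_kb": int(0.125 * pag_bytes_total) // KB,
    }


# Hypothetical database: 50 MB of index files, 800 MB of page files.
print(recommended_caches(50 * 1024 * KB, 800 * 1024 * KB))
# {'index_cache_kb': 51200, 'data_file_cache_kb': 819200, 'data_cache_kb': 102400}
```

For this hypothetical cube you would set the index cache around 50 MB, the data file cache around 800 MB (direct I/O only), and the data cache around 100 MB — then measure, because as noted above, oversizing a cache buys nothing.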
