InfoObject Maintenance
If you mark the characteristic as an exclusive attribute, it can only be used as a display attribute
for another characteristic, but not as a navigational attribute. In addition, the characteristic cannot
be transferred into InfoCubes. But it can have master data.
If this indicator is set, a transfer routine is defined for the InfoObject. This routine is integrated
into all transfer rules where the InfoObject is contained in the communication structure. With
data transfers, the logic contained in the individual transfer rules (from transfer structure to
communication structure) runs first. The transfer routine is then carried out on the value of the
corresponding field in the communication structure for every InfoObject that has a transfer
routine and is contained in the communication structure.
In the transfer routine, you can define DataSource-independent code that you only have to
maintain once, but is valid for all transfer rules.
The key is comprised of the characteristic value, the compounding characteristics (if any), OBJVERS (A = active, M = modified) and CHANGED (D = to be deleted, I = to be inserted).
Q Table: for time-dependent attributes. Similar to the P table, but with the addition of the Date To field in the key.
X Table: for the time-independent navigational attributes. The important fields are as shown below.
The key is comprised of the SID of the characteristic value, Compounding characteristics (if any)
and then the SIDs of the navigational attributes.
Y Table: for the time-dependent navigational attributes. The structure is similar to the X table above, with the addition of the Date To field in the key.
Hierarchy Tables
H Table
This table contains the structure of the hierarchy. The fields of the table are as shown
Hierarchy nodes (like characteristic values) are not stored with actual values, but with master
data IDs (SIDs) in aggregates. The conversion of hierarchy nodes into SIDs is stored in this
table. Contrary to characteristic values, hierarchy nodes get negative SIDs.
Once master data is loaded, a display attribute can still be converted into a navigational attribute, but not vice versa. The general rule: if the data structure would only be enhanced, the change can be made; but if fields of the data structure would need to be deleted (e.g. converting a time-dependent display attribute to time-independent), this is not possible without deleting the master data.
If an ODS object is stored for a characteristic in order to check the characteristic values, the valid characteristic values are determined from the ODS object, and not from the master data of the characteristic, in the update and transfer rules. The ODS object must have the characteristic itself and all the compounding fields as key fields.
Take warehouse stock, for example: the stock changes over time, so there is a particular stock value at a particular point in time, but the values cannot be aggregated meaningfully over time. Non-cumulative key figures can be used to model such key figures.
There are two possibilities for non-cumulative key figures:
1. Non-cumulative with non-cumulative change
In this case, once the initialization is done, the changes are uploaded into the cube. By default, the aggregation for non-cumulative key figures is set to SUM. As shown below, the non-cumulative change ZNBASEQTY is the key figure which actually stores the changes. When the non-cumulative key figure is included in an InfoCube, for example, it is actually the change key figure that is included, and the non-cumulative key figure is calculated at OLAP runtime.
2. Non-cumulative with inflow and outflow
The inflow is the positive quantity, and the outflow is the negative quantity.
For more information, please also see the link below:
http://help.sap.com/saphelp_nw04/helpdata/en/80/1a62dee07211d2acb80000e829fbfe/frameset.htm
While scheduling the infopackage, you need to select the Opening Bal for the initial upload of
the data.
If you select Initialization run for non-cumulatives, the InfoSource for constructing a non-cumulative is used. If you remove the selection, the structure may not contain any non-cumulative values.
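The runtime derivation described above (opening balance plus non-cumulative changes) can be sketched as follows. This is a simplified Python illustration, not SAP code; the values and names are hypothetical:

```python
from datetime import date

# Simplified model of a non-cumulative key figure: an opening balance
# (initial upload) plus dated non-cumulative changes (the deltas that a
# change key figure such as ZNBASEQTY would store in the cube).
opening_balance = 500              # stock at initialization
changes = [                        # (posting date, non-cumulative change)
    (date(2007, 1, 10), +200),     # inflow  (positive quantity)
    (date(2007, 1, 20), -100),     # outflow (negative quantity)
    (date(2007, 2, 5),  +50),
]

def stock_at(key_date):
    """Stock at a point in time = opening balance + all changes up to that
    date, mirroring what the OLAP processor computes at query runtime."""
    return opening_balance + sum(qty for d, qty in changes if d <= key_date)

print(stock_at(date(2007, 1, 31)))  # 600
print(stock_at(date(2007, 2, 28)))  # 650
```

Summing stock_at over several dates would be meaningless, which is exactly why the non-cumulative value is derived at query time rather than stored and aggregated.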
2. Infocube
Important Numbers
13 possible dimensions, in addition to the default dimensions Package, Units and Time. Each dimension can contain 248 characteristics. A maximum of 233 key figures is possible in the InfoCube.
High Cardinality
While defining dimensions, the High Cardinality indicator should be flagged when the characteristics may have a large number of unique values, leading to a dimension size of around 10 to 20% of the fact table. In that case B-tree indexes are created instead of bitmap indexes.
Transactional Infocube
A transactional InfoCube is a special Basis Cube, developed especially for Strategic Enterprise Management (SAP SEM). The system accesses data in such a cube transactionally; in other words, data is written to the InfoCube (possibly by more than one user at the same time) and instantly read again when required. Standard Basis Cubes are not suitable here.
3. Data Extraction
Once the DataSource is activated in RSA6 and then replicated in BW, the transfer structure is still not created. The transfer structure is created only when the transfer rules are activated. When this is done, the transfer structure is created in both the BW system and the source system. The transfer structure is similar to the extract structure, but does not contain the fields which are hidden in the extract structure.
Extract Structure
As seen below, hidden fields are not a part of the transfer structure.
Transfer rules are source system specific, while update rules are specific to the data
targets.
If you set this flag, a check for referential integrity is performed for this InfoObject against the master data table or ODS object; the InfoObject is checked for valid characteristic values. Under Dependencies in the InfoObject maintenance, you can set which object (master data table or ODS object) the InfoObject is to be checked against.
If there is no SID for the characteristic values uploaded, then the following error may crop up
during extraction.
1. Always update data, even if no master data exists for the data.
In this case, data is uploaded even if no master data is maintained for the characteristic values. The SIDs are generated while uploading the data, and the master data can be uploaded afterwards.
2. Do not update data if no master data exists for the data.
In this case, if no SIDs exist for the characteristic values in the master data, the extraction terminates with an error. For an example, see below.
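The difference between the two options can be sketched as follows; this is a hypothetical Python illustration of the SID handling, not actual SAP code:

```python
def load_with_sid_check(values, master_sids, always_update=True):
    """Sketch of the two options: with always_update, values missing from the
    master data get new SIDs generated during the upload; without it, the
    load terminates with an error on the first unknown value."""
    sids = dict(master_sids)                 # characteristic value -> SID
    next_sid = max(sids.values(), default=0) + 1
    for value in values:
        if value not in sids:
            if not always_update:
                raise ValueError(f"No SID exists for characteristic value {value!r}")
            sids[value] = next_sid           # SID generated while uploading
            next_sid += 1
    return sids

print(load_with_sid_check(["A", "B", "C"], {"A": 1, "B": 2}))  # C gets SID 3
```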
Error Handling
Repair Request
Note
Posting such requests can lead to duplicate data records in the data target.
In the start routine, the table DATAPAK (of type TAB_TRANSTRU) contains the extracted records. These records can be processed in the start routine. Setting ABORT <> 0 will skip this data package.
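The mechanics of a start routine can be sketched like this (a Python stand-in for the ABAP routine; the filter condition is purely illustrative):

```python
def start_routine(datapak):
    """Sketch of a start routine: the whole data package (DATAPAK) is
    available as a table and can be filtered or modified before the transfer
    rules run. Returns (records, abort); abort <> 0 skips the package."""
    filtered = [rec for rec in datapak if rec.get("ORDTYPE")]  # illustrative filter
    abort = 0 if filtered else 4     # nothing left -> skip this data package
    return filtered, abort

pak = [{"ORDER": "10000", "ORDTYPE": "YGT1"},
       {"ORDER": "10001", "ORDTYPE": ""}]
records, abort = start_routine(pak)
print(len(records), abort)  # 1 0
```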
As seen above for the transfer routine for the characteristic ZORDTYPE, the entire transfer structure is available in TRANS_STRUCTURE, and RECORD_NO gives the current record number in the data package. RESULT takes the computed value of ZORDTYPE.
As seen above, the entire DATA_PACKAGE is available in the form of an internal table, in which the calculations can be carried out. The number of records is available in the RECORD_ALL parameter. If ABORT <> 0, the update process is cancelled.
Update rules
Update Type
Addition: the key figure is added for records with the same key. For ODS objects this can also be Overwrite, in which case the record overwrites the record with the same key in the ODS object.
No Update: in this case, the key figure is not written to the data target.
Update Method
1. The key figure can be updated directly from the InfoSource key figure.
2. The key figure can be filled with a formula, in which system fields as well as the other fields from the InfoSource can be used.
3. Routine: an update routine can be used.
As seen above, the COMM_STRUCTURE contains the details of the record, RECORD_NO contains the current record number, and RECORD_ALL contains the total number of records in the data package.
If the Return table flag is set, the routine does not return a single record in the form of RESULT, but multiple records in the form of an internal table, RESULT_TABLE.
6. Initial Value
This can be useful if the time granularity of the InfoCube is finer than the granularity of the InfoSource time field. In such a case, the key figure is distributed equally across the complete time period. For example, suppose the record for the key figure in the InfoSource is
Calmonth = 04.2007, Key figure = 1000 KG
When time distribution is applied to this record in the update rules, with the time characteristic in the InfoCube being 0CALDAY, the single record is distributed evenly across the month. So we have the following records in the InfoCube
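The even distribution described above can be sketched as follows (a simplified Python illustration; the record layout is hypothetical):

```python
import calendar

def time_distribution(year, month, amount):
    """Distribute a monthly key figure evenly across the days of the month,
    as the time-distribution update rule does when the cube uses 0CALDAY."""
    days = calendar.monthrange(year, month)[1]
    return [(f"{day:02d}.{month:02d}.{year}", amount / days)
            for day in range(1, days + 1)]

records = time_distribution(2007, 4, 1000)  # Calmonth = 04.2007, 1000 KG
print(len(records))    # 30 daily records for April
print(records[0][0])   # 01.04.2007
```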
Once an InfoSource with direct update is created for an InfoObject, when you select the source system in the InfoSource maintenance, the system creates the three DataSources for attributes, texts and hierarchies (if there are texts and hierarchies).
You cannot delete the attribute if it exists in the transfer rules, so you first need to delete the attribute from the transfer rules. Otherwise there is no problem: the master data is not deleted, only the attribute column. Activation might take some time, since the master data tables are regenerated. (BW 3.50)
The problem is with a compounding characteristic, when you try to remove it. Since part of the key is lost, data deletion is necessary before removing the compounding characteristic. The following is a sample message when trying to activate the characteristic after deleting the compounding characteristic.
Again no problems. The data is not deleted and the master data tables are regenerated.
Adding to an existing dimension is all right. The dimension table is adjusted with an additional
column, but this column has initial values for the old data. There is no necessity of deletion of
the data.
Adding to a new dimension is all right. The new dimension table is created with initial values as
well as the fact tables are adjusted. There is no necessity of deletion of the data.
This is again not a problem. The data is not deleted and is retained; the old records have the initial value in the key figure.
This is not allowed. Only when the InfoCube fact table data is deleted is it possible to do the above.
Note that any changes to the structure of the InfoCube deactivate the update rules. Hence the update rules need to be readjusted and activated, and maybe the transfer rules as well.
Aggregates
Change run
When an aggregate is defined on the navigational attribute of a characteristic, and after a master data load the attribute values of this navigational attribute have changed, the loaded master data is not available for reporting until the attribute change run is executed. If you try to activate the master data, you get the following popup:
The master data cannot be activated directly, since attributes of the characteristic ZNORDER are used in aggregates.
Procedure
Start the change run via Admin. Workbench -> Tools -> Apply Hierarchy/Attribute Change to activate the master data. This means that the aggregates are also adjusted.
Select the InfoObject for which the attribute change run needs to be executed and then schedule it. The aggregates are readjusted in such a case.
Once the change run is executed, the aggregate is readjusted to make sure it is consistent with the changed attribute values. For example:
Prior to the attribute change, Process Order = 10000 has the navigational attribute Order Type = YGT1.
The fact table of the InfoCube, which contains the Process Order, looks like this (simplified):
Process Order | KF1  | KF2  | KF3
10000         | 1000 | 2000 | 3000
10001         | 1000 | 2000 | 3000
10002         | 1000 | 2000 | 3000
10003         | 1000 | 2000 | 3000
Process Order | Order Type
10000         | YGT1
10001         | YGT2
10002         | YGT3
10003         | YGT2
The fact table of the aggregate would look like this (simplified view):
Package DimID | Order Type | KF1  | KF2  | KF3
              | YGT1       | 1000 | 2000 | 3000
              | YGT2       | 2000 | 4000 | 6000
              | YGT3       | 1000 | 2000 | 3000
Suppose the attribute Order Type of Process Order = 10000 changed from YGT1 to YGT3. Now the attribute table will look like this:
Process Order | Order Type
10000         | YGT3
10001         | YGT2
10002         | YGT3
10003         | YGT2
Hence after the change run, the aggregate should look like this:
Package DimID | Order Type | KF1  | KF2  | KF3
              | YGT2       | 2000 | 4000 | 6000
              | YGT3       | 2000 | 4000 | 6000
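The effect of the change run on the aggregate can be reproduced with a small sketch (a Python illustration of the tables above, not SAP code):

```python
from collections import defaultdict

# Simplified fact table of the InfoCube: (process order, KF1, KF2, KF3)
fact_table = [
    (10000, 1000, 2000, 3000),
    (10001, 1000, 2000, 3000),
    (10002, 1000, 2000, 3000),
    (10003, 1000, 2000, 3000),
]

def build_aggregate(order_type_of):
    """Roll the fact table up by the navigational attribute Order Type;
    this is what the change run effectively redoes after the attribute
    values change."""
    agg = defaultdict(lambda: [0, 0, 0])
    for order, kf1, kf2, kf3 in fact_table:
        row = agg[order_type_of[order]]
        row[0] += kf1; row[1] += kf2; row[2] += kf3
    return dict(agg)

before = build_aggregate({10000: "YGT1", 10001: "YGT2", 10002: "YGT3", 10003: "YGT2"})
after  = build_aggregate({10000: "YGT3", 10001: "YGT2", 10002: "YGT3", 10003: "YGT2"})
print(before["YGT1"])  # [1000, 2000, 3000]
print(after["YGT3"])   # [2000, 4000, 6000] -- YGT1 no longer appears
```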
As seen above, the following update modes are available when uploading data using the file interface:
1) Full Upload (ODS Object, InfoCube, InfoObjects)
The DataSource does not support a delta update. If you choose this method, the file will always
be copied completely. This method can be used for ODS objects, InfoCubes, and InfoObjects
(attributes and texts).
2) New Status for Modified Records (Delta only with ODS Objects - FIL0)
The DataSource supports both full update and delta update. Each record to be loaded provides the new status for all key figures and characteristics. This method can only be used for loading into ODS objects, i.e. the records are supposed to contain only after images and hence cannot be uploaded correctly into an InfoCube.
3) Additive Delta (InfoCube and ODS - FIL1)
The DataSource supports the additive delta update as well as the full update. The record to be
loaded for additive key figures provides only the modification to the key figure. This method can
be used for both ODS objects and InfoCubes. The records are supposed to contain both before
and after images.
0RECORDMODE
This attribute describes how a record is updated in the delta process. The various delta processes support different combinations of the six possible characteristic values. If a DataSource implements a delta process that uses several characteristic values, the record mode must be part of the extract structure, and the name of the corresponding field has to be entered in the DataSource as a cancellation field (ROOSOURCE-INVFIELD).
The six characteristic values are as follows:
1) ' ': The record delivers an after image.
The status is transferred after something is changed or added. You can update the record into an InfoCube only if the corresponding before image exists in the request.
2) 'X': The record delivers a before image.
The status is transferred before data is changed or deleted. All record attributes that can be aggregated have to be transferred with a reversed +/- sign. The reversal of the sign is carried out either by the extractor (the default) or by the Service API; in the latter case, the indicator 'Field is inverted in the cancellation field' must be set for the relevant extract structure field in the DataSource. These records are ignored if the update is a non-additive update of an ODS object. The before image is complementary to the after image.
3) 'A': The record delivers an additive image.
For attributes that can be aggregated, only the change is transferred. For attributes that cannot be
aggregated, the status after a record has been changed or created is transferred. This record can
replace an after image and a before image if there are no non-aggregation attributes or if these
cannot be changed. You can update the record into an InfoCube without restriction, but this
requires an additive update into an ODS Object.
4) 'D': The record has to be deleted.
Only the key is transferred. This record (and its DataSource) can only be updated into an ODS object. (The record is deleted from the active table, but the change log contains the exact reverse image, with negative signs, to cancel the records already updated into a subsequent InfoCube, for example.)
5) 'R': The record delivers a reverse image.
The content of this record is the same as the content of a before image. The only difference is with an ODS object update: existing records with the same key are deleted. (For an ODS object the behaviour is similar to recordmode = 'D': the record is deleted, and the before image appears in the change log with recordmode = 'R'.)
6) 'N': The record delivers a new image.
The content of this record is the same as for an after image without a before image. When a
record is created, a new image is transferred instead of an after image. The new image is
complementary to the reverse image.
The table RODELTAM determines which characteristic values a delta process uses (columns UPDM_NIM, UPDM_BIM, UPDM_AIM, UPDM_ADD, UPDM_DEL and UPDM_RIM). The table ensures that only useful combinations of the above values are used within a delta process. When extracting in the 'delta' update mode, a DataSource that uses a delta process can deliver in the extracted records only those characteristic values for the indicator that are specified in the delta process.
When a datasource is delta enabled, it means that a delta update is possible for this datasource.
This datasource does not supply all the records, but only the new or the changed records in
subsequent extractions. How these new or changed records are supplied to BW depends on the
delta process. The delta processes are maintained in the table RODELTAM and could have the
following default values:
Delta only with Full Upload (ODS or InfoPackage Selection)
While updating into an ODS object, 0RECORDMODE is taken into consideration. On checking the generated program, it appears that 0RECORDMODE is of no consequence when uploading data into an InfoCube, but only into an ODS object. If the ODS object data field is set to Overwrite, the before images in the extracted records are ignored in the update rules; if it is set to Addition, the before images as well as the after images are taken into consideration.
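This behaviour can be sketched as follows: a simplified Python model of how the different images act on the active table. The names and logic are illustrative, not the generated ABAP:

```python
def apply_delta(active_table, records, setting="overwrite"):
    """Apply delta records (key, 0RECORDMODE, key figure) to an ODS active
    table. ' ' = after image, 'X' = before image (sign already reversed),
    'A' = additive, 'D' = delete, 'N' = new image, 'R' = reverse image."""
    for key, recordmode, value in records:
        if recordmode in ("D", "R"):
            active_table.pop(key, None)        # record with this key is deleted
        elif recordmode == "X":
            if setting == "addition":          # before image counts only when adding
                active_table[key] = active_table.get(key, 0) + value
        elif recordmode == "A":
            active_table[key] = active_table.get(key, 0) + value
        elif setting == "overwrite":           # ' ' or 'N'
            active_table[key] = value
        else:
            active_table[key] = active_table.get(key, 0) + value
    return active_table

ods = {"4711": 100}
# Change 4711 from 100 to 150: before image (-100) followed by after image (150).
apply_delta(ods, [("4711", "X", -100), ("4711", " ", 150)], setting="overwrite")
print(ods)  # {'4711': 150}
```

In Addition mode the same pair of images also yields 150 (100 - 100 + 150), which is why before images need to be ignored only in the Overwrite case.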
Program RSODSACT1 is used to activate the ODS object data. It calls the function
RSSM_PROCESS_ODSACTIVATE to activate the data.
A set of template programs are used to read/write into ODS objects. Please see the RSTMPL*
programs in SE38 to get a more detailed idea.
The template used to generate the activation program for the ODS is
RSDRO_ACTIVATE_TMPL
The recordmode is the field which indicates what kind of record it is: a deletion image, a reverse image, a before image or an after image. It is significant when uploading data into an ODS object, which automatically takes care of updating the change log so that this data can be uploaded into a subsequent cube. It could also be uploaded into another ODS object, and hence the recordmode comes into the picture; but the recordmode is not used when updating data into the cube.
ODS Object BEx reporting flag
With this indicator you determine whether the ODS object is immediately available for BEx queries.
If the indicator is switched off, then no SIDs have to be determined for the new characteristic values when activating the data in the ODS object. This improves the performance of the activation.
Switch off this indicator for all ODS objects that are essentially used for further processing into
other ODS objects or InfoCubes. It is still possible to define InfoSets with the ODS object and to
carry out queries on it.