
How does a DataSource communicate "DELTA" with BW?

swapna gollakota, Business Intelligence (BI)


Company: Satyam Computer Services Ltd. Posted on Dec. 27, 2007 10:10 AM in Beginner, Business Intelligence (BI)


Is it not interesting to observe how a delta-capable data source sends its data to BW? If you say yes... here is the blog that captures the entire movie of how a data source communicates the delta with BW.

What is Delta?
It is a feature of the extractor that refers to the changes (new or modified entries) that occurred in the source system.

How to identify?
In the ROOSOURCE table, key in the data source name and check the field "DELTA". If the field is blank, the data source is not delta capable.

The field 0RECORDMODE in BW determines how a record is updated in the delta process.

Now the question is: how will this delta be brought to BW? Using one of the following delta processes:
ABR: After, Before and Reverse Image
AIE: After Image
ADD: Additive Image
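Before walking through each process, here is a minimal Python model (not SAP code; the function and its simplified overwrite semantics are illustrative assumptions) of how a delta target could interpret the record mode values that appear throughout this blog:

```python
# Illustrative model (NOT SAP code) of how an ODS in overwrite mode
# might interpret 0RECORDMODE values when applying a delta record.
# 'N' = new record, ' ' = after image, 'X' = before image,
# 'R' = reverse image, 'D' = delete.

def apply_record(active_table, key, record_mode, value):
    """Apply one delta record to a dict standing in for the active table."""
    if record_mode in ('N', ' '):     # new record or after image: overwrite
        active_table[key] = value
    elif record_mode == 'X':          # before image: ignored in overwrite mode
        pass
    elif record_mode in ('R', 'D'):   # reverse/delete image: remove the record
        active_table.pop(key, None)
    return active_table

table = {}
apply_record(table, '14000420', 'N', 4093)   # initial posting
apply_record(table, '14000420', ' ', 5360)   # after image overwrites
apply_record(table, '14000420', 'R', -5360)  # reverse image deletes the record
```

This is only a sketch of the bookkeeping; the real behaviour also depends on the key-figure update mode (addition vs. overwrite), as the sections below show.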

ABR: After, Before and Reverse Image


Example: Logistics
What is it? Once a new entry is posted or an existing posting is changed on the R/3 side, an after image shows the status after the change, a before image shows the status before the change with a negative sign, and the reverse image also carries a negative sign while marking the record for deletion. This serializes the delta packets.
What update type (for key figures) does it support?


Addition and Overwrite
Does it support loading to both InfoCube and ODS (DSO)? YES
Technical name of the delta process: ABR
Brief overview: You will find two types of ABR delta processes in the RODELTAM table, depending on serialization.

ABR with serialization "2" means serialization is required between requests when sending data, but not necessarily at data package level. ABR1 with serialization "1" means no serialization is required.

Since it can be used for both InfoCube and ODS, let's consider a scenario where loading happens directly to the ODS (advantage: we can track record changes in the change log table). For an ODS/DSO, the field ROCANCEL, which is part of the data source, holds the changes from the R/3 side. ROCANCEL serves the same purpose on the R/3 side that its counterpart 0RECORDMODE does on the BW side. This field of the DataSource is assigned to the InfoObject 0RECORDMODE in the BW system. Check the mapping in the transfer rules (applicable to BW 3.5):

Note: 0STORNO and ROCANCEL are one and the same.


In our case, the ODS is set to additive mode, so the data source sends both before and after images. If it were set to overwrite, it would send only the after image. How does it work? Let's check the new entry in the ODS.

Note: I have taken an example of an ODS that contains CRM data. Now, in the source system, the value of CRM gross weight (CRM_GWEIGH) has been changed to 5360. To reflect this change, the data source will send two entries to BW: one is the before image with a negative sign, to nullify the initial value,

and the other one is the after image entry (the modified value).

Upon activation, the after image goes to the active table.
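The nullify-and-replace arithmetic of this ABR example can be sketched in a few illustrative lines of Python (values taken from the example above; this is not SAP code):

```python
# Illustrative sketch of the ABR delta arithmetic for an ODS whose key
# figure is set to addition. The data source sends a before image with a
# negative sign plus an after image; summing all rows yields the new value.
delta_records = [
    ('N', 4093),    # initial new-record load
    ('X', -4093),   # before image: nullifies the initial value
    (' ', 5360),    # after image: the changed value
]
gross_weight = sum(value for _, value in delta_records)
# 4093 - 4093 + 5360 = 5360
```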

After image delta process:


Example: FI-AP/AR

What update type (for key figures) does it support? Overwrite only
Does it support loading to both InfoCube and ODS (DSO)? No, only ODS/DSO
Technical name of the delta process: AIE
Brief overview: We have after images with a delta queue (AIM/AIMD) or without one (AIE/AIED).

Here, serialization is required between requests, because the same key can be transferred a number of times within a request.

How does it work? Initially the target (ODS) was loaded as shown:

The value of CRM gross weight (CRM_GWEIGH) has been changed to 5360 in the source system. This time, the data source sends only one entry, i.e. the after image, which holds the change.

The final entry after activation in the active table:
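The overwrite behaviour of the after-image delta can be sketched as follows (illustrative Python, not SAP code; the key value is taken from the example):

```python
# Illustrative sketch of an after-image (AIE) delta: the data source sends a
# single record per change, and the ODS key figure must be set to overwrite.
active_table = {'14000420': 4093}   # initial load
after_image = ('14000420', 5360)    # the only record sent for the change
key, value = after_image
active_table[key] = value           # overwrite; no before image is needed
```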

Additive delta process:


Example: LIS data sources
What update type (for key figures) does it support?

Addition only
Does it support loading to both InfoCube and ODS (DSO)? YES
Technical name of the delta process: ADD
Brief overview: In the RODELTAM table, we have two types of additive delta processes:

ADD without delta queue and ADDD with delta queue

The extractor provides additive deltas that are serialized by request. This serialization is necessary because the extractor provides each key only once per request; changes to the non-key fields would otherwise not be transferred correctly. How does it work? Check the initial entry in the ODS.

The value of CRM gross weight (CRM_GWEIGH) has changed to 5360. Here, the data source sends an entry with the value 1,267 with a + sign.

Upon activation, check the final entry in the active table, which is the result of 4,093 + 1,267 = 5,360 KG.
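The additive arithmetic is minimal and can be sketched as follows (values from the example; illustrative Python, not SAP code):

```python
# Illustrative sketch of an additive (ADD) delta: the data source sends only
# the difference, which the target adds to the existing value.
old_value, new_value = 4093, 5360
additive_delta = new_value - old_value    # the extractor sends +1267
active_value = old_value + additive_delta # 4093 + 1267 = 5360
```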

swapna gollakota is a knowledge seeker who wants to explore the SAP BI world to the extent possible.

I hope this blog has helped you refresh your understanding of the delta process.

Comments (31):

overwrite vs. additive 2008-08-06 04:29:08 Thomas Balle

Hi,


very nice blog! In this blog the following is stated:

"In our case, ODS is set to additive mode so that the data source sends both before and after image In case if it is set to overwrite, it sends only after image" However, in another blog (https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/5049) it says: "In the above example Material (1) and Plant (1) has the before image with record mode "x"(row 3 in the above Fig) And all the key figures will be have the "-" sign as we have opted to overwrite option and the characteristics will be overwritten always." In case you choose overwrite, will both before and after images be sent to BW? This blog says no, the second blog I mentioned says yes... Thanks for your help! Thomas


Hi 2008-06-25 22:35:18 Vijaya Salagundi

Hi Swapna,

This blog is very useful. Thanks & Regards, Vijaya

In ABR, to ODS, when to use Additive, when Overwrite? 2008-05-19 04:18:09 m arjun44

Hi Swapna, your blog is very useful. I have a small doubt: in your example of ABR, even if the ODS is set to Additive or to Overwrite, we get the same result:

Additive: 4093 -> 5360
  N  4093
  X  4093-
     5360
  ----
  5360

Overwrite:
  N  4093
  After image 5360, which overwrites the previous value
  ----
  5360

Then when do we use Additive, and when do we use Overwrite? Please clarify. Thanks

In ABR, to ODS, when to use Additive, when Overwrite? 2008-05-19 04:33:59 swapna gollakota

Hi Arjun,

In case of the additive delta mode, in the change log you can track all the changes pertaining to a record, as you mentioned. When it comes to deciding between Additive and Overwrite, it depends on what type of target you are loading. If it is an InfoCube with the ABR delta method, you have only one choice, i.e. additive. In case of ODS/DSO it is purely based on your business requirement; you can select either of the methods. Cheers, Swapna.G


Delta Field is 'E' 2008-05-08 02:42:59 Vishnuvadhan K

If I check the delta field status for the data source 0CUSTOMER_ATTR, it shows 'E'. Please suggest what it means.

Regards, Vishnu

Delta Field is 'E' 2008-05-19 05:10:37 swapna gollakota

Hi

You can check it in the RODELTAM table. E: Unspecific Delta via Extractor (Not ODS-Compatible). It is purely business specific.

cheers, Swapna.G

Delta Field is 'E' 2008-05-08 01:52:44 Vishnuvadhan K

Hi Swapna,

It's a nice blog. I have a different scenario here. We load our ZCUSTOMER/ZPARTNER from the data source 0CUSTOMER_ATTR. There are some fields added to this data source, but these fields are not delta enabled, i.e. when they change in R/3 they do not write to change pointers (and hence no delta). For this reason we have to do a full load every week, which takes very long. Can you suggest how to check and make these new fields delta enabled? Regards, Vishnu

Record mode doubt 2008-04-26 18:00:50 RAJESH GUNDOJI

Can you tell me the difference between Delete 'D' and Reverse Image 'R' for 0RECORDMODE? If both modes do deletion, why were we given two options?

Thanks Rajesh

Record mode doubt 2008-04-26 19:09:49 swapna gollakota

Hi Rajesh, not all data sources are capable of sending reverse images; only ABR is.

For the AIE and ADD delta processes, deletion is represented by "D", whereas for ABR the deletion happens by sending a reverse image.

cheers, Swapna.G

Record mode doubt 2008-04-27 10:57:46 RAJESH GUNDOJI

Please can you tell me why ABR sends a reverse image when there is the option to send Delete? Any particular reason? Thanks, Rajesh

Mapping for 0RECORDMODE 2008-02-28 01:43:53 Bobs Thomas

Hi Swapna,

Your blog is very informative. I have a small doubt: you have mentioned that 0RECORDMODE needs to be mapped to 0STORNO if the delta is ABR, but you have not mentioned which InfoObject it needs to be mapped to in case of AIE. I am not sure, as I don't have ROCANCEL in the table on which I have built a custom data source. Please let me know your suggestions. Thanks a lot.

Mapping for 0RECORDMODE 2008-02-28 22:50:18 swapna gollakota

Hi, one point I want to emphasize is that irrespective of the delta method (ABR/AIE/ADD), in the transfer rules/transformation 0RECORDMODE needs to be mapped to ROCANCEL. All the changes that happen in the source system should be communicated to BW; ROCANCEL on the source-system side and 0RECORDMODE on the BW side serve this purpose.

Regards, Swapna.G

0RECORDMODE - Is it required? 2008-02-25 20:45:20 Anup Kulkarni

Hi Swapna,

Please correct me if I am wrong. All the scenarios explained by you above should run perfectly fine even if 0RECORDMODE is not mapped in transfer rules as long as ' ' image comes after 'X' image. Not mapping 0RECORDMODE in transfer rules defaults value ' ' to this field which is nothing but after image. 0RECORDMODE is useful only when you want to capture deletion. For example, an 'R' image is not followed by a ' ' image in data source. In such scenario, if 0RECORDMODE is not mapped, ODS with Overwrite setting overwrites NEGATIVE 'R' value in its existing record which is wrong result. So can I conclude that mapping 0RECORDMODE in transfer rules is useful only for ODS with OVERWRITE setting and only when data source is sending 'R' and 'D' images as well. Are there any other scenarios where it is useful? Thanks, Anup

0RECORDMODE - Is it required? 2008-02-28 22:48:27 swapna gollakota

Hi, one point I want to emphasize is that irrespective of the delta method (ABR/AIE/ADD), in the transfer rules/transformation 0RECORDMODE needs to be mapped to ROCANCEL. All the changes that happen in the source system should be communicated to BW; ROCANCEL on the source-system side and 0RECORDMODE on the BW side serve this purpose.

0RECORDMODE is mandatory and is the one and only identifying tool to capture all the modifications that happened to one particular record. Regards, Swapna.G

0RECORDMODE - Is it required? 2008-02-29 00:40:40 Anup Kulkarni

Yes, I agree that it is always safe to map 0recordmode.

Now consider a scenario in which data source sends only new or changed records and 0recordmode is not mapped in transfer rules(every record is interpreted as after image ' ' by data target). Then I revisit all the scenarios explained by you with this presumption. The results obtained will still be perfectly fine irrespective of delta method(ABR/AIE/ADD). So, mapping of 0recordmode doesn't matter as long as your data source is sending only new or changed records. The problem arises when you want to capture the deletion! Am I right? Regards, Anup

0RECORDMODE - Is it required? 2008-03-02 22:01:33 swapna gollakota

Hi,

Frankly, I am not sure about the scenario where 0recordmode is not used, thus causing the system to take the after image as the default value (please mention the OSS note/book etc. you referred to, so I can cross-check this). Even if we assume the above statement is correct, the field value "after image" should be related to some InfoObject; I mean, what is responsible for producing the after image (what is that default field/InfoObject name)? Please provide the details; I will verify and let you know. Regards, Swapna.G

0RECORDMODE - Is it required? 2008-03-03 21:56:00 Anup Kulkarni

Every ODS/DSO has a field 'recordmode'. You can verify this in the /BIC/A<ODS>00 table. If 'recordmode' is present in the communication structure, it will be automatically mapped in the update rules. The developer's responsibility is to map it only in the transfer rules; mapping in the update rules happens automatically.

If 'recordmode' is not present in the communication structure, every record will have ' ' against the field 'recordmode' in the ODS, thus making it an after image. This is my interpretation (please correct me if I am wrong). I assume this because master data for every InfoObject has a default record which is ' '. Please let me know what it would be if not ' '. Further, recordmode 'R' and 'D' have the same effect on the ODS - delete the record. You can verify this by writing the following code in a start routine in the update rules:

loop at data_package.
  if <your condition>.
    move 'R' to data_package-recordmode.
    modify data_package.
  endif.
endloop.

Upon activation, records matching <your condition> will not be present in the ODS. This is what makes me interpret that mapping 0recordmode in the transfer rules helps you only when the data source sends deleted records. If your data source sends only new or changed records, mapping and not mapping this field in the transfer rules will have the same effect, and in both cases you will not have any data inconsistency in your ODS. Finally, it is always safe to map 0recordmode in the transfer rules if your data source sends it. No harm in mapping a field :-).

Regards, Anup

0RECORDMODE - Is it required? 2008-04-28 02:03:55 sunmit bhandari

Hi Anup,

Would having a record with ' ' record mode work safely in the case of the additive image process? Considering that the record incoming to the ODS would not have 'A' in its record mode, would it be added to the previous value? Regards, Sunmit.

0RECORDMODE - Is it required? 2008-03-25 02:41:34 swapna gollakota

Hi Anup,

Sorry for the late reply. First of all, let me appreciate your analysis; it motivates me to dig further and unearth the technical concepts behind it. I will certainly come back to you with the details and an explanation. Regards, Swapna.G

Rocancel 'U' 2008-01-28 15:27:00 Derya Akcakaya

Thanks for the nice info. What does 'U' in ROCANCEL mean?

when using 0storno and Addition in ods setting 2008-01-16 03:36:55 gunasekhar raya

Hi,

I have a small doubt: I mapped ROCANCEL to 0STORNO; is that wrong or right? Can we map either 0STORNO or 0RECORDMODE? Please give me clarification on this. My other doubt is: when can I use the addition option in the ODS settings?

when using 0storno and Addition in ods setting 2008-01-17 04:37:06 swapna gollakota

Hi,

In the transfer rules you need to map 0RECORDMODE to 0STORNO or ROCANCEL; as I mentioned in the blog, 0STORNO and ROCANCEL are one and the same. A2: You can use the addition option in two cases: a data source supporting the ABR delta type (which you can check in the ROOSOURCE table), or a data source supporting the ADD delta type.

ABR and ODS/DSO update type 2007-12-28 22:11:31 Sreedhar M

Hi Swapna,

Nice blog. I have a few doubts. As you mentioned, the delta type ABR has After, Before and Reverse images, but in the example the data source sends only two records, the after image and the before image; so my doubt is, where is the reverse image? The second one: you also mentioned that, based on the ODS/DSO update of key figures, if we set it to overwrite the data source will send the after image, and if we set it to addition it will send the after and before images. If the data source sends images based on the ODS settings, then whether the data source is AIE or ABR, do the records it sends depend only on the ODS settings?

Please clarify. thanks and regards Sreedhar

ABR and ODS/DSO update type 2008-01-08 04:51:27 swapna gollakota

Hi Sreedhar,

A1: The reverse image comes into the picture when there is a requirement to delete the request. Say, in our case we have the record shown below:

RECORD  CRM_SALESORG  RECORDMODE  CRM_GWEIGHT
203     14000420      N           4,093

Now the record was deleted in the source system; this change will be reflected in BW as shown below:

RECORD  CRM_SALESORG  RECORDMODE  CRM_GWEIGHT
203     14000420      R           4,093-

If you observe closely, you will see that this somewhat resembles the before image (observe the negative sign), BUT with record mode "R".

A2: Based on the ODS key figure update mode (overwrite/addition), the data source sends the after image, or the after and before images. This flexibility is only possible if the data source is ABR capable. In case of an after image, it is mandatory that the ODS update mode be set to "overwrite".

Hope this is clear; please post if you have any other query. Regards, Swapna.G

Doubt!! 2008-04-15 08:32:16 Prince Joseph

Hi Swapna,

I notice that for certain master data attributes, there are two DataSources available. The difference I could notice is that the delta process for one would be 'A - ALE Update Pointer (Master Data)' and for the other one, AIMD. For example, for Business Partner there are two DataSources: 0UCBU_PART_ATTR (delta process = A) and 0UCBU_PART_ATTR_2 (delta process = AIMD). Since I think the DataSource 0UCBU_PART_ATTR_2 is the more recent version, I activate that, and while doing so I get a warning message saying 'DataSource with delta process AIMD requires a cancellation field.' What is this message? In your experience, which one would you choose? Are there any implications? Your opinion would be helpful. Regards, Prince

Doubt!! 2008-04-18 02:29:34 swapna gollakota

Hi Prince,

Usually, for master data, the conventional delta update was ALE.

The problem with this delta mode is that we don't have a repeat-delta option in case the delta fails; we are forced to do an init with data.

With AIMD (After Images with Deletion ID Using Delta Queue, e.g. BtB), as the description suggests, the delta process uses the delta queue, and here you need to specify the deletion ID/cancellation field (INVFIELD) as well.

Check this SAP note: If a DataSource implements a delta process that uses several characteristic values, the indicator must be a part of the extract structure and be entered in the DataSource as a cancellation field (ROOSOURCE-INVFIELD). Cheers, Swapna.G

0 Record Mode 2007-12-28 05:15:47 Aravinda Ganguri

Hi Swapna,

It's interesting. I have one simple question; it sounds silly: are we using 0RECORDMODE in the 3.5 version?

0 Record Mode 2007-12-28 17:46:17 swapna gollakota

Hi Aravinda,

Sorry for the delay in replying; since I was off I couldn't answer you earlier. The answer is YES: 0RECORDMODE is the fundamental requirement to carry out delta. It helps in identifying the delta. You can check its different types as explained in this blog.

0 Record Mode 2007-12-28 10:40:01 Raj Singh

Yes, we have 0RECORDMODE in 3.5.

SAP NetWeaver 7.0 BI: new DSO "Write-Optimized DataStore"


Martin Mouli
Company: Rapidigm - A Fujitsu Consulting Company. Posted on Aug. 24, 2007 10:11 AM in Business Intelligence (BI), Business Process Expert, Master Data Management (MDM), SAP Developer Network
This blog describes a new DataStore in BI 7.0, the "Write-Optimized DataStore", which supports the most detailed level of history tracking, retains the document status, and allows a faster upload without activation.

In a database system, read operations are much more common than write operations, and consequently most database systems have been read optimized. As the size of main memory increases, more of the database read requests will be satisfied from the buffer system, and the share of disk writes among total disk operations will relatively increase. This has turned the focus onto write-optimized database systems. In SAP Business Warehouse, it is necessary to activate the data loaded into a DataStore object to make it visible for reporting or to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object was introduced: the Write-Optimized DataStore object. The objective of this new DataStore is to save data as efficiently as possible and to process it further without any activation, without the additional effort of generating SIDs, and without aggregation or data-record-based delta. It is a staging DataStore used for a faster upload.

In BI 7.0, three types of DataStore objects exist:
1. Standard DataStore (regular ODS)
2. DataStore object for direct update (APD ODS)
3. Write-Optimized DataStore (new)

In this weblog, I would like to focus on the features, usage and advantages of the Write-Optimized DataStore. The Write-Optimized DSO has been primarily designed to be the initial staging area for source-system data, from where the data can be transferred to a Standard DSO or an InfoCube.
o The data is saved in the write-optimized DataStore object quickly. Data is stored in its most granular form. Document headers and items are extracted using a DataSource and stored in the DataStore.
o The data is then immediately written to the further data targets in the architected data mart layer for optimized multidimensional analysis.
The key benefit of using a write-optimized DataStore object is that the data is immediately available for further processing in the active version: YOU SAVE ACTIVATION TIME across the landscape. The system does not generate SIDs for write-optimized DataStore objects, to achieve a faster upload. Reporting is also possible on the basis of these DataStore objects; however, SAP recommends using the Write-Optimized DataStore as an EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.

Fast EDW inbound layer - an introduction
Data warehousing has developed into an advanced and complex technology. For some time it was assumed to be sufficient to store data in a star schema optimized for reporting. However, this does not adequately meet the needs of consistency and flexibility in the long run. Therefore data warehouses are structured using a layer architecture, with an Enterprise Data Warehouse layer and an architected data mart layer. These layers contain data at different levels of granularity, as shown in Figure 1.

Figure 1: The Enterprise Data Warehouse Layer is a corporate information repository.
The benefits of the Enterprise Data Warehouse Layer include the following:
Reliability, traceability - prevent silos:
o 'Single point of truth'.
o All data has to pass through this layer on its path from the source to the summarized, EDW-managed data marts.
Controlled extraction and data staging (transformations, cleansing):
o Data is extracted only once and deployed many times.
o Merging of data that is commonly used together.
Flexibility, reusability and completeness:
o The data is not manipulated to please specific project scopes (unflavored).
o Coverage of unexpected ad-hoc requirements.
o The data is not aggregated.
o Normally not used for reporting; used for staging, cleansing and one-time transformation.
o Old versions, such as document status, are not overwritten or changed, but useful information may be added.
o Historical completeness - different levels of completeness are possible, from availability of the latest version with change date to a change history of all versions including extraction history.
o Modeled using Write-Optimized DataStore or standard DataStore objects.
Integration:
o Data is integrated.
o Realization of the corporate data integration strategy.
Architected data marts form the analysis and reporting layer, holding aggregated data manipulated with business logic, and can be modeled using InfoCubes or MultiCubes.

When is it recommended to use the Write-Optimized DataStore?
Here are the scenarios for the Write-Optimized DataStore (as shown in Figure 2):
o Fast EDW inbound layer.

o SAP recommends the Write-Optimized DSO to be used as the first layer, the Enterprise Data Warehouse layer. As not all business content comes with this DSO layer, you may need to build your own. You can check in table RSDODSO for version D and type "Write-Optimized".
o There is always a need for faster data loads. DSOs can be configured to be write optimized; the data load then happens faster and the load window is shorter.
o Used where fast loads are essential, for example multiple loads per day, or short source-system access times (worldwide system landscapes).
o If the DataSource is not delta enabled. In this case you would want a Write-Optimized DataStore to be the first stage in BI and then pull the delta request into a cube.
o As a temporary storage area for large sets of data when executing complex transformations before the data is written to the DataStore object. Subsequently, the data can be updated to further InfoProviders; you only have to create the complex transformations once for all incoming data.
o As the staging layer for saving data; business rules are only applied when the data is updated to additional InfoProviders.
o If you want to retain history at request level. In this case you may not need a PSA archive; instead you can use the Write-Optimized DataStore.
o If multidimensional analysis is not required and you want operational reports, you might use the Write-Optimized DataStore first and then feed the data into a Standard DataStore.

Typical data flow using the Write-Optimized DataStore

Figure 2: Typical data flow using the write-optimized DataStore.

Functionality of the Write-Optimized DataStore (as shown in Figure 3):
Only an active data table (DSO key: Request ID, Packet No. and Record No.):
o No change log table and no activation queue.
o The size of the DataStore is maintainable.
o The technical key is unique.
o Every record has a new technical key; only inserts.

o Data is stored at request level, like a PSA table.
No SID generation:
o Reporting is possible (but without optimized performance).
o BEx Reporting is switched off.
o Can be included in an InfoSet or MultiProvider.
o Performance improvement during data load.
Fully integrated in the data flow:
o Used as both data source and data target.
o Export into InfoProviders via request delta.
Uniqueness of data:
o Checkbox "Do not check uniqueness of data".
o If this indicator is set, the active table of the DataStore object can contain several records with the same key.
Partitioned on request ID (automatic). Allows parallel loads. Can be included in a process chain without an activation step. Supports archiving.
You cannot use reclustering for write-optimized DataStore objects, since this DataStore's data is not meant for querying; you can only use reclustering for standard DataStore objects and DataStore objects for direct update. Because the Write-Optimized DataStore is partitioned on request ID automatically, you may not need to create an additional partition manually on the active table. Optimized write performance is achieved by request-level insertions, similar to the F table of an InfoCube: as we are aware, the F fact table is write-optimized while the E fact table is read-optimized.
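The insert-only write behaviour with the generated technical key can be modelled in a few lines of illustrative Python (the field names 0REQUEST, 0DATAPAKID and 0RECORD come from the text; the function, package size and record layout are invented for illustration; this is not SAP code):

```python
# Illustrative model (NOT SAP code) of the write-optimized DataStore
# technical key (request ID, package number, record number): every incoming
# record gets a fresh technical key, so loads are pure inserts even when the
# logical key repeats.
import itertools

def load_request(active_table, request_id, records, package_size=2):
    record_no = itertools.count(1)
    for pkg_no, start in enumerate(range(0, len(records), package_size), start=1):
        for rec in records[start:start + package_size]:
            tech_key = (request_id, pkg_no, next(record_no))
            active_table[tech_key] = rec   # insert only, never overwrite

table = {}
load_request(table, 'REQ1', [{'order': 123456, 'qty': 10},
                             {'order': 123456, 'qty': 12}])
# Both rows survive although the logical key (order number) is identical.
```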

Figure 3: Overview of the various DataStore object types in BI 7.0.
To define a Write-Optimized DataStore, just change "Type of DataStore Object" to "Write-Optimized", as shown in Figure 4.

Figure 4: Technical settings for the Write-Optimized DataStore.

Understanding Write-Optimized DataStore keys:
Since data is written directly into the write-optimized DataStore's active table, you do not need to activate the request, as is necessary with the standard DataStore object. The loaded data is not aggregated; the history of the data is retained at request level. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains; the aggregation of the data can take place later in standard DataStore objects. The system generates a unique technical key for the write-optimized DataStore object, consisting of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD), as shown in Figure 4. Only new data records are loaded against this key.

The standard key fields are not necessary with this type of DataStore object; you can define a Write-Optimized DataStore without a standard key. If standard key fields exist anyway, they are called semantic keys, so that they can be distinguished from the technical key. Semantic keys can be defined as standard keys in a further target DataStore. The purpose of the semantic key is to identify errors or duplicates in the incoming records: all subsequent data records with the same key are written to the error stack along with the incorrect data records, and are not updated to the data targets. A maximum of 16 key fields and 749 data fields are permitted. Semantic keys protect data quality; they do not appear at the database level. In order to process error records or duplicate records, you have to define a semantic group in the DTP (data transfer process), which defines the key used for evaluation, as shown in Figure 5. If you assume that there are no incoming duplicates or error records, there is no need to define a semantic group; it is not mandatory. The semantic key determines which records should be detained during processing. For example, if you define order number and item as the key and you have one erroneous record with order number 123456, item 7, then any other records received in that same request or subsequent requests with order number 123456, item 7 will also be detained. This applies to duplicate records as well.
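The detention rule described above can be sketched as a small illustrative Python model (the function name, the 'erroneous' flag and the record fields are invented for illustration; this is not SAP code):

```python
# Illustrative model (NOT SAP code) of semantic-key detention in a DTP:
# once one record for a semantic key lands in the error stack, later records
# with the same key, in the same or subsequent requests, are detained too.
def process(records, semantic_key, error_stack, target):
    for rec in records:
        key = tuple(rec[f] for f in semantic_key)
        if key in error_stack or rec.get('erroneous'):
            error_stack.setdefault(key, []).append(rec)  # detain in error stack
        else:
            target.append(rec)                           # update to data target

stack, target = {}, []
process([{'order': 123456, 'item': 7, 'erroneous': True}],
        ('order', 'item'), stack, target)
process([{'order': 123456, 'item': 7},   # detained: same semantic key as error
         {'order': 999, 'item': 1}],     # clean record: reaches the target
        ('order', 'item'), stack, target)
```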

Figure 5: Semantic group in the data transfer process.
The semantic key definition integrates the write-optimized DataStore and the error stack through the semantic group in the DTP, as shown in Figure 5. With SAP NetWeaver 2004s BI SPS10, the write-optimized DataStore object is fully connected to the DTP error stack function. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that you define a semantic key and run a check to ensure that the data is unique; in this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.

Delta administration:
Data that is loaded into write-optimized DataStore objects is available immediately for further processing; the activation step that was previously necessary is no longer required. Note that the loaded data is not aggregated: if two data records with the same logical key are extracted from the source, both records are saved in the DataStore object, since each record receives its own unique technical key. The record mode (0RECORDMODE) responsible for aggregation remains; the aggregation of the data can take place at a later time in standard DataStore objects. The write-optimized DataStore does not support image-based delta; it supports request-level delta, and you get a brand-new delta request for each data load. Since write-optimized DataStore objects do not have a change log, the system does not create a delta in the sense of a before image and an after image; when you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted. In order to capture before- and after-image deltas, you have to post the latest request into further targets such as a Standard DataStore or InfoCubes.
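The request-level delta mechanism can be sketched in a few illustrative lines of Python (the function and request names are invented; this is not SAP code):

```python
# Illustrative model (NOT SAP code) of request-level delta from a
# write-optimized DataStore: downstream targets receive only the requests
# that have not yet been posted to them.
def delta_requests(dso_requests, posted_to_target):
    """Return the requests still to be updated into the target."""
    return [r for r in dso_requests if r not in posted_to_target]

loaded = ['REQ1', 'REQ2', 'REQ3']        # requests sitting in the WO-DSO
posted = {'REQ1'}                        # already updated to the InfoCube
pending = delta_requests(loaded, posted) # only REQ2 and REQ3 are delivered
```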
Extraction method: Transformations via DTP, or Update Rules via an InfoSource. Before using a DTP, you must migrate the 3.x DataSource to a BI 7.0 DataSource using transaction code RSDS, as shown in Figure 6.

Figure 6: Migration of a 3.x DataSource to a BI 7.0 DataSource using transaction RSDS; afterwards, replicate the DataSource into BI 7.0.

After the DataSource has been replicated into BI 7.0, you create a data transfer process (DTP) to load data into the write-optimized DataStore object. Write-optimized DataStore objects can force a check of the semantic key for uniqueness when data is stored. If this option is active and duplicate records (with regard to the semantic key) are loaded, they are logged in the error stack of the data transfer process (DTP) for further evaluation.

In BI 7.0 you have the option to create an error DTP. If any data errors occur, the erroneous records are stored in the error stack; you can correct them there and then schedule the error DTP to post the corrected data to the target. Otherwise, you have to delete the error request from the target and reschedule the DTP. To integrate a write-optimized DataStore object into the error stack, you must define semantic keys in the DataStore definition and create a semantic group in the DTP, as shown in Figure 5.

The semantic group definition is also necessary for parallel loads to a write-optimized DataStore object. You can update write-optimized DataStore objects in parallel after implementing OSS Note 1007769. When you include a DTP for a write-optimized DataStore object in a process chain, make sure there is no subsequent activation step for this DataStore. Alternatively, you can connect this DSO through an InfoSource with update rules, using the 3.x functionality.

Reporting on Write-Optimized DataStore Data: For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries; however, compared with standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting.
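As a rough sketch of the optional uniqueness check (again plain Python with made-up field names, not the actual DTP implementation), a second record arriving with the same semantic-key values is not stored but is logged for later evaluation:

```python
# Illustrative sketch: when the "check semantic key for uniqueness" option
# is active, duplicates with regard to the semantic key go to the error
# stack instead of being stored. Not SAP code; field names are invented.

def load_with_uniqueness_check(records, semantic_key):
    seen = set()
    stored, error_stack = [], []
    for rec in records:
        key = tuple(rec[f] for f in semantic_key)
        if key in seen:
            error_stack.append(rec)   # duplicate w.r.t. semantic key
        else:
            seen.add(key)
            stored.append(rec)
    return stored, error_stack

stored, errors = load_with_uniqueness_check(
    [{"doc": "A1", "val": 1}, {"doc": "A1", "val": 2}, {"doc": "B2", "val": 3}],
    semantic_key=("doc",),
)
```

Here the second "A1" record lands in the error stack, where it could be corrected and re-posted via an error DTP, matching the flow described above.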
However, it is recommended that you use write-optimized DataStore objects as a consolidation layer and update the data to standard DataStore objects or InfoCubes. From the OLAP/BEx query perspective there is no big difference between a write-optimized DataStore and a standard DataStore: the technical key is not visible for reporting, so the look and feel is just like a regular DataStore. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that it have a semantic key and that you run a check to ensure that the data is unique; in this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query. In a nutshell, a write-optimized DSO is not meant for reporting; it is a staging DataStore used for faster uploads. Direct reporting on this object is possible without activation, but from a performance perspective you should use an InfoSet or a MultiProvider instead.

Conclusion: With a write-optimized DataStore, you have a snapshot of each extraction. Because the data is stored at request level, it can be used for trending old KPIs or deriving new KPIs at any time. Moreover, you need not worry about the status of extracted documents in BI, since the data is stored as of the extraction date. Although help documentation on write-optimized DataStore objects is available from SAP, I thought it would be useful to write this blog to give a clear view of the write-optimized DataStore concept and the typical scenarios of where, when and how to use it; you can customize the data flow and data model as per your requirements. A more detailed step-by-step technical document will be released soon.

Useful OSS notes: Please check the latest OSS notes and support packages to overcome any technical difficulties, and make sure to implement any that are required.
OSS Note 1077308: In a write-optimized DataStore object, 0FISCVARNT is treated as a key, even though it is only a semantic key.
OSS Note 1007769: Parallel updating in write-optimized DataStore objects
OSS Note 966002: Integration of write-optimized DataStore in the DTP error stack
OSS Note 1054065: Archiving support

You can attend the SAP class DBW70E, BI Delta Enterprise Data Warehousing, SAP NetWeaver 2004s, or visit http://www50.sap.com/useducation/

References:
http://help.sap.com/saphelp_nw04s/helpdata/en/f9/45503c242b4a67e10000000a114084/content.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5c46376d-0601-0010-83bfc4f5f140e3d6
http://help.sap.com/saphelp_sem60/helpdata/en/b6/de1c42128a5733e10000000a155106/content.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2b582e94bcf8a

Martin Mouli is a senior SAP Consultant for Rapidigm, a Fujitsu Consulting company.

Showing messages 1 through 7 of 7.

Question Related to partitioning


2008-09-08 14:52:55 DilipKumar Vedanthachari: As per this blog we are using a Write-Optimized DataStore, and our active ODS table is /BIC/AZS_OCOPA00, but we observed that the table is not partitioned by request. We have loaded 3 different requests to the ODS, but if I look in SE14 or at the Oracle database level, the table is not partitioned by request. The document states "Partitioned on request ID (automatic)".

Can you please advise if I am missing something? Thanks, Dilip

Dataload performance 2008-06-25 03:04:05 Takashi F: Hi Martin,

This is great work and very helpful! I have one question: is the data load performance of a Write-Optimized DataStore faster than PSA? Regards, TF

Partitioning 2008-04-15 11:45:33 Vitaliy Rudnytskiy: You mentioned that the table is partitioned, but I actually do not see partitioning on the DSO table when I look into the system.

Can you please refer to the source of this information? Thanks again, -Vitaliy

Performance of Delta extraction from W-o DSO 2008-04-13 21:07:16 Vitaliy Rudnytskiy: Hi, have you noticed performance issues when extracting deltas (with DTPs) from a Write-Optimized DSO? I have big load requests (~10 million records), and when extracting a delta from the W-O DSO, the first data package takes a very long time (about 1 hour) and then the subsequent data packages are quite fast, about 1 minute each.

Do you have any experience with that issue? Thanks, -Vitaliy

Performance of Delta extraction from W-o DSO 2008-04-15 15:44:04 Martin Mouli: Hi Vitaliy, that seems odd to me; please open an OSS message with SAP. There are many postings on SDN about whether this is related to a DTP issue or to volume. Thank you, Martin

Really saves time 2008-01-18 13:00:34 Raguman AR: Thanks, all the info needed about the Write-Optimized DSO is in one place and in detail.

Informative 2008-01-15 08:36:16 swapna gollakota: I need to bookmark this link for whenever I have to refer to DSO concepts.