
1. Info Objects
2. DSO: DataStore objects permit complete, granular (document-level) and historic data storage.

As with DataSources, the data is stored in flat database tables. A DataStore object consists of a key (for example, document number and item) and a data area. The data area can contain both key figures (for example, order quantity) and characteristics (for example, order status). In addition to aggregating the data, you can also overwrite the data contents, for example to map status changes of an order. This is particularly important with document-related structures.

A DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level, and this data can be evaluated using a BEx query. A DataStore object contains key fields (such as document number and document item) and data fields that, in addition to key figures, can also contain characteristics (such as order status or customer). The data from a DataStore object can be updated with a delta update into InfoCubes (standard) and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems. Unlike multidimensional data storage in InfoCubes, the data in DataStore objects is stored in transparent, flat database tables; the system does not create fact tables or dimension tables.

In BI 7.0, three types of DataStore objects exist:
1. Standard DataStore (regular ODS)
2. DataStore object for direct update (APD ODS)
3. Write-optimized DataStore (new)

Standard DataStore (Regular ODS)
Features of the Change Log and Activation Queue of a Standard DataStore Object (DSO) in BI 7.0

Motivation for the DSO
Consolidation and cleansing
o A further motivation is the need for a place where data can be consolidated and cleansed. This is important when we upload data from completely different source systems.
o After consolidation and cleansing, the data can be uploaded to InfoCubes.
To store data on document level
Overwrite capability for characteristics
o It is not possible to overwrite data in an InfoCube, because whenever data is added to an InfoCube it is aggregated. Data can be overwritten in a DSO, and this provides a significant capability to BW (see the sketch after this list).
Reporting
o Directly on document-level data
o Drilldown from the InfoCube to document level
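To make the overwrite behaviour concrete, here is a minimal, hypothetical sketch in plain Python (not SAP code) of how the DSO key decides whether an incoming record replaces or aggregates the stored values; the field names (DOC_NUMBER, DOC_ITEM, ORDER_QTY, ORDER_STATUS) and the helper function are invented for illustration.

```python
# Hypothetical illustration of standard DSO semantics: records share a semantic key
# (e.g. document number + item); characteristics are always overwritten, while each
# key figure is either overwritten or summed, depending on its update setting.

def update_dso(active_table, record, key_fields, keyfigure_mode):
    """Apply one incoming record to the active data, DSO-style."""
    key = tuple(record[f] for f in key_fields)
    stored = active_table.get(key)
    if stored is None:
        active_table[key] = dict(record)          # first version of this document/item
        return
    for field, value in record.items():
        if field in key_fields:
            continue
        if keyfigure_mode.get(field) == "SUM":    # key figure flagged for summation
            stored[field] += value
        else:                                     # characteristics and 'overwrite' key figures
            stored[field] = value

active = {}
key_fields = ["DOC_NUMBER", "DOC_ITEM"]
modes = {"ORDER_QTY": "OVERWRITE"}                # status changes simply replace old values

update_dso(active, {"DOC_NUMBER": "4711", "DOC_ITEM": "10",
                    "ORDER_QTY": 5, "ORDER_STATUS": "OPEN"}, key_fields, modes)
update_dso(active, {"DOC_NUMBER": "4711", "DOC_ITEM": "10",
                    "ORDER_QTY": 5, "ORDER_STATUS": "DELIVERED"}, key_fields, modes)
print(active)   # one record per key, carrying the latest status
```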

"ODS Objects consist of three tables as shown in the architecture" - Source: SAP Docs

Fig. A - ODS Object Structure (C) SAP

The Transition: ODS Objects (3.x) to DSO (BI 7.0)
The ODS consists of consolidated data from several InfoSources on a detailed (document) level, in order to support document analysis. In this context, the PSA makes up the first level and the DSO table the second level: the first level holds the transaction data as delivered by the source system, and the second level holds the consolidated data from several source systems and InfoSources. You can run the analysis directly on the contents of the table, or reach it from an InfoCube query by means of a drilldown.

Fig. B - Sample schema for reporting using ODS objects (using update rules and transfer rules). Note: UR refers to update rules.

Prior to the existence of the DSO, decisions on granularity were based solely on the data in the InfoCube. Now the InfoCube can be less granular, with data held for a longer period of time, while the DSO can be very granular but hold data for a shorter period of time. Data from the ODS can be updated into appropriate InfoCubes or other ODS objects. Reporting on the ODS can be done with the OLAP processor or directly with an ODS query.

In Fig. B, data from DataSource A and DataSource B is uploaded to a PSA; the PSA (Persistent Staging Area) forms the first staging level in this flow. From the PSA we have the possibility, via transfer rules, to upload the data to the DSO. The DSO is represented here as one layer, but depending on the business scenario the DSO can be structured with multiple levels. Thus, ODS objects offer data that is subject oriented, consolidated and integrated with respect to the same process on different source systems. After the data has been stored, or while it is updated in the ODS, we have the option of making technical changes as well as data changes. In the ODS, data is stored in a denormalized data structure.

Structure of the ODS
While transferring data from the PSA to ODS objects, rules (transfer rules) can be applied to clean records and transform them to company-wide standards for characteristic values. If it is meaningful at this stage, business logic may also be applied (update rules).

Sample Scenario for a Standard DSO
Consider an example involving a standard DSO in SAP BI 7.0. Looking at the flat-file records, the key fields are customer and material, and we have a duplicate record (check record 2). The 'Unique Data Records' option is unchecked, which means the DSO can expect duplicate records.

Figure C explains how records are captured in a DSO (refer to the selected options below).

After the update rule, record 2 in the PSA is overwritten because it has the same keys; it is overwritten with the most recent record. The key here is [M1000 | Customer A]. If we look at the monitor entries, 3 records are transferred to the update rules and two records are loaded into the activation queue table. This is because we have not activated the request yet, and the duplicate record for the key gets overwritten in the DSO. Note: the activation queue can also be referred to as the 'New Data' table.

The key figures have the overwrite option by default; additionally we have the summation option to suit certain scenarios. Characteristics are always overwritten.

Naming conventions
The technical name of the new data / activation queue table is /BIC/A<DSO name>40 for customer objects and /BI0/A<DSO name>40 for SAP-delivered objects. The active data table is named /BIC/A<DSO name>00 (/BI0/... for SAP objects). The change log table always gets a generated technical name starting with /BIC/B.

Once we activate, we will have two records in the DSO's active data table. The active data table always contains the semantic key (e.g. customer and material).

Change Log
The change log table has 2 entries with the image 'N' (which stands for 'New'). The technical key (REQID, DATAPACKETID, RECORDNUMBER) is part of the change log table. (Refer to Fig. D.)

Fig. D - Data is loaded to the change log (CL) and active data table (ADT); please refer to Fig. A for more details.

Introducing a few changes, we get the result shown in Fig. E.

Fig. E - Changes introduced from the flat file are reflected from PSA to ADT and from PSA to CL.

Detailed Study on Change Logs
We will check the change log table to see how the deltas are handled. The records from the first request are uniquely identified by the technical key (request number, data packet number, partition value of the PSA and data record number). With the second request, the change log table records the before and after image for the relevant records.

Fig. F - Study of the change log showing how the deltas are handled.

In this example, the customer/material record has a before image with record mode "X". Note that all key figures carry a "-" (reversed) sign in the before image if we opted for the overwrite option, and characteristics are always overwritten. A new record (last row in Fig. F) is added with the status "N", as it is a new record.
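The sign handling described above can be sketched as follows; this is a simplified, hypothetical model of the change log images (before image 'X' with reversed key figures, after image ' ', new image 'N'), not the actual activation program.

```python
# Simplified sketch of change-log images written during activation of a standard DSO.
# Key figures in overwrite mode get a before image with reversed sign ('X') plus an
# after image (' '); a record whose key is not yet active gets a new image ('N').

def activation_images(active_row, new_row, key_figures):
    images = []
    if active_row is None:
        images.append({**new_row, "RECORDMODE": "N"})        # brand new key
        return images
    before = dict(active_row)
    before["RECORDMODE"] = "X"                                # before image
    for kf in key_figures:
        before[kf] = -before[kf]                              # reversed sign
    after = dict(new_row)
    after["RECORDMODE"] = " "                                 # after image
    images.extend([before, after])
    return images

old = {"CUSTOMER": "A", "MATERIAL": "M1000", "QUANTITY": 10}
new = {"CUSTOMER": "A", "MATERIAL": "M1000", "QUANTITY": 15}
for img in activation_images(old, new, key_figures=["QUANTITY"]):
    print(img)
# {'CUSTOMER': 'A', 'MATERIAL': 'M1000', 'QUANTITY': -10, 'RECORDMODE': 'X'}
# {'CUSTOMER': 'A', 'MATERIAL': 'M1000', 'QUANTITY': 15, 'RECORDMODE': ' '}
```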

Fig. G - Final change log output

Record modes
The record mode(s) that a particular DataSource uses for the delta mechanism largely depend on the type of the extractor.

Fig. H - Types of record modes (C) SAP. Refer to OSS note 399739 for more details.

Work Scenario
Let's go through a sample real-time scenario. In this example we take the master data objects Customer and Material with a few attributes for demonstration purposes. We define an ODS / DSO as below, where material and customer form the key and the corresponding attributes are data fields. The steps are:
ODS / DSO definition
Definition of the transformation
Flat file loading
Monitoring the entries
Monitoring the activation queue
Monitoring PSA data for comparison
Checking the active data table
Monitoring the change log table
Displaying data in a suitable InfoProvider (e.g. flat file to PSA to DSO to InfoCube)

Note: In 7.0 the status data is written to the active data table in parallel while writing to the change log. This is an advantage of parallel processing, which can be customized globally or at object level in the system.
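As a quick reference, the common 0RECORDMODE values shown in Fig. H can be summarised as below; the mapping is a sketch based on the descriptions above and on OSS note 399739, so verify it against the note for your release.

```python
# Common 0RECORDMODE values used by delta-capable DataSources (see OSS note 399739).
RECORD_MODES = {
    " ": "After image - state of the record after the change",
    "X": "Before image - state before the change, key figures with reversed sign",
    "N": "New image - record created for the first time",
    "A": "Additive image - only the difference of the key figures is delivered",
    "D": "Delete - the record is to be deleted in the target",
    "R": "Reverse image - cancels an existing record",
}

for mode, meaning in RECORD_MODES.items():
    print(f"'{mode}': {meaning}")
```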

Write-Optimized DSO
This blog describes a new DataStore in BI 7.0, the "Write-Optimized DataStore", which supports tracking history at the most detailed level, retains the document status and allows a faster upload without activation.

In a database system, read operations are much more common than write operations, and consequently most database systems have been read-optimized. As the size of main memory increases, more of the read requests are satisfied from the buffer system, and the share of disk writes relative to total disk operations increases. This has turned attention towards write-optimized database systems.

In SAP Business Warehouse, it is necessary to activate the data loaded into a DataStore object to make it visible for reporting or to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object was introduced: the Write-Optimized DataStore object. The objective of this new DataStore is to save data as efficiently as possible and to process it further without activation, without the additional effort of generating SIDs, and without aggregation or data-record-based delta. It is a staging DataStore used for a faster upload. In BI 7.0, three types of DataStore objects exist: 1. Standard DataStore (regular ODS). 2. DataStore object for direct update (APD ODS). 3. Write-Optimized DataStore (new).

In this weblog, I would like to focus on the features, usage and advantages of the Write-Optimized DataStore. The Write-Optimized DSO has been designed primarily to be the initial staging area for source system data, from where the data can be transferred to a standard DSO or an InfoCube.
o The data is saved in the write-optimized DataStore object quickly and is stored in its most granular form. Document headers and items are extracted using a DataSource and stored in the DataStore.
o The data is then immediately written to the further data targets in the architected data mart layer for optimized multidimensional analysis.
The key benefit of using a write-optimized DataStore object is that the data is immediately available for further processing in the active version: you save activation time across the landscape. The system does not generate SIDs for write-optimized DataStore objects, which makes the upload faster. Reporting is also possible on these DataStore objects; however, SAP recommends using the Write-Optimized DataStore as an EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.

Fast EDW Inbound Layer - An Introduction
Data warehousing has developed into an advanced and complex technology. For some time it was assumed that it is sufficient to store data in a star schema optimized for reporting. However, this does not adequately meet the needs of consistency and flexibility in the long run. Therefore data warehouses are structured using a layer architecture, with an enterprise data warehouse layer and an architected data mart layer. These layers contain data at different levels of granularity, as shown in Figure 1.

Figure 1 Enterprise Data Warehouse Layer as a corporate information repository

The benefits of the Enterprise Data Warehouse layer include the following:
Reliability, traceability - prevent silos
o 'Single point of truth'.

o All data has to pass this layer on its path from the source to the summarized, EDW-managed data marts.
Controlled extraction and data staging (transformations, cleansing)
o Data is extracted only once and deployed many times.
o Merging of data that is commonly used together.
Flexibility, reusability and completeness
o The data is not manipulated to please specific project scopes (unflavored).
o Coverage of unexpected ad-hoc requirements.
o The data is not aggregated.
o Normally not used for reporting; used for staging, cleansing and one-time transformation.
o Old versions such as the document status are not overwritten or changed, but useful information may be added.
o Historical completeness - different levels of completeness are possible, from availability of the latest version with change date up to a change history of all versions including extraction history.
o Modeled using Write-Optimized DataStores or standard DataStores.
Integration
o Data is integrated.
o Realization of the corporate data integration strategy.
Architected data marts are used as the analysis and reporting layer, hold aggregated data and data manipulated with business logic, and can be modeled using InfoCubes or MultiCubes.

When is it recommended to use a Write-Optimized DataStore?
Here are the scenarios for the Write-Optimized DataStore (as shown in Figure 2):
o Fast EDW inbound layer. SAP recommends the Write-Optimized DSO as the first layer, called the Enterprise Data Warehouse layer. As not all Business Content comes with this DSO layer, you may need to build your own. You can check table RSDODSO for version D and type "Write-Optimized".
o There is always the need for faster data loads. DSOs can be configured to be write-optimized, so the data load happens faster and the load window is shorter.
o Used where fast loads are essential, for example multiple loads per day or short source-system access times (worldwide system landscapes).
o If the DataSource is not delta enabled. In this case, you would want a Write-Optimized DataStore to be the first stage in BI and then pull the delta request into a cube.
o A write-optimized DataStore object can be used as a temporary storage area for large sets of data when executing complex transformations on this data before it is written to a further DataStore object. Subsequently, the data can be updated to further InfoProviders; you only have to create the complex transformations once for all incoming data.
o Write-optimized DataStore objects can be the staging layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.
o If you want to retain history at request level. In this case you may not need a PSA archive; instead you can use a Write-Optimized DataStore.
o If multidimensional analysis is not required and you want operational reports, you might want to use a Write-Optimized DataStore first and then feed the data into a standard DataStore.
o You can use it as a preliminary landing area for incoming data from different sources.
o If you want to report on daily refreshed data without activation. In this case it can be used in the reporting layer with an InfoSet or MultiProvider.
I have discussed possible scenarios, but it is up to you to decide where this DataStore fits in your data flow.

Typical Data Flow using a Write-Optimized DataStore

Figure 2 Typical data flow using a write-optimized DataStore

Functionality of the Write-Optimized DataStore (as shown in Figure 3)
Only an active data table (DSO key: request ID, packet number and record number):
o No change log table and no activation queue.
o The size of the DataStore is maintainable.
o The technical key is unique.
o Every record gets a new technical key; only inserts are performed.
o Data is stored at request level, like in a PSA table.
No SID generation:
o Reporting is possible (but you need to make sure performance is acceptable).
o BEx reporting is switched off.
o Can be included in an InfoSet or MultiProvider.
o Performance improvement during data load.
Fully integrated in the data flow:
o Used as data source and data target.
o Export into InfoProviders via request delta.
Uniqueness of data:
o Checkbox "Do not check uniqueness of data".
o If this indicator is set, the active table of the DataStore object can contain several records with the same key.
Allows parallel loads. Can be included in a process chain without an activation step. Supports archiving.
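A minimal sketch of the insert-only behaviour: every incoming record receives a new technical key (request, data package, record number), so nothing is overwritten and no activation or SID generation takes place. The class and field names below are invented for illustration only.

```python
# Hypothetical sketch: a write-optimized DSO only appends. The generated technical key
# (0REQUEST, 0DATAPAKID, 0RECORD) is always unique, so duplicates of the semantic key
# are kept as separate rows and history is retained at request level.

class WriteOptimizedDSO:
    def __init__(self):
        self.active_table = []          # only an active table, no change log / activation queue

    def load_request(self, request_id, records, package_size=2):
        for pak_no, start in enumerate(range(0, len(records), package_size), start=1):
            for rec_no, rec in enumerate(records[start:start + package_size], start=1):
                row = {"REQUEST": request_id, "DATAPAKID": pak_no, "RECORD": rec_no, **rec}
                self.active_table.append(row)   # insert only: no overwrite, no activation

dso = WriteOptimizedDSO()
dso.load_request("REQU_001", [{"DOC": "4711", "QTY": 5}, {"DOC": "4711", "QTY": 7}])
dso.load_request("REQU_002", [{"DOC": "4711", "QTY": 9}])
for row in dso.active_table:
    print(row)       # three rows for the same document - the full load history is kept
```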

You cannot use reclustering for write-optimized DataStore objects, since their data is not meant for querying; reclustering is only available for standard DataStore objects and DataStore objects for direct update. The PSA and the write-optimized DSO are two different entities in the data flow, each with its own features and usage: the write-optimized DSO does not replace the PSA, but it allows you to stage or store the data without activation and to apply business rules. A write-optimized DataStore object is automatically partitioned; manual partitioning can be done according to SAP Notes 565725 and 742243. Optimized write performance is achieved by request-level insertions, similar to the F table of an InfoCube (the F fact table is write-optimized while the E fact table is read-optimized).

Figure 3 Overview of the DataStore object types in BI 7.0

To define a Write-Optimized DataStore, just change the Type of DataStore Object to Write-Optimized, as shown in Figure 4.

Figure 4 Technical settings for the Write-Optimized DataStore

Understanding Write-Optimized DataStore keys
Since data is written directly into the active table of the write-optimized DataStore, you do not need to activate the request as is necessary with the standard DataStore object. The loaded data is not aggregated; the history of the data is retained at request level. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, so the aggregation of the data can take place later in standard DataStore objects.

The system generates a unique technical key for the write-optimized DataStore object. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD), as shown in Figure 4. Only new data records are loaded to this key. The standard key fields are not necessary with this type of DataStore object; you can define a Write-Optimized DataStore without a standard key. If standard key fields exist anyway, they are called semantic keys, to distinguish them from the technical key. Semantic keys can be defined as primary keys in a further target DataStore, depending on the requirement. For example, if you are loading data into a schedule-line-level ODS through a Write-Optimized DSO, you can have header, item and schedule line as the semantic keys in your Write-Optimized DSO.

The purpose of the semantic key is to identify errors or duplicates in the incoming records. All subsequent data records with the same key are written to the error stack along with the incorrect data records; they are not updated to the data targets. A maximum of 16 key fields and 749 data fields are permitted. Semantic keys protect data quality, and they do not appear as a key at database level. In order to process erroneous or duplicate records, you must define a semantic group in the DTP (data transfer process), which defines the key used for this evaluation, as shown in Figure 5. If you assume that there are no incoming duplicates or error records, there is no need to define a semantic group; it is not mandatory. The semantic key determines which records are detained during processing. For example, if you define order number and item as the key and you have one erroneous record with order number 123456 item 7, then any other records received in that same request or in subsequent requests with order number 123456 item 7 will also be detained (see the sketch below). This is applicable for duplicate records as well.
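The detention behaviour can be sketched roughly like this; the helper names are hypothetical, and the real logic lives inside the DTP runtime.

```python
# Rough sketch of error-stack detention via the semantic group: once a record with a
# given semantic key (e.g. order number + item) is erroneous, all later records with
# the same key are detained as well, so they can be posted in the correct order later.

def process_package(records, semantic_key, is_erroneous, detained_keys, error_stack, target):
    for rec in records:
        key = tuple(rec[f] for f in semantic_key)
        if key in detained_keys or is_erroneous(rec):
            detained_keys.add(key)        # detain this and every subsequent record with the key
            error_stack.append(rec)
        else:
            target.append(rec)

error_stack, target, detained = [], [], set()
bad = lambda r: r.get("QTY", 0) < 0       # pretend negative quantities are errors
process_package(
    [{"ORDER": "123456", "ITEM": "7", "QTY": -1},    # erroneous record
     {"ORDER": "123456", "ITEM": "7", "QTY": 4},     # valid, but same key -> detained
     {"ORDER": "999999", "ITEM": "1", "QTY": 2}],    # unaffected key -> updated
    semantic_key=["ORDER", "ITEM"], is_erroneous=bad,
    detained_keys=detained, error_stack=error_stack, target=target)
print(len(error_stack), len(target))     # 2 records detained, 1 updated
```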

Figure 5 Semantic group in the data transfer process

The semantic key definition integrates the write-optimized DataStore with the error stack through the semantic group in the DTP, as shown in Figure 5. With SAP NetWeaver 2004s BI SPS10, the write-optimized DataStore object is fully connected to the DTP error stack function. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that you define a semantic key and run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.

Delta Administration
Data that is loaded into Write-Optimized DataStore objects is available immediately for further processing; the activation step that was necessary up to now is no longer required. Note that the loaded data is not aggregated. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object, since each record receives its own unique technical key. The record mode (InfoObject 0RECORDMODE: space, X, A, D, R) responsible for aggregation remains, and the aggregation of the data can take place at a later time in standard DataStore objects or InfoCubes. The Write-Optimized DataStore does not support an image-based delta (0RECORDMODE); it supports a request-level delta, and you get a brand new delta request for each data load. When you load a write-optimized DataStore object, the delta administration is supplied with the change log request and not the load request. Since write-optimized DataStore objects do not have a change log, the system does not create a delta in the sense of a before image and an after image. When you update data into the connected InfoProviders, the system only transfers the requests that have not yet been posted. In order to obtain a before/after-image delta, you must post the latest requests into further targets such as standard DataStore objects or InfoCubes.

Extraction method - Transformations through DTP or Update Rules through an InfoSource
Prior to using a DTP, you must migrate the 3.x DataSource into a BI 7.0 DataSource using transaction code RSDS, as shown in Figure 6.

Figure 6 Migration of a 3.x DataSource to a BI 7.0 DataSource using transaction RSDS; afterwards, replicate the DataSource into BI 7.0.

After the DataSource has been replicated into BI 7.0, you create a data transfer process (DTP) to load data into the Write-Optimized DataStore. Write-optimized DataStore objects can force a check of the semantic key for uniqueness when data is stored. If this option is active and duplicate records (with regard to the semantic key) are loaded, these are logged in the error stack of the data transfer process (DTP) for further evaluation. In BI 7 you also have the option to create an error DTP: if any error occurs, the erroneous data is stored in the error stack, where you can correct it, and when you schedule the error DTP the corrected data is posted to the target. Otherwise, you have to delete the error request from the target and reschedule the DTP. In order to integrate the Write-Optimized DataStore with the error stack, you must define semantic keys in the DataStore definition and create a semantic group in the DTP, as shown in Figure 5. The semantic group definition is also necessary for parallel loads into a Write-Optimized DataStore; you can update write-optimized DataStore objects in parallel after you have implemented OSS note 1007769. When you include a DTP for a write-optimized DataStore object in a process chain, make sure that there is no subsequent activation step for this DataStore. Alternatively, you can connect this DSO through an InfoSource with update rules, using the 3.x functionality.

Reporting on Write-Optimized DataStore Data
For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries; however, in comparison to standard DataStore objects, you can expect slightly worse query performance because the SID values have to be determined during reporting. It is therefore recommended to use these objects as a staging layer and to update the data to standard DataStore objects or InfoCubes. From an OLAP/BEx query perspective there is no big difference between a Write-Optimized DataStore and a standard DataStore: the technical key is not visible for reporting, so the look and feel is just like a regular DataStore. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that it has a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
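The request-level delta described above (as opposed to an image-based delta) can be sketched as follows; the bookkeeping names are invented purely for illustration.

```python
# Sketch of request-level delta: a delta DTP from a write-optimized DSO simply transfers
# every request that has not yet been posted to the connected target - no before/after images.

def delta_to_target(source_requests, delivered_to_target):
    """Return the requests a delta DTP would pick up for this target."""
    new = [req for req in source_requests if req not in delivered_to_target]
    delivered_to_target.update(new)       # remember what has been posted
    return new

source = ["REQU_001", "REQU_002", "REQU_003"]
posted = set()
print(delta_to_target(source, posted))   # first delta run: all existing requests
source.append("REQU_004")
print(delta_to_target(source, posted))   # next run: only the new request
```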

In a nutshell, the Write-Optimized DSO is not meant for reporting unless required; it is a staging DataStore used for faster uploads. Direct reporting on this object is possible without activation, but with performance in mind you should use an InfoSet or MultiProvider.

Conclusion: Using a Write-Optimized DataStore, you have a snapshot of each extraction. This data can be used for trending old KPIs or deriving new KPIs at any time, because the data is stored at request level. This most granular data, by calendar day/time, can be used for slice and dice, data mining, root-cause analysis and behavioral analysis, which helps in better decision making. Moreover, you need not worry about the status of the extracted documents in BI, since the data is stored as of the extraction date/time. For example, the Order-to-Cash or spend-analysis life cycle can be monitored in detail to identify the bottlenecks in the process. Although there is help documentation available from SAP on the Write-Optimized DataStore, I thought it would be useful to write this blog to give a clear view of the Write-Optimized DataStore concept and the typical scenarios of where, when and how to use it; you can customize the data flow / data model as per your reporting or downstream requirements. A more detailed step-by-step technical document will be released soon.

Useful OSS notes: Please check the latest OSS notes / support packages from SAP to overcome any technical difficulties and make sure to implement them.
OSS 1077308: In a write-optimized DataStore object, 0FISCVARNT is treated as a key, even though it is only a semantic key.
OSS 1007769: Parallel updating in write-optimized DataStore objects.
OSS 1128082: P17: DSO: DTP: Write-optimized DSO and parallel DTP loading.
OSS 966002: Integration of write-optimized DataStore in the DTP error stack.
OSS 1054065: Archiving support.

You can attend the SAP class DBW70E BI Delta Enterprise Data Warehousing SAP NetWeaver 2004s, or visit http://www50.sap.com/useducation/

References:
SAP Help documentation
http://help.sap.com/saphelp_nw04s/helpdata/en/f9/45503c242b4a67e10000000a114084/content.htm
http://help.sap.com/saphelp_sem60/helpdata/en/b6/de1c42128a5733e10000000a155106/content.htm
New BI Capabilities in SAP NetWeaver 2004s:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5c46376d-0601-0010-83bfc4f5f140e3d6
Enterprise Data Warehousing - SAP BI:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2b582e94bcf8a
SAP NetWeaver 7.0 Business Intelligence Warehouse Management:
http://www.tacook.co.uk/media/pdf/00075_pre3.pdf

Difference Between Standard and Write Optimized DSO - Technical Settings

DSO activation job log and settings explained
In BI 7.x you have three different kinds of DataStore objects (DSO): standard, write-optimized and direct update. A standard DSO consists of a new data table, an active data table and a change log table, which records the changes. The write-optimized DSO and the DSO for direct update consist of an active table only. In BI 7.x the background process by which data in standard DataStore objects is activated has changed in comparison to BW 3.5 and earlier. In this blog I explain the DSO activation job log and the settings / parameters of transaction RSODSO_SETTINGS, and describe how these parameters influence DSO activation performance. I will not describe the different activation types.

h1. 2. Manual activation of a request
If you have loaded a new request into your standard DSO with a data transfer process (DTP), the data is written to the new data table. You can activate the request manually or within a process chain. If you activate requests manually, you get the following popup

screen:

+Picture 1: Manual DSO Activation+

The button "Activate in Parallel" opens the settings for parallel activation. In this popup you select either dialog or background processing; for background you select the job class and server. For both you define the number of jobs used for parallel processing. By default it is set to '3'. This means you have two jobs that can be scheduled in parallel to activate your data, the BIBCTL* jobs; the third job is needed for controlling the activation process and scheduling the processes - that's the BI_ODSA* job.

h1. 3. BI_ODSA* and BIBCTL* jobs
The main job for activating your data is "*BI_ODSAxxxxxxxxxxxxxxxxxxxxxxxxx*" with a unique 25-character GUID at the end. Let's have a look at the job log in SM37.

Picture 2: Job log for the BI_ODSA* job

Activating the data is done in three steps. First, the system checks the status of the request in the DSO to see whether it can be activated (marked green in the log); if there is another yellow or red request before this request in the DSO, the activation terminates. In a second step the data is checked against archived data (marked blue in the log). In the third step the actual activation of the data takes place (marked red in the log). During step 3 a number of sub-jobs "*BIBCTL_xxxxxxxxxxxxxxxxxxxxxxxx*" with a unique 25-character GUID at the end are scheduled. This is done to get a higher degree of parallelism and thus better performance. But how is the data split up into the BIBCTL* jobs? How does the system know how many jobs should be scheduled? I will answer this question in the next chapter. Often, however, the opposite seems to happen: you set a high parallel degree and start the activation, but the activation of even a few data records takes a long time. In the DSO activation collection note you will find some general hints for DataStore performance. I will show you in chapter 4 which settings can be the reason for these long-running activations. After the last "BIBCTL*" job has been executed, the SID activation is started. Unfortunately this is not written to the job log for each generation step, but only at the end, when the last of your SID generation jobs has finished. Let's look at the details and the performance settings, and how they influence DSO activation, so that you may reduce your DSO activation time.

h1. 4. Transaction for DSO settings
You can view and change the DSO settings with "Goto -> Customizing DataStore" in the manage view of your DSO. You are now in transaction RSODSO_SETTINGS. In [help.sap.com|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/e6/bb01580c9411d5b2df0050da4c74dc/content.htm] you can find some general hints for the runtime parameters of DSOs.

Picture 3: RSODSO_SETTINGS

As you can see, you can make cross-DataStore settings or DataStore-specific settings. I choose cross-DataStore and "Change" for now. A new window opens which is divided into three sections:

Picture 4: Parameters for cross-DataStore settings

Section 1 is for activation, section 2 for SID generation and section 3 for rollback. Let's have a look at the sections one after another.

h1. 5. Settings for activation
In the first section you can set the data package size for activation, the maximum wait time, and change the process parameters. If you click on the button "Change process params" in the part "Parameter for activation", a new popup window opens:

Picture 5: Job parameters

The parameters described here are your default '3' processes provided for parallel processing; by default, background activation is chosen. You can save and transport these settings to your QA or productive system, but be careful: these settings are valid for every DSO in your system. As you can see from Picture 4, the number of data records per data package is set to 20,000 and the wait time for a process is set to 300; this may differ in your system. What does this mean? It simply means that all the records which have to be activated are split into smaller packages with a maximum of 20,000 records each, and a new "BIBCTL*" job is scheduled for each data package.

*One important point: You can change the parameters for data package size and maximum wait time for a process only if there is no request in your DSO. If you have loaded one request and you then change the parameters, the next request will still be loaded with the previous parameter settings. You first have to delete the data in your DSO, change the parameter settings and restart loading.*

The main activation job calculates the number of "BIBCTL*" jobs to be scheduled with this formula:
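The formula itself appears only as an image in the original blog and is not reproduced here; conceptually the job count follows from the package size, roughly as in this hedged sketch (assumed behaviour with invented function and parameter names, not the actual activation program):

```python
import math

# Assumed splitting logic: records waiting in the new-data table are cut into packages
# of at most 'package_size' records; each package is processed by one BIBCTL* job,
# with at most 'parallel_processes - 1' of them running at the same time (one process
# is reserved for the controlling BI_ODSA* job).

def plan_activation(records_to_activate, package_size=20000, parallel_processes=3):
    packages = math.ceil(records_to_activate / package_size)
    workers = max(parallel_processes - 1, 1)
    waves = math.ceil(packages / workers)
    return {"BIBCTL_jobs": packages, "parallel_workers": workers, "scheduling_waves": waves}

print(plan_activation(75000))   # {'BIBCTL_jobs': 4, 'parallel_workers': 2, 'scheduling_waves': 2}
```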

h1. 6. Settings for SID generation
In the section SID generation you can also set the parameters for maximum package size and wait time for processes. With the button "Change process params" the popup described in Picture 5 appears; in this popup you define how many processes will be used for SID generation in parallel (again your default value). The minimum package size describes the minimum number of records that are bundled into one package for SID activation. With the SAP Layered Scalable Architecture (LSA) in mind, you need SID generation for a DSO only if you want to report on it and have queries built on it. Even if you have queries built on top of a DSO without SID generation, missing SIDs will be generated at query execution time, which slows down query execution. For more information on LSA you can watch a really good webinar from the Webinars section. Unfortunately SID generation is set by default when you create a DSO. My recommendation is: +Switch off SID generation for any DSO+! If you use the DataStore object as the consolidation level, SAP recommends that you use the write-optimized DataStore object instead. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation! See the performance tips for details. In the [performance tips for DataStore objects in help.sap.com|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/146cb408461161e10000000a421937/content.htm] you can also find this table showing how the flags Generation of SIDs During Activation and Unique Data Records influence DSO activation:

| Generation of SIDs During Activation | Unique Data Records | Saving in Runtime |
| X | X | approx. 25% |
|  |  | approx. 35% |
|  | X | approx. 45% |

The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjoint characteristic attributes.

h1. 7. Settings for rollback
The last section describes the rollback. Here you set the maximum wait time for rollback processes, and with the button "Change process params" you set the number of processes available for rollback. If anything goes wrong during activation, e.g. your database runs out of table space or an error occurs during SID generation, a rollback is started and your data is reset to the state before activation. The most important parameter is the maximum wait time for rollback: if the time is exceeded, the rollback job is canceled, which could leave your DSO in an unstable state. My recommendation is to set this parameter to a high value. If you have a large amount of data to activate, you should allow at least double the maximum wait time for activation as the rollback wait time, to give your database enough time to execute the rollback and reset your DSO to the state before activation started. The button "Save" saves all your cross-DataStore settings.

h1. 8. DataStore-specific settings
For a DataStore-specific setting you enter your DSO in the input field, as you can see from Picture 3. With this local setting you overwrite the global DSO settings for the selected DSO. Especially if you expect very large DSOs with a lot of records, you can change the parameters here. If you press the button "Change process params", the same popup opens as under the global settings, see Picture 5.

h1. 9. Activation in process chains
I explained the settings for manual activation of requests in a standard DSO. For process chains you have to create a variant for DSO activation as a step in your chain, see Picture 6. In this variant you can set the number of parallel jobs for activation accordingly with the button "Parallel Processing".

Performance issue during DSO request activation

Purpose
This page contains some general tips on how to improve the performance of activating DSO requests.

Overview
These tips will help ensure that your DSO requests are activated with optimal performance.

Tips
DSO activation can be slow if the batch tables are large, as these are run through for object activations. So as a starter, please clean up the batch system with the usual housekeeping tools (report RSBTCDEL2, transaction SM65, etc.); your Basis team will be aware of these and should run them for you. Ensure that the database statistics for the DSO are up to date. If you are not reporting on the DSO, the activation of SIDs is not required (this can take up considerable time during activation); often the logs show that the activation job spends almost all of its time scheduling RSBATCH_EXECUTE_PROCESS as job BIBCTL_*. RSBATCH_EXECUTE_PROCESS is responsible for scheduling and executing the SID-generation process. If you don't need the relevant DSO for reporting and you don't have queries on it, you can remove the reporting flag in the DSO maintenance; this is a good way to speed the process up significantly. Check under 'Settings' in the DSO maintenance whether you have flagged the option "SID Generation upon Activation". By making some adjustments to the DataStore object parameters in transaction RSODSO_SETTINGS you should be able to accelerate the request activation; you can adjust this for all DSOs or for specific ones.

Questions on DSO:
Q1. Hi,

I need a clarification about adding objects to an existing DSO. I want to add two characteristics, one as a key field and the other as a data field, to a DSO that already exists. The characteristics are not present in the DSO now, and this DSO has millions of records in the production system. So my questions are: 1. Is it possible to add InfoObjects to the existing DSO? 2. Even if I am able to add them, will there be any problem with the transports given the millions of records in production? We are using BI 7.0.

A.
a. You can add key fields only to an empty DSO. Even if you add only data fields, you have to reload the data to fill these fields for existing records.
b. 1) Yes, it is possible to add InfoObjects to the existing DSO. 2) There will be no problem in adding the field and transporting it to production, even if the DSO contains millions of records. But if the business requests historical data for the newly added key InfoObject, you need to drop the entire data set and reload it. If historical data is not needed, you are good to go once it is transported to production.

Q2. Can I upload data from two DataSources into the same ODS at the same time? What if one of the two DataSources is RDA?

A.
a. Yes, you can load data from multiple sources into one DSO, but the activation step has to happen at one point in time only. After successfully loading both loads from the sources, you can activate the DSO requests (all) in one go. If any load has invalid/bad data, the activation step will fail; you then need to do a manual correction in the PSA (after deleting the request at DSO level), reload from the PSA to the DSO and activate. If you load through a process chain, you may face a lock issue: for example, with two loads, if one request has less data, its load will finish early and start the activation, but due to the other load it cannot activate and has to wait until that load finishes. You can design your process chain with parallel steps up to the loading of the DSO, and then add the activation step in series.
b. You can very well load your DSO from two DataSources, and you can also load via real-time data acquisition. No issues. But the activation of the DSO requests should be done only after both DataSources have been successfully loaded into the DSO; both requests can then be activated in one shot. You can build your process chain in this fashion.

Q3. I am faced with an issue - I need to load data from two DataSources into one DSO, but I need to end up with one data record. Example: in the first DataSource we have information about accounting documents, G/L account, reference transaction and reference key. In the second DataSource we have the material document number, material and storage location.

First DataSource record:
Accounting Document: 1 | G/L Account: 1 | Ref. TRN: MKPF | Ref Key: 22

Second DataSource record:
Material Document: 22 | Material: pencil | Storage Location: HR

Needed in the DSO as one record:
Accounting Document: 1 | G/L Account: 1 | Ref. TRN: MKPF | Ref Key: 22 | Material Document: 22 | Material: pencil | Storage Location: HR

The data can be joined via material document and reference key (material document = reference key). I tried to create an InfoSource with the desired communication structure, then created a transformation between the InfoSource and the DSO, then configured two data transfer processes - one per DataSource - executed the DTPs and activated the data.

And as a result I have two records instead of one.

A.
a. Your DataSources should have at least one common field, so that you can use these common fields as your DSO key fields. When you load data from the two DataSources you will then get one record; otherwise you will always get two records. Workaround: create two separate DSOs for the two DataSources and build an InfoSet with an inner join on "Ref Key" and "Material Document". That way you will get the required single record.
b. Make a DSO with material as the key field and the remaining fields of the DataSources as data fields. Create a transformation between DataSource 2 and the DSO and map all its fields. Create another transformation between DataSource 1 and the DSO and make sure to map the reference key to the material key field in the DSO. Load the data for both in the sequence mentioned above. This should work and give you the single record you want.
c. Creating two separate DSOs and using an InfoSet will work by joining reference key and material document, but with an InfoSet the query performance will not be good. I suggest going for a third cube or DSO fed from the first DSO's transformation, with a lookup on the second DSO (if the second DSO has fewer InfoObjects).
d. The second DataSource seems to contain material master data, so I would suggest loading it into the attributes of the material InfoObject. Then you add the material to the DSO where you load the data from the first DataSource. You can use the data from the second DataSource as navigational attributes, or load it into the DSO in an end routine of the transformation from DataSource 1 to the DSO if you need the attributes there.

3. Info Cube
4. Multi Provider

5. Info Set
6. PSA: In the persistent staging area (PSA), the structure of the source data is represented by DataSources. The data of a business unit (for example, customer master data or the item data of an order) for a DataSource is stored in a transparent, flat database table, the PSA table. Data storage in the persistent staging area is short- to medium-term. Since it provides the backup status for the subsequent data stores, queries are not possible on this level and this data cannot be archived.
7. Info Package
8. DTP: A DTP determines the process for the transfer of data between two persistent objects within BI. As of SAP NetWeaver 7.0, an InfoPackage loads data from a source system only up to the PSA; it is the DTP that determines the further loading of data thereafter.
Use:
o Loading data from the PSA to InfoProvider(s).
o Transfer of data from one InfoProvider to another within BI.
o Data distribution to a target outside the BI system, e.g. open hubs.
In the process of transferring data within BI, the transformations define the mapping and logic for updating data to the data targets, whereas the extraction mode and update mode are determined by the DTP.
NOTE: A DTP is used to load data within the BI system only, except when it is used in Virtual InfoProvider scenarios, where a DTP can be used to determine a direct data fetch from the source system at run time.
Key benefits of using a DTP over conventional InfoPackage loading:
1. A DTP follows a one-to-one mechanism between a source and a target, i.e. one DTP feeds data to only one data target, whereas an InfoPackage loads data to all data targets at once. This is one of the major advantages over the InfoPackage method, as it enables a number of other benefits.
2. Isolation of data loading from the source into the BI system (PSA) from data loading within the BI system. This makes it possible to schedule data loads to InfoProviders at any time after loading data from the source.
3. Better error handling through the use of the temporary storage area, semantic keys and the error stack.
Extraction
There are two extraction modes for a DTP: Full and Delta.
Full: Update mode Full is the same as in an InfoPackage. It selects all the data available in the source based on the filter conditions defined in the DTP. When the source of the data is one of the InfoProviders below, only the FULL extraction mode is available:
o InfoObjects
o InfoSets
o DataStore objects for direct update
Delta is not possible when the source is any one of the above.

Delta: Unlike an InfoPackage, a delta transfer using a DTP doesn't require an explicit initialization. When a DTP is executed in extraction mode Delta for the first time, all requests existing in the source up to that point are retrieved and the delta is automatically initialized.

The following 3 options are available for a DTP with extraction mode Delta:
I. Only Get Delta Once
II. Get All New Data Request By Request
III. Retrieve Until No More New Data

I. Only get delta once: If this indicator is set, a snapshot scenario is built; the data available in the target is an exact replica of the source data.
Scenario: Consider a scenario wherein data is transferred from a flat file to an InfoCube. The target needs to contain the data from the latest flat-file load only. Each time a new request is loaded, the previous request needs to be deleted from the target: for every new data load, any previous request loaded with the same selection criteria is to be removed from the InfoCube automatically. This is necessary whenever the source delivers only the last status of the key figures, similar to a snapshot of the source data.
Solution - Only Get Delta Once: A DTP with a full load would satisfy the requirement. However, a full DTP is not recommended, because a full DTP loads all the requests from the PSA regardless of whether they were loaded previously or not; to avoid duplicating data, you would always have to schedule a PSA deletion before triggering the full DTP again. Only Get Delta Once does this job in a much more efficient way, as it loads only the latest request (delta) from the PSA to the data target:
1. Delete the previous request from the data target.
2. Load data up to the PSA using a full InfoPackage.
3. Execute the DTP in extraction mode Delta with Only Get Delta Once checked.

The above 3 steps can be incorporated in a process chain, which avoids any manual intervention.

II. Get all new data request by request: If you set this indicator in combination with Retrieve Until No More New Data, the DTP gets the data from one request in the source at a time. When it completes processing, the DTP checks whether the source contains any further new requests; if it does, a new DTP request is automatically generated and processed (see the sketch below). NOTE: If Retrieve Until No More New Data is unchecked, this option automatically changes to Get One Request Only, which in turn gets only one request from the source. Also, once the DTP is activated, the option Retrieve Until No More New Data no longer appears in the DTP maintenance.
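The two behaviours can be contrasted with a small sketch (hypothetical names; the real scheduling is done by the DTP framework):

```python
# Sketch of the delta retrieval behaviour:
#  - "Get All New Data Request By Request" + "Retrieve Until No More New Data":
#    keep generating DTP requests, one source request at a time, until nothing is left.
#  - without "Retrieve Until No More New Data" ("Get One Request Only"):
#    exactly one source request is fetched per DTP execution.

def run_delta_dtp(pending_source_requests, retrieve_until_no_more_new_data=True):
    transferred = []
    while pending_source_requests:
        transferred.append(pending_source_requests.pop(0))   # one DTP request per source request
        if not retrieve_until_no_more_new_data:
            break                                            # "Get One Request Only"
    return transferred

pending = ["REQU_010", "REQU_011", "REQU_012"]
print(run_delta_dtp(list(pending)))                          # all three, request by request
print(run_delta_dtp(list(pending), retrieve_until_no_more_new_data=False))  # only the first
```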

Package Size
The number of data records contained in one individual data package is determined here. The default value is 50,000.

Filter
The selection criteria for fetching the data from the source are determined / restricted by the filter.

We have the following options to restrict a value / range of values:
o Multiple selections
o OLAP variable
o ABAP routine
An icon to the right of the Filter button indicates that filter selections exist for the DTP.

Semantic Groups
Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields: data records that have the same key are combined into a single data package (see the sketch below). This setting is only relevant for DataStore objects with data fields that are overwritten. It also defines the key fields for the error stack; by defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected. An icon to the right of the Semantic Groups button indicates that semantic keys exist for the DTP.
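A rough sketch of how a semantic group influences package building: records that share the same key must end up in the same data package, so that a DataStore object in overwrite mode processes them in the correct order. The function and field names below are illustrative only.

```python
from collections import defaultdict

# Sketch: with a semantic group defined, records sharing the same key are bundled into
# the same data package; a group is never split across packages.

def build_packages(records, group_key, package_limit=2):
    by_key = defaultdict(list)
    for rec in records:
        by_key[tuple(rec[f] for f in group_key)].append(rec)
    packages, current = [], []
    for group in by_key.values():            # whole groups are assigned to packages
        if current and len(current) + len(group) > package_limit:
            packages.append(current)
            current = []
        current.extend(group)
    if current:
        packages.append(current)
    return packages

recs = [{"DOC": "1", "QTY": 5}, {"DOC": "2", "QTY": 1}, {"DOC": "1", "QTY": 8}]
for pkg in build_packages(recs, group_key=["DOC"]):
    print(pkg)    # both DOC '1' records land in the same package
```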

Update

Error Handling
o Deactivated: If an error occurs, the error is reported at package level and not at data-record level. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety. This results in faster processing.
o No Update, No Reporting: If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety.
o Valid Records Update, No Reporting (Request Red): This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that were not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor). The incorrect records are written to a separate error stack in which the records can be edited and updated manually using an error DTP.
o Valid Records Update, Reporting Possible (Request Green): Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out. The incorrect records are written to a separate error stack in which the records can be edited and updated manually using an error DTP.

Error DTP
Erroneous records in a DTP load are written to a stack called the error stack. The error stack is a request-based table (PSA table) into which erroneous data records from a data transfer process (DTP) are written. The error stack is based on the data source (PSA, DSO or InfoCube), that is, records from the source are written to the error stack. In order to upload the data to the data target, we need to correct the data records in the error stack and manually run the error DTP.
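For orientation, the four error-handling options described above can be summarised in a small mapping (a sketch of the behaviour, not an SAP data structure):

```python
# Summary sketch of the DTP error handling options described above.
ERROR_HANDLING = {
    "Deactivated": {
        "error_located_at": "package", "error_stack": False, "valid_records_updated": False},
    "No Update, No Reporting": {
        "error_located_at": "record", "error_stack": False, "valid_records_updated": False},
    "Valid Records Update, No Reporting (Request Red)": {
        "error_located_at": "record", "error_stack": True, "valid_records_updated": True,
        "released_for_reporting": "after manual QM action"},
    "Valid Records Update, Reporting Possible (Request Green)": {
        "error_located_at": "record", "error_stack": True, "valid_records_updated": True,
        "released_for_reporting": "immediately"},
}

for option, behaviour in ERROR_HANDLING.items():
    print(option, "->", behaviour)
```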

Execute

Processing Mode
o Serial Extraction, Immediate Parallel Processing: A request is processed in a background process when the DTP is started in a process chain or manually.
o Serial in Dialog Process (for Debugging): A request is processed in a dialog process when it is started in debug mode from the DTP maintenance. This mode is ideal for simulating the DTP execution in debugging mode. When this mode is selected, we have the option to activate or deactivate session breakpoints at various stages such as extraction, data filtering, error handling, transformation and data target updating. You cannot start requests for real-time data acquisition in debug mode.
Debugging tip: When you want to debug the DTP, you cannot set a session breakpoint in the editor where you write the ABAP code (e.g. the DTP filter). You need to set session breakpoint(s) in the generated program as shown below:

o No Data Transfer; Delta Status in Source: Fetched: This processing mode is available only when the DTP is operated in delta mode. It is similar to a delta initialization without data transfer in an InfoPackage. In this mode the DTP executes directly in dialog. The generated request marks the data found in the source as fetched, but does not actually load any data to the target. We can choose this mode even if the data has already been transferred previously with the DTP.

Delta DTP on a DSO
There are special data transfer options when the data is sourced from a DSO into another data target.

o Active Table (with Archive): The data is read from the DSO's active table and from the archived data.
o Active Table (Without Archive): The data is only read from the active table of the DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.
o Archive (Full Extraction Only): The data is only read from the archive data store; data is not extracted from the active table.
o Change Log: The data is read from the change log and not from the active table of the DSO.
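The four extraction options can be sketched as a simple dispatch over the tables and storage areas that are read (a simplified model for orientation, not the extractor itself):

```python
# Simplified sketch of what a DTP reads, depending on the extraction option chosen
# when the source is a standard DataStore object.

def sources_read(option):
    mapping = {
        "Active Table (with Archive)": ["active table", "archive / near-line storage"],
        "Active Table (Without Archive)": ["active table"],
        "Archive (Full Extraction Only)": ["archive / near-line storage"],
        "Change Log": ["change log"],   # delta images instead of the current status
    }
    return mapping[option]

for opt in ("Active Table (with Archive)", "Change Log"):
    print(opt, "->", sources_read(opt))
```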

Change Status of DTP Request


Sometimes a situation arises where you need to change the status of a request that is being loaded via a DTP. This article describes the solution for this scenario.

Author's Bio
Rahul Bhandare has been working at Patni Computer Systems Ltd as a SAP BW consultant for the last two years. He is mainly involved in development and production support / maintenance work related to SAP BI.

Scenario
Consider the following situations:
1. A DTP load for a DSO is running longer than expected (taking more time to load the data) and hence stays in yellow state for a long time. You want to stop the load to the DSO by changing the status of the loading request from yellow to red manually, but you have already deleted the ongoing background job for the DTP load.
2. A master data load through a DTP failed because the background job for the DTP terminated with a short dump. You want to start a new DTP load but you cannot, because there is a message saying "The old request is still running". You cannot change the status of the old request to red or green because of the message "QM action not allowed for master data", and you cannot delete the old request because of the message "Request cannot be locked for delete".

Solution
When the old request in scenario 1 or 2 is in yellow status and you are not able to change or delete it, it is actually in a pseudo status. The request sits in table RSBKREQUEST with processing type 5 in the data elements USTATE and TSTATE, and this 5 means "Active", which is obviously wrong here. One possible solution is to ask a Basis person to change the status to 3 in both USTATE and TSTATE; this then allows reloading the data. Once the data is successfully loaded, you can delete the previous bad request even though it is still yellow. Once the request is deleted, its status is updated to "4" in table RSBKREQUEST. An alternative solution is to manually change the status of the old request to red or green by using the function module RSBM_GUI_CHANGE_USTATE.

Following are the steps to change the QM status of a yellow request to red/green using RSBM_GUI_CHANGE_USTATE:
1. Select the request ID from the target.

2. Go to SE37 and execute the function module RSBM_GUI_CHANGE_USTATE.

3. Enter Request Id here and execute it.

4. Then change the status of the request to either red/green.

5. The request will now have the status you selected in step 4; delete the request if you turned it red.

9. Error DTP
10. Data Source
11. Transformations
12. Transfer Rules
13. Update Rules
14. LO Cockpit
15. Generic Extraction
16. Delta Management
17. Process Chains:

Triggering a Failed Process Chain Step's Status Manually to Green
In many situations while monitoring process chains, we find that the status of the chain is red, indicating errors during data loading. We check and find that one of the steps in the process chain has failed and its status appears in red. But in fact everything has loaded in order; still, the chain is not moving forward because of the error. Not to worry, as there is a workaround to manually set the step to a success/green status. But it requires us to understand how the chain works. Whenever we create a chain, we create a sequence of variants which are associated with events. It is the events which, when triggered, start a particular instance of a variant, and the step is completed.

Each event is scheduled in relation to the previous step, so as soon as the previous step is completed, it triggers the ensuing step, in the process completing the chain. Therefore, if a particular step has failed inside the chain, or is stuck even though the data has loaded correctly, it will not trigger the next event because of its stuck status, and we have to do this manually. The logical action is to manually turn the status of this step in the chain to green, which then automatically triggers the next event. If only we knew the variant, the instance, and the start date and time of the particular failed step of the chain, we could go directly to the function module and execute it with these details to manually turn the chain status to green and trigger the following events. But alas, it is not humanly possible to remember these every time, given the long BW technical names.

To obtain this information we go to the failed step of the process chain, right-click it, select "Display Messages" and open the tab which reads "Chain". Here we note down the values in the fields Variant and Instance. Next we go to the table which stores the process log entries for process chains, RSPCPROCESSLOG. We use transaction SE16, which takes us to the "Data Browser: Initial Screen", asking for the name of the table whose entries we want to check. Enter RSPCPROCESSLOG and press F7, which displays the selection fields for the table contents. Enter the noted values for the particular instance and the variant run on that day and press F8 to execute. This returns the entry for the failed step with the details needed for subsequent use in the function module. We again take note of the values returned in the fields LOGID, TYPE, VARIANT and INSTANCE. Lastly we go to transaction SE38, which opens the "ABAP Editor: Initial Screen"; enter RSPC_PROCESS_FINISH in the program field and press F8 to execute the function module. Enter the values noted earlier, namely LOGID, TYPE, VARIANT, INSTANCE, batch date and batch time. The important field here is STATE: to turn the step green, select option 'G', which stands for "successfully completed". Execute the function module by pressing F8, and the status of that process step in the chain will be turned to green, triggering the next step in the chain.

In a nutshell:
A) Go to the step where the process chain is stuck. Right-click and use the context menu to display messages. Select the tab "Chain" and copy the variant, instance and start date.
B) Enter transaction code SE16, enter table name RSPCPROCESSLOG and enter the values for Variant, Instance and Batch date. It will return a single row from table RSPCPROCESSLOG.
C) Execute the function module RSPC_PROCESS_FINISH and enter the details below:
INSTANCE
VARIANT
LOGID

Process Chain Monitoring - Approach and Correction Process chain monitoring is an important activity in Business Warehouse management, especially during support. It is therefore necessary to know the different errors which occur during monitoring and to take the necessary actions to correct the process chains so that support runs smoothly. Some points to remember while monitoring:

1. In case of a running local chain, right-click and go to the process monitor to check if there is a failure in any process step. There might be a failure in one of the parallel running processes in the chain. 2. Right-click and select the Display Messages option to learn more about the reason for the failure in any step of the process chain. 3. Try to correct a step which takes longer to complete by comparing it with other processes which are running in parallel. 4. Check the lock entries on the targets in transaction SM12. This will give you an idea of all the locks for the target. 5. Perform an RSRV check to analyze the error and correct it in the relevant scenarios.
Monitoring - Approach and Correction (Description | Approach/Analysis | Correction):
Failure in the Delete Index or Create Index step | Go to the target and check if any load is running | Trigger the chain ahead if the indexes were already deleted in another process chain.
Long running Delete Index or Create Index job | Compare the last run time of the job; if it is taking more time, check the system logs in SM21 and the server processes in SM51 | Inform the BASIS team with the system log and server process details.
Delete Index or Create Index step waiting on a lock | Check the SM12 entries for any locks on the target from some other step | Stop the Delete Index or Create Index step and repeat it once the lock is released.
Attribute change run failure | Check the error message | Check the locks on the master data objects used in the InfoCube and repeat the step.
Roll up failure | Error message: roll up failed due to a master data table lock in the attribute change run | Wait till the attribute change run is completed and start the roll up again.
Roll up failure (BI Accelerator) | Open transaction RSDDBIAMON2 to check for any issues on the BI Accelerator server | Manually delete the indexes of the target and start the load again.
Failure in a full data load with an SQL error | This generally happens due to insufficient memory/space being available while reading data from the tables | Coordinate with the BASIS team to analyze the database space (DB02) and take the necessary action to increase the database size.
Incorrect data records in the PSA table | Check the data in the PSA table for lower case letters, special characters etc. | Ask the concerned R/3 person to correct the data on the R/3 side and load the data again. On an immediate basis, delete the request from the target, correct the data in the PSA table and reconstruct the request. It is very important that the PSA table has complete data before the reconstruction, otherwise not all the data will be updated in the target.
Delta load failed without any error message | Go to the InfoPackage and check whether the last delta load into the target was successful or not | Wait till the last delta load is completed, or correct it, then start the InfoPackage again.
PSA update failed due to job termination or a target lock | - | Make sure that all data records are available in the PSA table and reconstruct the request.
Delta data load failure:
Delta load job failed in the R/3 system or the EDWH system after all data has come to the PSA table | - | Delete the request from the targets and reconstruct it.
Delta load job failed on the R/3 side without data in the PSA table | - | Set the status of the request to red in the process monitor and repeat the delta load. Make sure that data is not doubled, as a repeat delta brings the last successful records as well; check the update mode of all key figures (overwrite or addition) for this.
Delta load job failed in the EDWH system without data in the PSA table | - | Remove the data mart tick from the source below and start the delta package again, or set the status of the load to red in case all key figures are in overwrite mode.
Delta load job not yet started | Check the R/3 job in SM37 if it has not been released | Ask the BASIS team to release the job in the R/3 system.
Long running delta load | Compare with the last successful delta load time and check the R/3 connection | Ask the BASIS team to repair the R/3 connection.
DSO activation:
Activation failure due to an incorrect value in an ODS field | - | Correct the data in the PSA or check the update logic for the corresponding source field.
Activation job termination | - | Try to activate the data manually, or check the jobs in SM37; if most of the activation jobs are failing, contact the BASIS team.
Miscellaneous:
Some process types are not triggered even though the preceding process type is completed, and the triggering line does not show any status | Chain logs take time to update in the chain log table RSPCPROCESSLOG | Check the system performance with the BASIS team as well.

List of useful tables and programs:
Tables:
RSPCPROCESSLOG - Check the process type log
RSPCCHAIN - Check the main chain for a local chain
Programs / function modules:
RSDG_CUBE_ACTIVATE - Activate InfoCube
RSDG_TRFN_ACTIVATE - Activate transformation
RSDG_ODSO_ACTIVATE - Activate DSO
RSDG_IOBJ_ACTIVATE - Activate InfoObject
RS_TRANSTRU_ACTIVATE_ALL - Activate transfer structure
SAP_PSA_PARTNO_CORRECT - Delete the entries from the partition table
RSPC_PROCESS_FINISH - Trigger the next process type (function module)
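As a hedged illustration (not taken from the original text), the repair and activation programs listed above can also be started from a small wrapper report; using SUBMIT ... VIA SELECTION-SCREEN opens the called program's own selection screen, so no assumptions about its parameters are needed. The wrapper name is illustrative.

REPORT z_run_bw_repair_program.

" Name of one of the repair/activation programs listed above.
PARAMETERS p_prog TYPE progname DEFAULT 'RSDG_CUBE_ACTIVATE'.

START-OF-SELECTION.
  " Show the called program's own selection screen and return afterwards.
  SUBMIT (p_prog) VIA SELECTION-SCREEN AND RETURN.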

Hope this will be very useful while monitoring the process chain. Step by step process to clear extract & delta queues during patch/upgrade in the ECC system I am writing this blog to give you the steps to be performed during an ECC system patch/upgrade. Process: In the SAP ECC system, any transaction posted to the database tables also posts entries into the BW-related extract queues (LBWQ or SMQ1) of the LO cockpit. These queues need to be cleared before applying any patches or an upgrade to ECC, to minimize data loss if there are changes in the extract structures. This document shows a step-by-step method to clear the LO queues before applying the patches/upgrade in the SAP ECC system. Note: the job/InfoPackage names given below may vary in your scenario. Procedure for scheduling the V3 jobs in R/3 and the InfoPackages before taking the downtime: 1) Schedule the below mentioned V3 jobs 4-5 hours before taking the downtime, continuously on an hourly basis, in the SAP ECC system. Example jobs: a. LIS-BW-VB_APPLICATION_02_500 b. LIS-BW-VB_APPLICATION_03_500 c. LIS-BW-VB_APPLICATION_11_500 d. LIS-BW-VB_APPLICATION_12_500 e. LIS-BW-VB-APPLICATION_17_500 f. PSD:DLY2330:LIS-BW-VB_APPLICATIO

2) Schedule the below mentioned InfoPackages 4-5 hours before taking the downtime in the SAP BW/BI system (BW client XXX). Example InfoPackage names: a. MM_2LIS_03_BF_RegularDelta_1

b. MM_2LIS_03_UM_Regulardelta_1 c. 2LIS_13_VDKON(DELTA)1 d. Billing Document Item Data:2LIS_13_VDITM:Delta1 e. 2LIS_12_VCHDR(Delta)1 f. 2LIS_12_VCITM(delta)1 g. Sales Document Header Data:2LIS_11_VAHDR: h. Order Item Delta update: 2LIS_11_VAITM: i. Order Alloctn Item Delta1 updat :2LIS_11_V_ITM :

3) Ensure that there is minimal data in the queues (i.e. in SMQ1 or LBWQ and RSA7); if the data volume is very high, schedule the V3 jobs in R/3 and the InfoPackages again. Steps 1 to 3 are to be followed before taking the downtime, to minimize the data extraction time during the downtime for the patch application. 4) After taking the downtime, the SAP Basis team will inform the BW team to clear the queues in the ECC system. 5) Follow this procedure to clear the extract queues (SMQ1 or LBWQ) and the delta queues (RSA7), i.e. before the application of the patches or the upgrade: a) Request the SAP Basis team to lock all users in the SAP ECC system (except the persons clearing the queues) and take a downtime of 45 minutes, or longer depending on your data volume or plan. b) Make sure that all jobs are terminated; nothing should be in Active status except the V3 and BW extraction jobs in the SAP ECC system. c) Take a screenshot of transaction SMQ1 or LBWQ before scheduling the V3 jobs

d) Screen shot of Tr Code: RSA7 before extracting the data to BW

e) Screen shot of LBWE extraction structure

6) Copy the following V3 jobs in the SAP ECC system and schedule them immediately in the downtime, every five minutes, to move data from the extract queues (SMQ1 or LBWQ) to the delta queues (RSA7). Example V3 jobs: LIS-BW-VB_APPLICATION_02_500 LIS-BW-VB_APPLICATION_03_500 LIS-BW-VB_APPLICATION_11_500

LIS-BW-VB_APPLICATION_12_500 LIS-BW-VB-APPLICATION_17_500 PSD:DLY2330:LIS-BW-VB_APPLICATIO 6.1) Delete unwanted queues in the SAP ECC system. Queues such as MCEX04, MCEX17 and MCEX17_1 (in this example) are not being used in the project, hence you need to delete these queues in the ECC system. Deleting procedure: enter transaction SMQ1, select MCEX04 and press the delete button; it will take a few minutes to delete the entries. Follow the same procedure to delete the other queues not required in your project. 7) Then schedule the InfoPackages in SAP BW (client XXX) until the RSA7 entries become 0. Example InfoPackage names:

MM_2LIS_03_BF_RegularDelta_1 MM_2LIS_03_UM_Regulardelta_1 2LIS_13_VDKON(DELTA)1 Billing Document Item Data:2LIS_13_VDITM:Delta1 2LIS_12_VCHDR(Delta)1 2LIS_12_VCITM(delta)1 Sales Document Header Data:2LIS_11_VAHDR: Order Item Delta update: 2LIS_11_VAITM: Order Alloctn Item Delta1 updat :2LIS_11_V_ITM : 8) If the extraction queue (SMQ1 or LBWQ) still has entries, repeat steps 6 to 7 until both the extract queues and the delta queues read zero records (a small programmatic check is sketched after step 17 below). 9) After reaching zero records, repeat steps 6 to 7 once more for double confirmation, to avoid any possible remaining data entries. 10) Screenshot of transaction SMQ1 after it has become zero.

11) Screenshot of transaction RSA7 after it has become zero.

12) After ensuring that SMQ1 or LBWQ and RSA7 read zero entries, release the system to Basis for the upgrade or patch application. 13) After the patch or upgrade is over, the SAP Basis team will inform the SAP BW team to check whether the extract queues and delta queues are being populated or not. 14) Request the SAP Basis team to restore all the V3 jobs in the ECC system to their original schedule and to unlock all the users and system/communication users. 15) Check in transactions SMQ1 and RSA7 whether the entries are getting posted after restoring the V3 jobs in the ECC system. See the screenshot

RSA7

16) Check in transaction LBWE whether all the extract structures are active or not; see the screenshot after the patch application.

17) Schedule any of the InfoPackages in SAP BW (from the above list). See the screenshot
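As a hedged aside (not part of the original procedure), the entry count of the LO cockpit extraction queues (MCEX*) referred to in steps 3 and 8 can also be checked programmatically: the LBWQ/SMQ1 queues are qRFC outbound queues whose entries are stored in table TRFCQOUT. The report name is illustrative; verify the table and field names in your release before relying on this check.

REPORT z_check_lo_queue_entries.

DATA l_count TYPE i.

START-OF-SELECTION.
  " Count the qRFC outbound queue entries of the LO cockpit queues (MCEX*).
  SELECT COUNT( * ) FROM trfcqout INTO l_count
    WHERE qname LIKE 'MCEX%'.

  IF l_count = 0.
    WRITE: / 'Extraction queues (MCEX*) are empty.'.
  ELSE.
    WRITE: / 'Entries still waiting in MCEX* queues:', l_count.
  ENDIF.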

This ends the queue clearing activity. Q: 1. Can you please detail how to run and clear V3 jobs in ECC? 2. Can you please detail how to suspend process chains which are scheduled for the nightly load, or delete/release the PCs... do we need to change the timings in their variants? 3. Post upgrade, how do we reschedule the process chains? A: 1) For scheduling V3 jobs: enter transaction LBWE --> select a particular application area, say Purchasing --> drill down to the extract structures --> check whether all the required DataSources are active or not --> select "Job Control" -->

a new window will open called "Job Maintenance for Collective Update" --> enter the job parameters (start date, time, period values etc.) and the control parameters --> click on the Schedule Job tab to schedule the job. It will create a background job as per the given parameters, like LIS-BW-VB_APPLICATION_02_500. This completes the V3 job scheduling. 2. All suspended process chains will start immediately after the server comes up, but the load on the source system will increase drastically and it will impact transactions. Hence the best practice is to stop all the process chains at the time of the upgrade or patch application and to reschedule all the jobs with suitable timings once the servers are up.
Overview of important BI performance transactions
One of my colleagues asked me how he can quickly check the performance settings in a BI system. That gave me the idea to write a little blog about performance-relevant BI transactions, tables and tasks. I try to compress it onto one page so that you can print it out easily and hang it on your wall.
1. Loading performance transactions: RSMO - request monitor; RSRQ - single request monitor if you have the request ID or SID; RSPC - process chain maintenance; RSPCM - process chain monitor (all process chains at a glance); BWCCMS - BW Computer Center Management System; RSRV - check the system for any inconsistencies; DB02 - database monitor; ST04 - SQL monitor.
2. Reporting performance transactions: RSRT - query debug and runtime monitor; RSTT - execute a query, get traces; ST03 - statistics.
3. Important tables: RSDDSTAT_DM - Data Manager statistics for query execution; RSDDSTATWHM - warehouse management statistics; RSDDSTAT_OLAP - OLAP statistics; RSADMIN - RSADMIN parameters.
4. Mandatory tasks: check the cube Performance tab in RSA1 (indexes and statistics) every day; check ST22 for short dumps and SM37 for failed jobs every morning; check DB02 for table space and growth every day; check the request monitor every morning; check the process chains for errors every day; load and update the technical content every day; run the BI Technical Content queries or check the BI Administrator Cockpit every day. Ok, that's it for now. (A small sketch for reading a single RSADMIN parameter follows below.)
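As a hedged addition (not from the original blog), the RSADMIN table listed above holds simple name/value pairs in its OBJECT and VALUE fields, so a single parameter can be checked with a short read; the report name and the parameter name below are placeholders.

REPORT z_check_rsadmin_param.

" Read one RSADMIN parameter; OBJECT is the parameter name, VALUE its setting.
DATA l_value TYPE rsadmin-value.

START-OF-SELECTION.
  SELECT SINGLE value FROM rsadmin INTO l_value
    WHERE object = 'ZPLACEHOLDER_PARAMETER'.   " placeholder, not a real parameter name

  IF sy-subrc = 0.
    WRITE: / 'Parameter value:', l_value.
  ELSE.
    WRITE: / 'Parameter is not maintained in RSADMIN.'.
  ENDIF.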

Restarting Process Chains How is it possible to restart a process chain at a failed step/request? Sometimes it does not help to just set a request to green status in order to run the process chain from that step to the end. You need to set the failed request/step to green in the database, and you also need to raise the event that forces the process chain to run to the end from the next request/step on. To do this, open the messages of the failed step by right-clicking on it and selecting 'display messages'. In the popup that opens, click on the tab 'Chain'. In a parallel session go to transaction SE16 for table RSPCPROCESSLOG and display the entries with the following selections: 1. copy the variant from the popup to the field VARIANTE of table RSPCPROCESSLOG; 2. copy the instance from the popup to the field INSTANCE; 3. copy the start date from the popup to the field BATCHDATE. Press F8 to display the entries of table RSPCPROCESSLOG. Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run it in test mode. Copy the entries of table RSPCPROCESSLOG to the input parameters of the function module as follows: 1. rspcprocesslog-log_id -> i_logid; 2. rspcprocesslog-type -> i_type; 3. rspcprocesslog-variante -> i_variant; 4. rspcprocesslog-instance -> i_instance; 5. enter 'G' for parameter i_state (sets the status to green). Now press F8 to run the function module. The actual process will be set to green, the following process in the chain will be started, and the chain can run to the end. Of course you can also set the state of a specific step in the chain to any other possible value, such as 'R' = ended with errors, 'F' = finished, 'X' = cancelled, and so on. Check the value help on field RSPCPROCESSLOG-STATE in transaction SE16 for the possible values. Interrupting the Process Chain Scenario: let's say there are three process chains, A, B and C. A and B are the master chains which extract data from different non-SAP source systems. All our BW chains are dependent on a non-SAP source system; when the jobs get completed on the non-SAP systems, a message is sent to the third-party system from where our BW jobs are triggered, event based. Note: the reason the non-SAP system sends a message to the third-party triggering tool is that whenever there is a failure in the non-SAP system, it will not raise an event to kick off the BW chain(s), and we would have to trigger them manually. To avoid this, we use the third-party triggering tool to trigger the chains together using an event. Now C is dependent on both A and B; in other words, C has to be triggered only after A and B are completed. We can achieve this using the Interrupt process type. For example, if process chain A has completed and B is still running, then using the interrupt we can make process chain C wait until both chains have completed.

Lets see step by step. Process Chain A and B

Process chain C Process chain C is dependent on both the A and B chains; we use interrupts (A_interrupt, B_interrupt) which will wait till those chains have completed.

Now let's see how the interrupt works. A_interrupt: interrupting PC C until PC A has completed.

Copy the highlighted Event and Parameters

Enter the above copied Event and Parameter in the Interrupt Process type like below screen

Activate and schedule all three process chains. Note: all three process chains (A, B, C) are triggered based on events. When process chain C is in scheduling, you can see the job BI_INTERRUPT_WAIT in both the A and B chains, like in the screens below.


Three chains (A, B, C) got triggered by the same event. C will wait for both A and B like below.

Thanks for your post. I have a question though, what happens in this scenario. Let's assume A fails and B succeeds. C will wait till both are finished, in which case it will not run that day. Next day both A and B succeeds, will C pick up the data request ID from previous day for A or will it have both data request IDs. Also what does the Reset Interrupt do? Any insight will be helpful. Again thanks for the post and visual explanation. Hi Pratik, Scenario1:Yes, When A fails and B succeeds.. C will wait for A until it gets successfully completed. Note:When a PC gets failed,we will rectify the issue to get it resolved and make the PC successful.otherwise we'll miss that particular day data! Scenario2:C will pick up both A's previous and current load. I have a question.. What if I have two (2) chains: MAIN_PC - executed daily every 3AM SUB_PC - executed daily every 8AM This is the diagram of my SUB_PC chain which starts every 8AM daily: START | INTERRUPT (based on AFTER EVENT - using the EVENT and PARAMETER of last process of MAIN_PC chain) |

ABAP PROGRAM (to send report)

***NOTE: The ABAP PROGRAM in SUB_PC chain must run later than 8AM and if MAIN_PC ran successfully.. Will the INTERRUPT process work before the ABAP PROGRAM in the SUB_PC chain if the MAIN_PC ran before 8AM? How about if MAIN_PC ran around 10AM (because an error occurred during its run), will SUB_PC still work? Loed In your case, ABAP Program in SUB_PC will run only after MAIN_PC is finished and after the scheduled start time of SUB_PC, ie., 8 am). If MAIN_PC finishes after 10 am, then SUB_PC will start after 10 am. Will that happen every day? On your example, the MAIN_PC ran at 10AM so the ABAP PROGRAM will also ran at 10AM..What if the event was triggered again at around 1PM same day? Will the chain re-run? Loed SUB_PC will not run unless you have it periodically scheduled to run multiple times in a day. How often SUB_PC runs, will depend on the periodic values of START_PROCESS of SUB_PC and not the INTERRUPT_EVENT in SUB_PC. So if I put the START PROCESS to run daily it will only run once a day (even if the PERIODIC in INTERRUPT EVENT was ticked)? Is that right? Ok mate..Thanks for clarifying it.. How to schedule process Chain based on after job - Periodically Requirement: Process chain will be scheduled after execution of info package on daily basis There are two scenarios: 1)If you want to execute process chain after job - one time(It can be done directly from start variant after job) 2)if you want to execute process chain after job periodically(Here in your case you need to maintain subsequect process in infopackage) Steps to Follow: 1)If you want to execute process chain after job - one time(It can be done directly from start variant after job) 1) Schedule info package as per below screen

Here you can also mention "cancel job after X runs". Suppose you want to execute the InfoPackage for 5 days, then you should maintain 5; it will create the job for 5 days only. Check in SM37 that the job is scheduled.

2)

Open the created process chain and modify the start variant

Save the variant -> activate the chain. Do not forget to execute the chain. This is for one time only: when the InfoPackage has executed successfully, the process chain will be triggered, but only once. If you want this to happen periodically based on the same job, then: 2) If you want to execute the process chain after the job periodically (in this case you need to maintain a subsequent process in the InfoPackage). Change the InfoPackage as per the screen below and click on Subsequent Process.

Click on Trigger Events -> create a new event. We need to use this event in the process chain start variant. Modify the start variant

As soon as your InfoPackage job is completed, it will trigger the event and the event will trigger the process chain.

Here the process chain is triggered after the successful execution of the InfoPackage.

As the InfoPackage is scheduled on a daily basis, it will create a new job for the next day after the successful execution of the InfoPackage. Each successful run will trigger the event, which achieves the purpose of executing the process chain based on the job.

In this way we are able to schedule a process chain after a job periodically. Appreciate your suggestions and feedback. Actually, I faced this issue when I wanted to execute a process chain based on an event: I triggered the event in SM64 and the chain still did not start, because I had not executed (scheduled) the chain the first time. 18. Scheduling a Process Chain While scheduling a process chain, there are two options: The start process is set to Direct Scheduling: BI_PROCESS_TRIGGER is released with the configured start options. Subsequent application processes are scheduled and released as event-triggered jobs.

The start process is set to Start Using Meta Chain or API: no BI_PROCESS_TRIGGER is scheduled or released. You have to start the process chain via another process chain or via the API (function module RSPC_API_CHAIN_START). This provides the flexibility to schedule process chains with the help of a driver ABAP program (a hedged sketch of such a driver follows). Subsequent application processes are scheduled and released as event-triggered jobs. In this case the Change Selection option is not available.
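A minimal sketch of such a driver program, assuming a custom Z-report and an illustrative chain name (ZPC_SALES_DAILY); i_chain and e_logid are the commonly used parts of the RSPC_API_CHAIN_START interface, which may offer further optional parameters depending on the release.

REPORT z_start_chain_via_api.

" Technical name of a chain whose start process is set to 'Start Using Meta Chain or API'.
PARAMETERS p_chain TYPE rspc_chain DEFAULT 'ZPC_SALES_DAILY'.

DATA l_logid TYPE rspc_logid.

START-OF-SELECTION.
  CALL FUNCTION 'RSPC_API_CHAIN_START'
    EXPORTING
      i_chain = p_chain
    IMPORTING
      e_logid = l_logid
    EXCEPTIONS
      OTHERS  = 1.
  IF sy-subrc = 0.
    WRITE: / 'Chain started, log ID:', l_logid.
  ELSE.
    WRITE: / 'Chain could not be started, sy-subrc =', sy-subrc.
  ENDIF.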

" There Click on Other Period >> there in the Day(S) field enter 1.....

Then Check and Save >> again Check and Save. Then in the Scheduled Start field give the date when the chain is going to run for the first time, and enter the time in the Time field. The process chain will then run on the 1st of every month. Or you can do it in another way: click on the Periodic Values tab >> there click on Monthly >> Check and Save. In the Scheduled Start date field enter the 1st; then the chain will run on the first of every month.

Schedule a process chain at different times during the day Suppose we need to run a Process Chain twice a day daily.........at two different timings......X and Y..... Open the Chain in Planning mode ( RSPC or RSPC1) >> Right click on the Start Process >> Display variant >> Change Selections >> Click on Date/Time tab ....

There in the Scheduled start field give the date when the Chain is going to run for the first time.............and Enter the time in the Time field.......X Then click on the Periodic value tab and Click on Daily.......

Then Check and Save >> again Check and Save. Activate and schedule the chain. Now right-click on the Start process >> Display Scheduled Jobs >> there you will find one released job (BI_PROCESS_TRIGGER) >> select the job >> in the Job menu at the top, click on it and select Repeat Scheduling.

There click on the Date/Time tab, maintain the date and enter time Y, select the Periodic Job checkbox >> Check and Save, then come out. You will then find that the Start process is shown in red, which means that the process chain is scheduled multiple times, i.e. it has more than one release job. Suppose you want to run the chain 5 or 6 times a day: you can do it in this way, or you can do it in another way. Create an event in SM62. Then go to SM36, give a job name >> click on Steps >> click on ABAP Program >> give the program name BTC_EVENT_RAISE >> in the Variant field enter the event created in SM62 >> Check and Save.

Then click on the Start Condition >> click on the Date/Time tab >> give the date and the first time. In this way create 5 different jobs with 5 different timings. Maintain this event in the start condition of the process chain. Note: BTC_EVENT_RAISE is available in BI 7.0; for BW 3.x, create a program using function module "BP_EVENT_RAISE" and pass the event name as a parameter, then save this as a variant. Check SAP Note 919458 for documentation. Schedule a Process Chain to run only for X number of days of a month Suppose we need to run a process chain on the first 15 days and the last two days of every month. In this case, create a factory calendar using transaction SCAL and there declare the first 15 days and the last two days as working days and all the rest as holidays. Right-click on the Start process >> Display Variant >> Change Selections >> click on the Date/Time tab >> click on the Restriction tab (F6) >> there maintain the calendar ID and select Do Not Execute on Sundays or Holidays >> then Check and Transfer. Then in the Scheduled Start field give the date when the chain is going to run for the first time. The process chain will then only run as per the above requirement.

Trigger Event from R/3 to BW Suppose we want to schedule a process chain as event based, and this event is to be triggered from R/3 to BW. In the BW system: 1. Create a new Z-program with parameters for event ID and event parameter and, in the code, use the function module BP_EVENT_RAISE to raise the event with the parameter (a minimal sketch of such a program follows this list). Create a transaction ZEVENT for this program. 2. Create a process chain to load data from R/3 to BW. 3. Schedule this process chain after the event and parameter; activate and schedule the process chain. 4. Now run SHDB to create BDC code for the transaction ZEVENT with the event and event parameter values. Now BW is ready to load. In the R/3 system: 1. Call function RFC_CALL_TRANSACTION with the transaction 'ZEVENT' and the BDC table with the correct event and parameter, using your BW system as the destination. 2. Check SM59 for the BW connection. This function executes the transaction in the BW system, which raises the required trigger, starts the process chain and loads the data.
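A minimal sketch of the Z-program from step 1 of the BW-side list above, to be attached to transaction ZEVENT; the report name and the data elements BTCEVENTID/BTCEVTPARM are assumptions, so treat this as an outline rather than a finished implementation.

REPORT z_raise_bw_event.

" Event ID and parameter are supplied on the selection screen (filled via BDC from R/3).
PARAMETERS: p_event TYPE btceventid OBLIGATORY,
            p_parm  TYPE btcevtparm.

START-OF-SELECTION.
  " Raise the background event; a chain whose start condition waits on it is triggered.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid   = p_event
      eventparm = p_parm
    EXCEPTIONS
      OTHERS    = 1.
  IF sy-subrc <> 0.
    WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
  ELSE.
    WRITE: / 'Event', p_event, 'raised.'.
  ENDIF.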

19. Re-Designing of Process Chains 20. De-Scheduling and Re-Scheduling of Process Chains Q. Hi Guys, I have 76 process chains that run every day in the BWD system, so I removed all the timings from the start variant and in the InfoPackages, but it seems some process chains are still running. I did the following steps: changed the time from a particular time to immediate, Save, Check and Activate; I did not schedule it as I don't want it to load right away. Do I have to hit Check, Save, Schedule once I change the time from a particular time to immediate, or will Activate work as well? What else could be the cause of process chains still loading even though there are no events or times given to any process chains at all? I went to SM37 and I can see the BI processes are all running at the time they were originally set up on. Any suggestions would help; is there any way from SM37 I can tell which process chain is triggered? A. The start conditions on the InfoPackages do not affect when they run in process chains. The process chains have their own start job. If you want to change the start time of a process chain, you need to go to maintain the process chain, right-click on the Start process and select Display Scheduled Job(s). Here you will see a job called BI_PROCESS_TRIGGER; change the start conditions of this job. The start job for all process chains is BI_PROCESS_TRIGGER, so using SM37 it may be difficult to identify which BI_PROCESS_TRIGGER job is associated with a particular chain. Also, each process in a process chain has an associated job: all jobs called BI_PROCESS_xxxxx are jobs associated with a process chain, e.g. BI_PROCESS_LOADING is an InfoPackage and BI_PROCESS_TRIGGER is the start job. Q2. I have a question. Suppose we need to schedule the process chain so that for the first 10 days we extract data 8 times a day and for the remaining days of the month we extract data once a day. How can we achieve that in a process chain? A. Your requirement can be fulfilled by creating the process chain based on an event. You can do this with the following steps: SM64 ---> create the event ---> go to your process chain ---> maintain the start variant ---> event based ---> give that event name here. Now in SM36 schedule two jobs which will trigger that event, say Job 1 and Job 2. Job 1 will trigger 8 times a day, say hourly depending on the business requirement when the delta or data gets ready, and will run for the first 10 days of every month; the second job, Job 2, will run after that on a daily basis. If you want to know how to create the job that triggers the event which triggers the process chain, here are the steps: i) Make your process chain trigger based on the event; now your process chain will trigger once the event is raised. ii) Let us take the below PC as an example: ZL_TD_HCM_004 -- this PC runs after the event 'START_ZL_TD_HCM_004'. iii) Go to transaction SM36

Here we define the background job, which will be available in SM37 after saving it. iv) It will ask for an ABAP program to be entered; give it as Z_START_BI_EVENT and select the variant from the list (based on the process chain, you can select it). v) Then select Start Conditions, give the start time of the process chain and select the periodicity. vi) Save the newly created job in SM36; it will now be available in SM37.
Q. we do the BW process chain scheduling from our CPS system (NW 7.00 CPS = Job Scheduling by

Redwood) from CPS we schedule a batch job in our SAP-ERP (NW-7.01 ECC EHP4) system, running some steps and then (from the ERP-system) starting a remote process chain => which starts a process chain in our BWsystem (NW-7.01 BW). but - since several days, we had a duplicate process chain running and we do not know, why (in the BW-system) exists a released Job to start the BW-chain at the same time, as the CPS initiated Job => ERP => BW should run. we recognized an unwanted job in the BW-system (released Job with interval daily) - how to get rid of this unwanted process chain start in BW, because the BW process chain start should only initiated from CPS => ERP => BW. first try: in the BW-system, we can see in the context men of the chain => show all jobs => SM37 a new released Job => we changed this job to planed => we deleted this job !!! the next day, in the BW-system, we can see in the context men of the chain => show all jobs => SM37 a new released Job => why ?? second try: in RSPC - showing the process chain => Menu-option => remove from schedule the next day, in the BW-system, we can see in the context men of the chain => show all jobs => SM37 a new released Job => why ?? so - no more ideas: how to get rid of these unwanted process chain start in BW A. for all users, who will find the BW scheduling of process-chains just as unlogical and confusing, here are some explanations, which works as follows: 1) removing of periodical scheduling: please proceed as follows: - TA RSPC - Choose a chain - Click (right click - context menu) of the start process - display variant - Switch to change mode - Change selection (of the variant) - remove the 'job run periodically' checkbox-entry - save or activate after these steps, the periodic schedule is off, but .......the chain itself, however, is still shown with a green background and is still scheduled (has a released scheduling job) 2) action "remove from schedule" please proceed as follows:

- TA RSPC - choose a process chain - menu bar Execution => "remove from schedule". Here the name of the function could be misunderstood, because it does not remove the BW chain from scheduling in general, but only removes it from the next scheduled run. So, if you do not do step 1, the process chain will be scheduled for the next run again. You therefore have to do both steps - remove the periodic scheduling (in the variant) and remove the process chain (not the variant) from the schedule - to cancel/delete the released job. 21. Aggregates 22. Attribute Change Run 23. Compression 24. Roll up 25. Indexes 26. Partitioning 27. Transport Management: When you release a task, the object entries it contains are copied to the object list of its change request. The objects are still locked. This means that only those users involved in the change request can edit the objects. Once you have released all the tasks in a change request, you can release the change request itself. At this point, the object list of the change request contains all objects that have been changed by the users involved.

The following occurs when the change request is released:

The current version of all objects entered in the request is stored in the version database. This means that the sequence of the change requests under which an object is edited corresponds to the various object versions archived in the database. If the change request contains repairs, the repair flag for these objects is reset when you release the request, as long as no sub-objects are locked in another change request. If this is the case, the objects repair flag will be reset only when the last change request is released. After the repair flag has been reset, the objects are no longer protected from being overwritten by imports into the system.

When you release a transportable change request, its objects are exported out of the SAP System and copied to an operating system file. The request is also marked for import into the target system. The objects entered in the request are unlocked. They can now be changed again.

For this process in the Development system, CTS (Change and Transport System) is responsible. So To ensure a smooth export, the CTS must be correctly installed. Importing a Change Request into the Target System A transportable request is not automatically imported into the target system, since this could disrupt work significantly, particularly if the target system is an SAP System used for productive operation. The import must be started by the system administrator at an appropriate point in time. The system administrator can decide whether only particular requests or all requests waiting for import should be imported into the target system. If a consolidation route leading from the current system is defined in the TMS for this transport layer, then the object is recorded in a task belonging to a transportable change request. If no consolidation route leading from the current system is defined in the TMS for this transport layer, then the object is recorded in a task belonging to a local change request. If the change request is transportable, the target of the request must be the same as the consolidation target of the object. If the current system is the original system of the object, the object will be assigned to a task of the type correction. If the current system is not the original system of the object, the object will be assigned to a task of the type repair.

Lock of Objects (Related to Transport): The first user who starts work on an object has to specify a change request where the work will be recorded. This locks the object so that it can only be changed by users who are working on a task within the change request.

All users working on the object have a corresponding entry in the object list of their task. This enables you to determine which users have actually edited the object. Object locks are deleted when change requests are released. The objects can then be edited again by all developers within the Transport Organizer. You can usually only release requests when all the objects they contain have been locked. This makes sure that only released object versions are exported and that inconsistent intermediate versions are not imported into other SAP Systems. If we want to change the object by an user (Ex: U_EAST) which was modified and already transported by different user (Ex: SAPUSER), the system will ask like this.

And if you try to assign a new task for the same request of user SAPUSER, an 'Object was locked by user' error will be generated.

How to collect and transport SAP BI objects in a correct way? Summary: This document provides the directives needed to collect & reorganize transport requests from an ECC system to a BW system. Introduction: The transport procedures from an ECC system to a BW system are covered in this document. Assumption taken into account is the Transport flow process from a Dev - > Qual -> Prod environment finally. We will not go into the basics of "What is a Request/ Package?", as most BI persons will be familiar with the terms.

In the below sections we will look into a scenario of a dataflow, ranging from a Datasource with enhancement of fields to the transformation, DTP, DSO's, Info providers & the corresponding reports associated with it. Transport collection and sequencing in ECC: Collect your changes related to table, views, datasources, append structure, CMOD in a single request. Collecting in a single request will not take much time while transporting to target system. Make sure you have properly commented in the description of the transport request mentioning in short what the transport request contains. E.g.: If it is a CR which involves changes in CMOD and changes in the Datasource, I would mention as CRXXXX - Chng in CMOD, Enhancement for 2LIS_02_ITM A look at the description will give me in instance what this request contains. Transport collection and sequencing in BI: Collect in the following order and make sure you collect them in separate requests. 1.Info Area 2.Info Object Catalog 3.Info Objects E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX - Info Area & Info Object Catalog CRXXXX InfoObjects OR You can combine them both. Info Providers: 4.Info Cubes 5.Multi Providers 6.Info Sets 7.Data Store Objects 8.Info Cube Aggregate E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX InfoCubes CRXXXX --DSOs CRXXXX --Multi Providers Note: Make sure you transport Info Cube Aggregates, Multiproviders and Infosets after transport of corresponding InfoCubes & DSOs. Datasources: 9.Application Components 10.Data Source replica 11.Data sources (Active version) E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX --Datasources Note: Application Components can be placed within this same request. Rules: This contains of: 12.Communication Structure 13.Transformations 14.Info Source Transaction data 15.Transfer Structure 16.Transfer Rules 17.Routines & BW Formulas used in the Transfer routines


18.Extract Structures 19.Update Rules, which may have: 20.Routines associated with them. E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX --Transformations CRXXXX --Update Rules & Transfer Rules CRXXXX --Infosource Note: If you need to collect before & after image of the transformation, collect in a same transport request. Info Packages & DTPs: 21.Info Packages 22.DTPs E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX --Infopackages & DTPs Note: This can be collected the same request or broken into Infopackages / DTPs transport requests. Process Chains: 23.Process Chains 24.Process Chain Starter 25.Process Variants 26.Events E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX --Process Chain NAME OF PROCES CHAIN Note: Recommended to collect all objects of process chain in the same requestPre-requisite Make sure the DTPs and Infopackages are transported to the target & are active. Reports/Queries: 27.Variables 28.Calculated Key Figures/Formula 29.Restricted Key Figures 30.Structures 31.Query 32.Work Books 33.Web Templates E.g.: Consider for a CR/Module Im collecting the request, the description would be CRXXXX --Query Name description Note: 1) Include in the description technical name if desired. 2) Workbooks could be collected in the same or different request. 3) All the query objects are automatically store in BEx request. Make sure you remove only the contents of the query from the BEx request and then align the objects to the assigned transport request. Transport Import Sequence: 1.Collect objects from ECC side and make sure it is transported successfully. 2.Collect BI Objects in the below sequence Info Area/Info Object Catalog/Info Objects. Info Cubes/Data Store Objects. Info Cube Aggregate/Multi Providers/Info Sets. Data sources/Application Components/Data Source replica. Transformations/Update Rules - Routines associated with them /Infosource.

6. Info Packages & DTPs. 7. Process Chains 8. Reports/Queries. Important points: Transport Collection Timing Do not start transporting until the development is stable. When new objects are created (e.g. InfoObjects, DSOs) these are by default created as local objects ($TMP). Leave all new objects as $TMP until they are absolutely ready for transport. Transport Request Deletion Do not delete any transport Requests in the DEV system. If you do not require a transport request to be transported, rename and append DO NOT TRANSPORT in the name. Make sure that the ECC transports that are required by the BW transports are successfully imported before the BW transports Make sure that ECC DataSources are replicated prior to initiating the BW transports. Maintain a Transport Status Log A transport log should be maintained in order to track the status of transports. Q: I've released the task to QA and the QA system had some problem and did not get the task, what I should I do now, release the task again or ? And can a report be assigned to different tasks(numbers) under the same transport request. can the report be assigned to different transport requests. please let me know. A: a). you have to release the complete request and not only one task. Normally there is no way to get one report into different requests, but you can get it into different tasks in the same request. b). There is a transport route defined by your Basis Team for the transport scenarios. There, the possible ways through which request can be moved( from Dev --> QA & Dev --> Prod ) are defined. But it does not mean that you can't transport your request to production without transporting it to QAS. Further, when you release a transport request, it does not automatically move to QAS or Production but someone from your Basis Team has to run the TP through T-Code STMS, which in turn imports the objects from your Dev Request and put it on QAS / Production. In case of any error encountered while importing to let's say QA, you need to check whether it's a problem with the transport buffer or is it because some objects are missing from the transports or dependent transports are not moved as yet. Please understand that there is no concept of re-release. You can only ask your Basis guy to re-import it. According to the encountered problem, you need to decide whether you should just ask you Basis consultant to re-import the same Transport. Or create a new request through SE10, and include all the objects from your last request. Release it and ask your Basis guy to import it to QAS. Or first move the dependent transport and then ask your Basis guy to re-import your current transport. Or create and group the changed objects, altogether in a new transport by changing all the objects which were contained in the original request.

Further, to your other queries: if two users are modifying the same program (not exactly at the same time - that's not possible - but one after the other), then that program will be contained in two separate tasks of the same transport request. TP is a command that runs at OS level to transfer (import) requests; it is just an executable program provided by SAP for the OS level and can be run directly at OS level or via transaction STMS.

28. Queries 29. Calculated Key Figures 30. Restricted Key Figures 31. Structures 32. Variables 33. Exceptions 34. Conditions 35. Report to Report Interface 36. Selective Deletion 37. Full Repair 38. House Keeping Jobs 39. Re-initializing Delta Loads 40. Master Data:

BW Master Data Deltas Demystified


Within the BW forum community there are quite a few posts relating to the delta mechanism for master data extracts from R/3; these posts tend to be of the kind "My InfoPackage has gone red and it says repeat delta is not available", to which a stock answer is: reinitialise... Naturally, this is not the correct answer, and this short weblog will help explain the background, the processing and, more importantly, how to request a failed master data delta. How to identify delta-enabled Master Data extractors By running transaction RSA2 for a master data extractor we can identify whether the extractor uses ALE change pointer technology. We are going to follow the example of 0MATERIAL_ATTR for the rest of the weblog.

In the example above we can see that the delta technology is via change pointers, even though the Delta Process may be marked as E (non-specific) or A (ALE). Process type E then utilises a function module (MDEX_CUSTOMER_MD) to read the change pointer tables. Normally, within the function module there will be a split of the processing depending on the type of run: WHEN FULL and WHEN INIT read the base table MARA, WHEN DELTA reads the ALE change pointer tables. How to identify the Message Type The message type is the key to all future delta processing by ALE change pointers. We can find the generated message type by reading the ROOSGEN table in the source system.

The generated Message Type will be different for all source systems within the transport landscape. How to Ensure The Message Type is Active Now we have the Message Type for 0MATERIAL_ATTR (RS0045), we can now check to see if it is active in table TBDA2.

How R/3 Posts ALE Change Pointer Records for BW When a user creates or changes a master data record, and the base table has a BW message type associated with it, the ALE change pointer tables will be updated. In this example, MM01 is used to create an R/3 material master record. This posts not only to MARA but also to table BDCPV.

The message type, RS0045, identifies it as a 0MATERIAL_ATTR change, and this record is a new record creation. (changeid = I) In addition a record is posted to table BDCPS.

The important field here is the process indicator field; this is initially set to blank. How BW Extracts Master Data As explained previously, the extractor function module will utilise different methods for reading the data, either from the base tables or from the ALE change pointer tables. The difference between FULL and INIT is purely down to the set-up of the initial entry in RSA7. RSA7 will show 0 records even though there may be deltas, because the data is read at run time and is not posted to the BW delta queue within the save transaction.

For an INIT or FULL InfoPackage, the data originates from the base table MARA. For a delta InfoPackage, all change pointer records with a blank process indicator are read from BDCPS for the message type associated with the DataSource; BDCPV is then read for those change pointer records, and finally the data from MARA is read to fill the rest of the extract structure. Once the tRFC scheduler has sent the packages to BW, the process indicator in BDCPS is set to X. A simplified sketch of this read path is shown below.
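The following is a simplified, hedged sketch of the delta read path just described; it is not the standard extractor itself (the real MDEX_* function modules also handle packaging and the tRFC transfer), and the BDCPS/BDCPV field names used here (MESTYPE, PROCESS, CPIDENT, CDOBJID) should be verified in SE11 for your release.

REPORT z_sketch_md_delta_read.

DATA: lt_bdcps TYPE STANDARD TABLE OF bdcps,
      lt_bdcpv TYPE STANDARD TABLE OF bdcpv.

START-OF-SELECTION.
  " 1) Unprocessed change pointers for the generated message type (process indicator blank).
  SELECT * FROM bdcps INTO TABLE lt_bdcps
    WHERE mestype = 'RS0045'
      AND process = ' '.

  IF lt_bdcps IS NOT INITIAL.
    " 2) The change pointer detail records (changed table, field and object keys).
    SELECT * FROM bdcpv INTO TABLE lt_bdcpv
      FOR ALL ENTRIES IN lt_bdcps
      WHERE cpident = lt_bdcps-cpident.

    " 3) The extractor would now read MARA for the changed objects (CDOBJID holds the
    "    material number) to fill the rest of the extract structure, and after a
    "    successful tRFC transfer it would set BDCPS-PROCESS to 'X' - omitted here.
  ENDIF.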

How to Request a Failed Delta If you read the last sentence of the previous paragraph, you will notice the "oh, what if..." scenario starting to raise its head: because the process indicators have been set to X, and there is nothing in RSA7, how do you request a failed delta? The solution is to run program RSA1BDCP in the source system. The ABAP does not validate the selection screen, so I strongly advise you to validate and fill in all fields. If a field is left blank, all data will be reset and thus your next delta will be a bit larger than intended! This ABAP resets the process indicator field on BDCPS to blank from a given date forward. The process to request a failed delta is therefore: set the incorrect InfoPackage to green; run RSA1BDCP for a date greater than the last successful delta; rerun the delta InfoPackage. Clearing Up The Change Pointers All of those BDCPV and BDCPS records take up a lot of space very quickly. The normal resolution is to delete obsolete and already processed change records.

Luckily, transaction BD22 will do exactly this - you should set up an ABAP variant for your message type and run this periodically. In an SAP Retail environment you may end up running this every day with a deletion date of 3 days (to ensure a repeat delta remains feasible). Go Live and Reinitialisation Problems If you have been reading this far, you will have noticed one glaring problem. If we have 1,000,000 MARA records which we initialised, the data comes straight from MARA. What then is the size of our first delta? Answer: 1,000,000 MARA records... Exactly - all those BDCPS records still have a blank process indicator; remember, INIT and FULL do not touch the ALE change pointer tables! There are a number of ways to address this. You could of course ignore it, but if you have aggregates you will have a bit of pain on your next change run. Or we could anticipate the problem and either: 1. write an ABAP to update BDCPS to X, 2. do an early initialisation before all the MARA records are created, or 3. clear up the change pointers. Q1. Thanks for the great blog! But I have a doubt regarding the master data delta for the 0MAT_VEND_ATTR
datasource. There seems to be no standard message types attached to the above DS (Checked the table ROOSGEN and found the message type field is blank) .Hence the delta extraction seems to be time consuming than the full upload. For eg. Delta extraction of just 2000 records takes more than 5 hours to complete with more than 100 data packages(less number of records in each package),but Full extraction of the same with more than 1 million records takes only 30 to 40 minutes to complete(more than 100 data packages but with more number of records). Is there any standard workaround for this?? NB: By checking RSO2, found that the extraction happens through a Function Module MEBW_INFORECORD_GET_DATA.

A). That is because it doesn't use the change pointer functionality at all. It does reads of CDHDR and CDPOS
instead. I would suggest you trace the SQL to see where the time is taken and take remedial aspects to resolve it (ie do a archiving run on CDHDR, CDPOS or do as some others have done - create a generic data source with your own change pointers to resolve the issue) Q2. This is good information. It would have saved me several hours of investigation on my own. The next question often seen in many forum posts goes along these lines: "I have enhanced the 0MATERIAL_ATTR datasource with added fields. These new fields may be standard fields from MARA/MARC, or they may be appended to MARA/MARC. How can changes in these field values be collected in a delta extraction?" The changes to field values are posted in change documents and BDCPV, etc. So presumably the new fields *just* need to be added to the message type in table TBD62.... A. Thanks for the comments - I have had a look at the different ways of doing this - including the fix you mentioned But currently I am suggesting for my developers that they use a BTE to write the change pointer manually for field changes - and keep a record of the new fields inside a BW table ie a BW version of TVARC that can hold field names - then the BTE in the save document in R3 calls the table and checks the y and x internal tables before firing the change document. In addition if there are LESS fields then I disable the change pointer and fire a BTE to the BW delta queue directly (as per the How to Guides on SDN).. Many will ask the reason why... from my point of view it is risk mitigation.. in a upgrade there may be more or less fields added to the T* tables that control the change pointers.. and I have no control over that The two ways above - I do have control

Extraction: Did you empty your delta queue by running the delta loads twice to BI? Step 1: Go to RSA5 and activate the required AC (11) DataSource. Step 2: Go to LBWE, maintain/generate/set the update mode of the DataSource and select the Job Control. Step 3: Check SMQ1/LBWQ and RSA7. Step 4: Schedule the V3 job to move data from LBWQ to RSA7. Step 5: Run a delta load to BI. Step 6: Repeat Step 4. Step 7: Repeat Step 5. Step 8: Delete the setup tables with LBWG. Step 9: Fill the setup tables with OLI7BW; you can select Orders/Notifications or both. Step 10: Run a repair full load to BW from the setup tables. Step 11: You can initialize without data transfer. Step 12: Thus, the subsequent delta will run fine.

The consultant is required to have access to the following transactions in R/3: 1. ST22 2. SM37 3. SM58 4. SM51 5. RSA7 6. SM13 Authorizations for the following transactions are required in BW: 1. RSA1 2. SM37 3. ST22 4. ST04 5. SE38 6. SE37 7. SM12 8. RSKC 9. SM51 10. RSRV The Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains. The Upload Monitor (transaction RSMO, or RSRQ if the request is known) monitors data loads. The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for system performance. The OS Monitor (transaction ST06) gives you an overview of the current CPU, memory, I/O and network load on an application server instance.

The database monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indices. The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data. The ABAP runtime analysis (transaction SE30) analyzes the runtime of ABAP programs. The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows, among other things, the cache size and the currently cached queries. The Export/Import Shared buffer determines the cache size; it should be at least 40MB.
