
Mohan (Belandhur Lake) Real-time consultant

1. What is Line Item Dimension?


A line item dimension contains precisely one characteristic. This means that the system does not
create a dimension table. Instead, the SID table takes on the role of the dimension table. Removing the
dimension table has the following advantage: when loading transaction data, no dimension IDs are
generated for entries in the dimension table. This number range operation can otherwise hurt
performance, precisely in the case where a degenerated dimension is involved.
2. What is High Cardinality?
This means that the dimension will have a very large number of distinct values (instances). This
information is used to carry out optimizations on a physical level, depending on the database platform;
different index types are used than is normally the case. A general rule is that a dimension has high
cardinality when the number of dimension entries is at least 20% of the fact table entries.
3. Failed Delta Update / Repair and repair full request in InfoPackage level.
Mostly, Full Repair requests are used to "fill in the gaps" where delta extractions did not extract all
delta records (this does happen from time to time), where a delta extraction process failed and could not
be restarted, or to re-synchronize the data in BW with the source system.
These requests are also useful when we need to initially extract large volumes of data: we execute an
Init without Data Transfer and then execute multiple parallel InfoPackages that are Full Repair requests
with specific selection criteria.
4. GAP Analysis?
Gap analysis is usually prepared fairly early on in the project, usually concurrent with Business
Blueprint, or sometimes immediately after Business Blueprint.
In the beginning of a project, the team performs a study to determine the business requirements from
the client. These are documented in as much detail as possible in the BB. Usually, the client is asked
to sign off on the completeness of requirements, and limit the scope of the project.
After the requirements are known, the team then begins documentation of the solutions. In some
projects, prototyping is begun. In some projects, the solutions are straightforward and satisfy all of
the known business requirements. However, some of the requirements are difficult to satisfy; the
solution is not immediately evident. At this point, there will then be preparation of a GA document.
Here, the troublesome requirements are outlined, and the lack of a solution is documented. In some
GA docs, several possible solutions are outlined. In others, the only statement that is made is that a
customized solution would be required. In some GA docs, there may be a risk analysis (what happens
if no solution is found; what happens if a partial solution is implemented; what is the risk that a
solution cannot be found at a reasonable investment).

The GA doc is then presented to the client, who is asked to make decisions. He can authorize the
project team to close the gaps, and perhaps select one of the proposals, if multiple proposals were
made. The client may also decide to relax the requirements, and thus eliminate the gap. Generally, the
client is asked to sign off on these decisions. From these decisions, the project team then knows
exactly how to proceed on the project.
5. What is the property of MD IO, IC, ODS?
Master data InfoObjects (MD IO) and ODS objects overwrite existing records; InfoCubes (IC) are additive.
6. What are the types of Attributes?
1. Display attributes
2. Navigational attributes
3. Exclusive attributes
4. Compounding attributes
5. Time-dependent attributes
6. Time-dependent navigational attributes
7. Transitive attributes
7. In which table will you find the BW delta details?
ROOSOURCE, RODELTAM
8. How to optimize dimensions?
Start with an ERD (Entity Relationship Diagram) and get the flow of the business. The ERD's purpose
is to figure out the relationships between the different entities, i.e. whether a relation is 1:1 or 1:N.
Then group the entities into dimensions.
Take a Sales Order scenario as a use case; the entity diagram would comprise three entities:
Customer, Location, Product. Other entities such as time characteristics and unit characteristics can be
ignored here (they will in any case map to two separate dimensions).
The entities Customer and Product share an N:N relation, i.e. customer A can buy products P1 and P2,
and product P1 can be bought by both customer A and customer B.
Do not keep the characteristics of such entities together: if you keep customer_id and product_id
in the same dimension, then because of the N:N relation the dimension table can become very large
and will hurt performance.
A better fit would be three dimensions:
Customer: Customer_id, Age
Location: Country, State
Product: Product Group, Product_id
The fourth is Time: Month, Year.
If a characteristic will take a very large range of values, keep it alone in a separate dimension and
flag that dimension as a line item dimension.

ASM Technologies Bangalore (Mahesh)


1. What are Conditions, Exceptions and Variables?
Conditions:
Conditions are restrictions placed on key figures in order to filter data in the query results.
Conditions restrict the data accordingly in the results area of the query so that you only see the data
that interests you. We can define multiple conditions for a query, and then activate or deactivate them
in the report itself to create different views of the data.
Examples:
We can display all key figure values above or below a certain value. Using ranked lists, you can
display your ten best customers by sales revenue.
Exceptions:
Exceptions are deviations from pre-defined threshold values or intervals. Exception reporting enables
us to select and highlight unusual deviations of key figure values in a query. Activating an exception
displays the deviations in different colors in the query result. Spotting these deviations early provides
the basis for timely and effective reactions.
Variables:
Variables are query parameters that are created in the BEx Query Designer and are filled with values
at query runtime. Variables can be processed in different ways (processing types); variables in BW are
global, i.e. they are defined once and are then available in the definition of all queries.
2. Types of Variables - Can you explain me a scenario of where have you created variables?
1. Characteristic value Variables.
2. Hierarchy Variables.
3. Hierarchy Node Variables.
4. Text Variables.
5. Formula Variables.
3. What are the models you have created in your business scenario?
Star Schema and Extended Star Schema were developed for the business scenario.
Layered Scalable Architecture Model (Refer to the PDF documents on details for the same).
4. What are the settings required for activating Business Content?
Refer to the document on Activation of Business Content.
5. The scenario is: a large volume of data, say around 10 million records, is scheduled in a process
chain, and the load executes with intermediate breaks (for example, records 1 to 30,000 load
successfully, records 31,000 to 35,000 fail, records 35,001 to 50,000 load successfully and records
50,001 to 60,000 fail). What method will you adopt for monitoring the failed data load?

For analyzing the errors, we can simply check the error log for the failed records; if the error occurred
due to invalid data, you can activate error handling in the DTP, correct the data in the error stack
(error PSA) and reload.
If the error stack is not available, you can make the same corrections in the PSA. First check what
error you are getting and then look for the different approaches.
6. Steps involved in Generic Extraction.
In the R/3 side:
1. Create DataSource in RSO2.
2. Check for data in Extractor checker RSA3.
In the BI side:
1. Check for Source system file using RSA13.
2. Replicate Metadata from R/3 to BW.
3. Create InfoPackage.
4. Create Transformations.
5. Create and Execute DTP.
7. What is a DELTA?
It is a feature of the extractor that refers to the changes (new or modified entries) that have occurred
in the source system since the last data transfer.
8. When I execute a DTP (PSA to DSO; the PSA contains 1,200,000 records), data package 1 takes
1,000,000 records by default and is processed; data packages 2 and 3 are processed as well.
Semantic key settings have to be made at DTP level. If they are set, all records with the same semantic
key will come into one package. Semantic groups define the sorting of the data at package level.
Sometimes we have routines in the transformation that work on a specific field.
Example: you have data for 10 employees and you want to update the employee records, so you sort
and delete duplicates. That only works if all records with the same employee number come into one
package; otherwise, if one employee is spread across 5 packages, you will end up with 5 records for
that employee in the target instead of 1. By defining a semantic group you set the rule that records with
the same employee number must arrive in one package (see the sketch below).
This also defines the sorting sequence of the data at package level if more than one field is set.
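A minimal sketch of the routine logic that relies on this grouping; the internal table and field names
(result_package, employee, record) follow the usual transformation end-routine naming but are
illustrative here:

* Keep only the most recent record per employee within this data package.
* This is only correct if the semantic group guarantees that all records of
* one employee number arrive in the same package.
SORT result_package BY employee ASCENDING record DESCENDING.
DELETE ADJACENT DUPLICATES FROM result_package COMPARING employee.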
Defiance Technologies Chennai (Shrikanth) 01.11.2011
1. Tell me about yourself right from your academics to present.
2. What will happen if MD is loaded after the Transaction data?

The master data table will remain blank until the master data load runs. It is best practice to load
master data first and then the transaction data; otherwise an error stating that "SIDs are not generated"
can appear and terminate the process.
3. What is referential Integrity? Give me a scenario of Referential Integrity?
When the values of one dataset depend on the existence of values in another dataset, this principle is
termed referential integrity.
Scenario: In the DTP, we can now set referential integrity checking for loading master data attributes.
This allows us to interrupt the loading process if no master data IDs (SIDs) exist for the values of the
navigation attributes belonging to the target InfoObject.
If the target InfoObject of the DTP contains navigation attributes, the Check Referential Integrity of
Navigation Attributes flag is input ready in the DTP on the Update tab page.
If we set this flag, the check will be performed during runtime for the master data tables of the
navigation attributes. The loading process will be stopped if SIDs are missing. If error handling is
activated in the DTP, the loading process is not stopped immediately. It is only stopped once the
maximum number of errors per data package has been reached.
If this flag is not set (default setting), the missing SIDs are created for the values of the navigation
attributes when the master data is updated.
4. Name some specific reports generated by you.
1. Average Daily Processing Time.
2. Delivery delay with respect to Sold to Party.
3. Delivery delay with respect to Sales Area.
5. What are all the cubes you have worked with?
0SD_C05 (Offers/Orders)
0SD_C03 (Sales Overview)
0SD_C04 (Delivery Services)
6. What is Rollup Process?
When we load new data packages (requests) into the InfoCube, these are not immediately available in
reporting for use in an aggregate. In order to supply the aggregate with the new InfoCube data, we
first have to load these into the aggregate tables for a set time frame. This process is known as a
Rollup.
7. Have you worked on Migration Project?
No.
Semantic Space Hyderabad (Maleeswaran & Sreelekha) 04039991604 03.11.2011
Attended the interview, but the JD was for BI with BO.

Semantic Space Hyderabad (Deepak) 040 33867459 24.11.2011


1. Tell me about yourself from your Educational BG.
2. Reporting: The scenario is like this; there is a Master Data (Product for ex) and an attribute
for the MD. How will you display the KF in a report?
In our requirement, Cost of Product is a key figure that is an attribute of the characteristic. Generally
we cannot use display attributes in a report for business logic (calculations, free characteristics,
filters), but the requirement needs this KF. It can be achieved using a formula variable with the
Replacement Path processing type.
3. Reporting: What is the difference between Characteristic Restriction and Default in filters of
Cell Definition?
Characteristic restrictions cannot be altered during query navigation, while default values can be
altered.
A characteristic restriction defines the range of characteristic values that the query can use at all.
Default values are the values you see when the query is first executed, and you can change them via
filters.
Example:
There is very little difference between characteristic restriction and default values. In both places we
can keep filter values; to see the difference, follow the steps below.
Create an input variable on any characteristic:
1. Keep this variable in the characteristic restriction and execute the report. Whatever value you
entered on the selection screen cannot be changed after the report has executed (it acts like a global
filter).
2. Now keep this variable in the default values and execute the report. After execution you can change
the value of the variable that you entered on the selection screen.
4. Reporting: The scenario is Sales over a period. How would you display the month in a report
dynamically? Say I want to display 8 months in the report.
It is not possible to create dynamic key figure elements at runtime in BEx. You can only populate the
values for any month dynamically if the column already exists; for example, if your report has 6 key
figure columns, you can restrict them to any 6 months of data by taking the start month as input.
But you cannot have a requirement like "enter a start month and then enter the number of months for
which you want to display data". That is not possible with the current architecture.
5. Modeling: You have some LO extractors say SD, MM and PP. How will you design the BW
staging layers for reporting??

Approach 1:
The standard way is:
Reporting layer - Cube and MultiProvider
Integration layer - Standard DSO
Acquisition layer - WDSO (write-optimized DataStore object)
1. In the PSA (InfoPackage) we have all the data coming from the different DataSources, including
duplicate data.
2. In the WDSO we stage the data in BW. A write-optimized DSO is used to stage data quickly for
further processing, as the data does not need to be activated after loading; the additional effort of
generating SIDs, aggregation and record-based delta is not required. A unique technical key is
generated for the WDSO, which has 3 more fields than a standard DSO (Request ID 0REQUEST, Data
Package ID 0DATAPAKID and Record Number 0RECORD).
3. The data granularity is higher in the WDSO, as it stores the complete data (including duplicate
records, since the technical key is unique) without aggregation. The immediate availability of data is
the main reason to use a WDSO: before complex business logic is applied, the data is collected in one
object (the WDSO) at document level. SAP recommends using a WDSO in the acquisition layer (from
PSA to WDSO) for efficient staging of data in BW.
4. The WDSO does not filter out duplicate data, so we use a standard DSO for delta handling and for
filtering duplicates. Business rules can be applied to the data in the transformation.
5. From the DSO we move the data to a cube, InfoSet or MultiProvider, where it is immediately
available for efficient reporting.
Approach 2:
WDSO -> DSO -> Cube -> MultiProvider
1. This model is usually followed for major modules such as FI (GL, CCA, AP, AR etc.) and HR. Most
businesses follow this model.
2. However, sometimes we have reporting requirements where we need only 1-2 reports and the data
loads are not daily. In that case we skip the WDSO and follow the flow DSO -> Cube -> MultiProvider.
3. If the amount of data is large and we have duplicate data or delta loads, we prefer a WDSO.
4. If you need only 1-2 reports on a single cube, you can skip creating the MultiProvider.
5. InfoSets are used when you need join functionality between two InfoProviders.
6. Creating reports on a WDSO or DSO should be avoided, as they store data at a very granular level
(a flat, file-like structure), so reporting performance will be very slow.
7. If you have complex business logic that you need to implement with routines, it is better to follow
the standard flow.
6. What are the options for getting the data into a Virtual Cube?
1. Direct access using a DTP.
2. Based on BAPI.
3. Based on FM.
7. What are the types of DSO?
Standard DSO, Direct Update DSO and Write Optimized DSO.

8. Difference between Direct update DSO and Write Optimized DSO?


A Direct Update DSO is the same as the transactional ODS object in BW 3.5: it has only one table,
and the data is written via service APIs; it is usually used for SEM applications. It cannot be used for
delta transfer to further connected targets; only full load is possible.
1. Load large amounts of transaction data (close to 100 million records) every month through flat
files and provide reporting capability on it to slice and dice the data.
2. A DataStore object for direct update stores data in a single version, so no activation is needed and
data can be loaded much faster.
3. Since the data is received through flat files, an ABAP program that calls the
RSDRI_ODSO_INSERT API to insert data into the DataStore object for direct update eliminates
additional layers such as generating ready-to-load files, PSA and change log tables.
A write-optimized DSO is used to save data as efficiently as possible so that it can be processed
further without activation and without the additional effort of generating SIDs, aggregation and
record-based delta. It is used for staging and for faster uploads.
1. There is always the need for faster data load. DSOs can be configured to be Write optimized.
Thus, the data load happens faster and the load window is shorter.
2. Used where fast loads are essential. Example: multiple loads per day (or) short source system
access times.
3. If the DataSource is not delta enabled. In this case, you would want to have a Write-Optimized
DataStore to be the first stage in BI and then pull the Delta request to a cube.
4. Write-optimized DataStore object is used as a temporary storage area for large sets of data
when executing complex transformations for this data before it is written to the DataStore object.

9. Apart from APDs, where else are Direct Update DSOs used? Comment.
Apart from APDs, direct update DSOs are also used for creating and changing data mining models,
executing data mining methods, integrating data mining models from third parties, and visualizing
data mining models. Data mining is the process of discovering patterns in data.
The DSO of type Direct Update is just like the transactional ODS in BW 3.x. To load data
into a direct update DSO you can go to transaction RSINPUT: enter the DSO name, choose
Create/Change and execute. Choose Create in the lower area, fill in the values and press Enter so the
record moves to the upper area. Then you can check the data in that DSO. This type of DSO has no
transformations or update rules; it has the property of write and read access.
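As a rough illustration of the API route mentioned earlier (the parameter names of
RSDRI_ODSO_INSERT are assumptions; check the actual interface in SE37 before use):

* lt_data is an internal table typed like the active table of the
* direct-update DSO; fill it (e.g. from a flat file) before the call.
CALL FUNCTION 'RSDRI_ODSO_INSERT'
  EXPORTING
    i_odsobject = 'ZMYDSO'     " assumed parameter: name of the direct-update DSO
    i_t_data    = lt_data      " assumed parameter: records to insert
  EXCEPTIONS
    OTHERS      = 1.
IF sy-subrc <> 0.
  " handle or log the insert error here
ENDIF.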
10. Is Transformation possible with Hierarchy for BW 3.5, BI 7.0, BI 7.3?
In BW 3.5 and BI 7.0, hierarchies cannot be loaded through transformations; you cannot use a
transformation as the connection between source and target for hierarchy DataSources (the 3.x flow
with transfer rules is still used).
BI 7.3 supports loading hierarchies via transformations and DTPs.
11. Whenever a job is scheduled, where from it picks the data and where does it place the data?
Source objects may be: base tables, setup tables or RSA7 (on the ECC side); PSA, DSO (active data
table, change log table) or InfoCube (F and E fact tables) on the BW side.

Target objects: PSA, DSO and InfoCube.

Steps performed during extraction from source to target (activities in a DTP):
1. Extracting data from source
2. Error handling
3. Filter
4. Read transformation logic (Rule type and reading if any start routine or end routine or expert
routine).
5. Updating data to target.
12. Give me a scenario of filling of SETUP Tables and deletion of SETUP Tables.
While doing a full repair/delta initialization for LO data sources in BW the data is accessed from Set
up tables and not from source tables. Once the delta process is set up the data moves into delta queue
only and it does not move to set up tables.
Before a full repair it is necessary to fill the set up table from source tables in R/3 with required data
Example: If there is an issue in data being extracted through delta load from LO data sources (E.g.
2LIS_11_VAHDR, 2LIS_11_VAITM) and it is required to reload data into BW from R/3, full repair
has to be used to reload the data
For using full repair option for LO data sources the set up tables need to be filled with the
required data from the source SAP R/3 tables
Setup Tables:
1. Setup tables are kind of interface between the extractor and application tables.
2. Setup tables get data from actual application tables (OLTP tables storing transaction records)
3. Used only for LO extraction.
4. Setup for initial load of historic data to BW or in case a full repair load is required later.
If you know the extract structure, then the setup table name is the extract structure name followed by
SETUP, e.g. MC13VD0ITMSETUP is the setup table for extract structure MC13VD0ITM.
13. Difference between LO, FI and COPA Extractor?
1. One of the main differences is that the CO/FI data sources are "pull based" i.e. that the delta
mechanism is based on a time stamp in the source table and data is pulled from these tables into the
RSA7 queue.
2. The LO data sources are "push" based meaning, that the delta mechanism is based on an
intermediary queue to which the delta records are pushed at the time of transaction. From the
intermediary the delta records are transferred to RSA7 queue.
3. For LO we have setup tables, but for FI there are no setup tables. With setup tables, the R/3 data
first comes into the setup tables and from there into BI; FI data is extracted directly from the R/3
tables.
14. How would you decide that an aggregate should be built on an InfoCube?

When a query is frequently used for reporting and we have huge amount of data in the InfoCube on
which the query is built then in that case we can go for the creation of aggregates on that InfoCube.
This will increase the query performance.
1. The execution and navigation of query data leads to delays with a group of queries.
2. You want to speed up the execution and navigation of a specific query.
3. You often use attributes in queries.
4. You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy
levels.
15. If a client says that the load performance is very poor. What are the parameters that need to
be checked for initially?
In order to increase the load performance you can follow the below guidelines:
1. Delete indexes before loading. This will accelerate the loading process.
2. Consider increasing the data packet size.
3. Check the unique data records indicator if you require only unique records to be loaded into DSO.
4. Uncheck or remove the BEx reporting check box if the DSO is not used for reporting.
5. If ABAP code is used in the routines then optimize it. This will increase the load performance.
6. Write-optimized DSOs are recommended for large sets of data records, since there is no SID
generation for a write-optimized DSO. This improves performance during the data load.
Sony India Limited (Prasad) 07.12.2011
1. What are Free Characteristics?
We put the characteristics which we want to offer to the user for navigation purposes in this pane.
These characteristics do not appear on the initial view of the query result set; the user must use a
navigation control in order to make use of them. We do not define the filter values here.
3. What is a Scaling Factor?
A scaling factor simplifies the display of key figure values. We can specify a scaling
factor between one and one billion. If you set 1000, for example, the value 3000 is displayed as 3 in
the report.
Sometimes the user wants to see amounts in lakhs, with the scaling shown in the column header:
Example without Scaling Factor:
Amount
150000
120000
260000
Example:
With Scaling Factor
Amount in Lacs
1.5
1.2
2.6

In both examples the amounts are in lakhs, but in the second example the amount is scaled by a
factor of 100,000 in the display, which is nothing but scaling.
4. What are Boolean Operators? What is the end result of a Boolean Operator?
Boolean operators are used for conditional statements in the Query Designer when creating a CKF.
IF <Expression > 5> THEN <Expression4 * 20> ELSE <Expression6 * 30>
If the first expression above is true, the THEN part is evaluated, otherwise the ELSE part.
The output of a Boolean operation itself is always 1 (true) or 0 (false); in a conditional statement,
if the condition is true the THEN part applies, otherwise the ELSE part.
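As an illustration (the expression names are placeholders), the same logic is typically written in a BEx
formula using Boolean comparisons, which return 1 or 0 and are multiplied with the two branches:

(Expression > 5) * (Expression4 * 20) + (Expression <= 5) * (Expression6 * 30)

When Expression is greater than 5, the first comparison returns 1 and the second returns 0, so only the
THEN branch contributes to the result; otherwise only the ELSE branch does.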
5. What is 1ROW COUNT?
Approach 1: The InfoObject 1ROWCOUNT is contained in all flat InfoProviders, that is, in all
InfoObjects and ODS objects. It counts the number of records in the InfoProvider. From the row
count display you can see whether or not values from the InfoProvider/InfoObject are really being
displayed.
Approach 2: The row count in BI is basically used to count the number of records displayed by an
InfoProvider. When you select the various characteristics and key figures to look at the data of an
InfoProvider, also select the row count. For every record the value of the row count is one, so you
can use a summation to see how many records you are looking at, since there is no "number of
records" function as there is when displaying PSA records.
6. What are Variable Offsets in BEx?
Variable offsets are used when you need to calculate more than one value as the characteristic
restriction, based on a value entered by the user. A common use of the offset is in query requiring
time ranges. Lets say you need to provide the results for a Month entered by the user, as well as the
results for 6 months prior to that month. You can specify one variable (user entry) in an RKF and
create another RKF with the same variable and specify -6 as the offset.
See this for more info:
http://help.sap.com/saphelp_nw04/helpdata/en/f1/0a563fe09411d2acb90000e829fbfe/content.htm
7. I'm trying to calculate the difference between the two periods of the fiscal year 2007, for
example period 02 (February month) amount minus period 01 (January month) amount by
entering -01 or -1 in the Variable Offset, but the results of the two periods returned the SAME
amounts.
1. Create 2 RKF; use the appropriate variable offset values.
2. Then create a CKF to derive the difference of RKF 1 and RKF 2.
Rockwell Collins Hyderabad (KEVIN from USA) 07.12.2011
1. What is the default packet size in a DTP or InfoPackage? What would happen if you increase
or decrease the size of the Packet Size?

Approach 1: The default DTP data package size is 50,000 records; if you decrease the data packet
size, your loading time may increase, but the chance of data load failures is reduced.
Approach 2: Default DTP data package size = 50,000 records.
For InfoPackages the default packet size is 10K and can be customized.
To check the data packet sizes for InfoPackages, go to RSA1 -> Administration ->
Current Settings -> BI system data transfer, or use transaction RSCUSTV6 directly.
2. If a customer complaints the performance of the query is slow. What are the performance
activities to be carried out to improve the performance of the query?
Create aggregates and indexes, check BW statistics, and check the OLAP cache; these account for
the performance tuning of queries.
3. Routines Comment. Give a Scenario for InfoPackage Routine, Start Routine and End
Routine?
Routines are used to define complex business rules. In most cases the data does not arrive in
the desired form before being updated into the target. In certain scenarios the output needs to be
derived from the incoming data, and in such cases we create routines.
InfoPackage routines: if the business scenario requires the flat file that contains the data to be changed
from time to time, the file name would have to be updated manually every time it changes.
Instead, a routine can be created to determine the file. Whenever the InfoPackage runs, the routine is
executed and the data is selected according to its logic.
InfoPackage routines can be created at:
1. Extraction tab: a routine to determine the name of the file (see the sketch after this list).
2. Data Selection tab: to determine the data selection from the source system.
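A minimal sketch of the file-name case. The surrounding FORM routine is generated by the system when
you choose the routine option on the Extraction tab, so the changing parameter name used here
(p_filename) is an assumption taken from that template:

* Build a date-dependent file name and return it to the InfoPackage.
DATA lv_file TYPE string.
CONCATENATE '/interfaces/sales_' sy-datum '.csv' INTO lv_file.
p_filename = lv_file.    " assumed changing parameter of the generated routine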
Start Routines: The start routine is run at the start of the transformation. The start routine has a table
in the format of the source structure as input and output parameters. It is used to perform preliminary
calculations and store these in a global data structure or in a table.
Scenario: each incoming record is identified by a unique job number key together with its start and end
dates. In the output there is one key figure, "total number of days", which is the difference between the
end and start dates; further processing is based on the value of this key figure.
End routine: an end routine is a routine with a table in the target structure format as input and
output parameter. You can use an end routine to post-process data after the transformation on a
package-by-package basis. The data is stored in the result package.
4. Define Condition and Exception in Query?
Conditions:
Conditions are restrictions placed on key figures in order to filter data in the query results.
Conditions restrict the data accordingly in the results area of the query so that you only see the data
that interests you. We can define multiple conditions for a query, and then activate or deactivate them
in the report itself to create different views of the data.
Examples:
We can display all key figure values above or below a certain value. Using ranked lists, you can
display your ten best customers by sales revenue.

Exceptions:
Exceptions are deviations from pre-defined threshold values or intervals. Exception reporting enables
you to select and highlight unusual deviations of key figure values in a query. Activating an exception
displays the deviations in different colors in the query result. Spotting these deviations early provides
the basis for timely and effective reactions.
Mind Tree Bangalore, (Uma Shankar & Srinivas) 14.12.2011
1. What are the different types of DELTA methods available for Generic Extractors?
Delta in Generic extraction is based on Timestamp, Calday and Numeric pointer.
To add, you have Safety upper limit and Safety lower limit while defining the Delta modes.
In case you have selected numeric pointer as delta mode and chosen the Safety upper limit
with a value of 10. In this case, suppose during last load the value of the numeric pointer
was 1000 and next time you try to load the value has reached 1100.
You have 100 new records. As you have chosen safety upper limit as 10, then the data load
will start from numeric pointer value 990-1100(110 records), so as to decrease the risk of
losing data. This data can be loaded to a DSO only as duplicate records arrive.
Safety Interval Upper Limit
The upper limit for safety interval contains the difference between the current highest value at the
time of the delta or initial delta extraction and the data that has actually been read. If this value is
initial, records that are created during extraction cannot be extracted.
Example: a timestamp is used to determine the delta. The timestamp that was read last stands at
12:00:00 and the next data extraction begins at 12:30:00, so the selection interval is 12:00:00 to
12:30:00; at the end of the extraction the pointer is set to 12:30:00.
Now suppose a transaction is created at 12:25 but not saved until 12:35. As a result it is
not contained in the extracted data, and because of its timestamp the record is not included in the
subsequent extraction either.
To avoid this discrepancy, the safety margin between read and transferred data must always be longer
than the maximum time the creation of a record for this DataSource can take (for timestamp deltas),
or a sufficiently large interval (for deltas using a serial number).
Safety Interval Lower Limit
The lower limit for safety interval contains the value that needs to be taken from the highest value of
the previous extraction to obtain the lowest value of the following extraction.
A timestamp is used to determine the delta. The master data is extracted. Only images taken after the
extraction are transferred and overwrite the status in BW. Therefore, with such data, a record can be
extracted more than once into BW without too much difficulty.
Taking this into account, the current timestamp can always be used as the upper limit in an extraction
and the lower limit of the subsequent extraction does not immediately follow on from the upper limit
of the previous one. Instead, it takes a value corresponding to this upper limit minus a safety margin.
This safety interval needs to be sufficiently large so that all values that already contain a timestamp at
the time of the last extraction, but which have yet to be read (see type 1), are now contained in the

extraction. This implies that some records will be transferred twice. However, due to the reasons
outlined previously, this is irrelevant.
You should not fill the safety intervals fields with an additive delta update, as duplicate records will
invariably lead to incorrect data.
2. The scenario is this:
there are 2 extractions running on the R/3 side, and at the same time one transaction data load
has to start on the BW side.
1. How do you automate the load to start on the BW side at a specific time using process chains?
2. The process chain has to start immediately after the completion of the extraction on the R/3
side. How will you automate the process?
You can make the process chain event-triggered. Create a program that triggers the event in R/3 and
schedule a job with 2 steps:
- the extraction job in R/3
- the program that triggers the process chain (via the event) after the extraction job.
3. How do you rectify a data load failure due to special characters?
There are some characters that BW does not allow to be loaded even if you select ALL_CAPITAL in
RSKC. These are characters with hexadecimal values 00 to 1F, i.e. values such as tab, carriage return
and backspace, which SAP cannot display and which appear as hash values. You can write ABAP
code to remove such values in the transformation/update rules.
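A field-routine sketch of that idea, assuming the affected value arrives in a character field (the source
field name is illustrative); the regular expression removes everything outside the printable character
range:

* Strip non-printable characters (hex 00-1F etc.) before passing the value on.
DATA lv_text TYPE string.
lv_text = SOURCE_FIELDS-descr.                            " illustrative field name
REPLACE ALL OCCURRENCES OF REGEX '[^[:print:]]' IN lv_text WITH ''.
RESULT = lv_text.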
5. How do you identify which queries are in the 3.x version and which are in the 7.0 version (at query level)?
Approach 1: Go to RSA1 -> Metadata Repository -> "BEx Query"; there we can find the
detailed information.
Approach 2: Enter your query's technical name as input for the field COMPID in table
RSZCOMPDIR and execute.
If the field VERSION in table RSZCOMPDIR has a value below 100, the query is still in the
3.x version; if it is 100 or greater, it has already been migrated to BI 7.0.
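If you prefer a quick programmatic check over SE16, a one-off snippet like the following reads the same
field (the query name is a placeholder):

DATA lv_version TYPE rszcompdir-version.
* The query technical name goes into COMPID; VERSION below 100 means a 3.x query.
SELECT SINGLE version FROM rszcompdir INTO lv_version
  WHERE compid = 'ZSD_SALES_Q001'.
IF sy-subrc <> 0.
  WRITE: / 'Query not found in RSZCOMPDIR'.
ELSEIF lv_version < 100.
  WRITE: / 'Query is still in the 3.x version'.
ELSE.
  WRITE: / 'Query has already been migrated to BI 7.0'.
ENDIF.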
5. What does the T-code RSRV and RSRT stand for?
RSRV is used to perform Analysis and repair of BW InfoObjects.
RSRT is used to perform Query Monitoring.
6. What is a Return Table? Give me a scenario?
Usually the update rule sends one record to the data target; using this option you can send multiple
records.
Example: the sales data of a region across three different months of a year, with their sales values,
arrives in a single source record. The target structure is entirely different: it has only one month field
and one sales value field, so the target should be updated with three individual records.
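A sketch of how such a 3.x update routine with the return table option might look; COMM_STRUCTURE
and RESULT_TABLE are the usual routine parameters, while the month and sales field names are
illustrative:

* One incoming record carrying three month/value pairs becomes three target records.
DATA ls_result LIKE LINE OF result_table.
CLEAR result_table.
ls_result-calmonth = comm_structure-month1.   " illustrative source fields
ls_result-sales    = comm_structure-sales1.
APPEND ls_result TO result_table.
ls_result-calmonth = comm_structure-month2.
ls_result-sales    = comm_structure-sales2.
APPEND ls_result TO result_table.
ls_result-calmonth = comm_structure-month3.
ls_result-sales    = comm_structure-sales3.
APPEND ls_result TO result_table.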
7. Give me a scenario where you created reports from scratch, based on modeling and design
for your business scenario.

Profitability Analysis - Actual vs. Plan, Sales Performance by Service Rendered Date, Shift
Consumption Report, Scrap Quantity Report, Spending Comparison Actual vs. Prior Year, Sales
Order/Volume Report, Daily Revenue by Customer, etc.
8. How do you create a customized characteristic InfoObject? Is there any method other than the
normal procedure using an InfoObject catalog or unassigned nodes?
1. You can use FMs:
CREATE_IOBJ_MULTIPLE
ACTIVATE_IOBJ_MULTIPLE
2. You can use transaction CTBW_META, which can be used to create a lot of InfoObjects in one go,
or use the FM BAPI_IOBJ_CREATE.
Issue: I have a DataSource with 4 delta requests in PSA and this DataSource is mapped with 2
data targets i.e. let us say 2 cubes (Cube A and Cube B).Now my scenario here is to fill Cube A
with 2 requests and Cube B with 2 requests. Suggest me the ways to do this....
Solution: Let us say you have two requests in PSA - Req1 and Req2.
In the DTP, on the Extraction tab, you have an option "Get All New Data Request by Request"; once you
check this you will get an additional option, "Retrieve Until No More New Data".
If you do not tick the second option, you can run the delta job one by one for loading requests to
the right cube and keep on deleting the request from PSA once loaded.
You will not get this option in Full.
Explanation:
If you use the "Get all new data request by request" option without "Retrieve Until No More New
Data", the option changes to "Get one request only", and when you execute the DTP, Req1 will be
loaded to the data target. If you execute the same delta DTP again, Req2 will be loaded; that means
only one request from the PSA is loaded to the data target per run.
If you tick both options, "Get all new data request by request" and "Retrieve until no more new
data", both requests Req1 and Req2 will be loaded to the data target and you will see two requests
in the data target.
If you do not use the "Get all new data request by request" option and execute the DTP, both requests
Req1 and Req2 will be loaded to the data target and you will see only one request in the data target.

SAP LABS (INOVATE)


There are 2 DSO's (1st level), One DSO contains the Header data & another DSO contains the
Item Data. Now the question is, How would you combine the data from these two DSO into
another DSO (2nd level) which contains both the header and item data by making use of simple
ABAP Coding?
Approach 1:
The simple way (and almost transparent) would be to make a simple transformation between the item
level DSO and the 2nd level DSO, and then in the End routine read the data from the header and add
it to the relevant fields in the result package. This solution will only partly allow for delta, as a change
on the header level would not trigger a delta. You could overcome this problem by making a
transformation from header to item level, but it's not as simple as you will need to explode the header
level into all items, but it's a possible way to go.
A pure ABAP way would be to make the 2nd level DSO a direct update DSO as it's pretty easy to
write into using normal ABAP. There is an API for that or actually you can just write directly into the
A-table (there is only an A-table on a direct update DSO). The down side of this method is that you
are not able to run delta loads.
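A minimal end-routine sketch of the first variant, assuming the header DSO is called ZHDR (active
table /BIC/AZHDR00) and that the document number and the header field to be copied are named as
shown; all of these names are illustrative:

* Read the header attributes for all document numbers in this package
* and copy them onto the item records.
TYPES: BEGIN OF ty_hdr,
         doc_number TYPE char10,     " illustrative field names and types
         sold_to    TYPE char10,
       END OF ty_hdr.
DATA: lt_hdr TYPE SORTED TABLE OF ty_hdr WITH UNIQUE KEY doc_number,
      ls_hdr TYPE ty_hdr.

IF result_package IS NOT INITIAL.
  SELECT doc_number sold_to
    FROM /bic/azhdr00                " active table of the header DSO (assumed name)
    INTO TABLE lt_hdr
    FOR ALL ENTRIES IN result_package
    WHERE doc_number = result_package-doc_number.
ENDIF.

LOOP AT result_package ASSIGNING <result_fields>.
  READ TABLE lt_hdr INTO ls_hdr
       WITH TABLE KEY doc_number = <result_fields>-doc_number.
  IF sy-subrc = 0.
    <result_fields>-sold_to = ls_hdr-sold_to.
  ENDIF.
ENDLOOP.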
Approach 2:
Avoid ABAP coding
Create a new DSO and assign to it the InfoObjects you wish to combine from the 1st level DSO's.
Additionally you have to carefully define the keys to the new InfoObject. E.g. Doc. Number and line
Item number.
Then create an InfoSet joining the two 1st level DSO's. Create a transformation from the InfoSet to
the new DSO.
What is a join? Explain the different types of joins?
A join is a query that retrieves related columns or rows from multiple tables. Self join: joining a
table with itself. Equi join: joining two tables by equating two common columns. Non-equi join:
joining two tables on a condition other than equality (e.g. <, >, BETWEEN). Outer join: joining two
tables in such a way that the query can also retrieve rows that have no corresponding join value in the
other table.

What are the benefits of loading requests in parallel?


Several requests can be updated more quickly in the ODS object.
Can requests be loaded and activated independently of one another?
Yes. You need to create a process chain that starts the activation process once the loading process is
complete.
Are there a maximum number of records that can be activated simultaneously?
No.

Can the loading method that is used to load the data into the ODS object be changed from a full
update to a delta update?
No. Once a full update has been used to load data into the ODS object, you are no longer able to
change the loading method for this particular DataSource/source system combination. An exception is
updating an ODS object to another (not yet filled) ODS object if there are already data targets that
have been supplied with deltas from the ODS object. Then you can run a full upload, which is
handled like an init, into this empty ODS object, and then load deltas on top of that.
Why is it, that after several data loads, the change log is larger than the table of active data?
The change log stores before- and after-images of every request, i.e. the complete change history,
so it can grow larger than the table of active data, which only holds the current status of each record.
Can data be deleted from the change log once the data has been activated?
If a delta initialization for updates in connected data targets is available, the requests have to be
posted first before the corresponding data can be deleted from the change log. In the administration
screen for the ODS object, you use the Delete Change Log Data function. You can schedule this
process to run periodically.
However, you cannot delete data that has just been activated immediately because the most recent
deletion selection you can specify is Older than 1 Day.
Are locks set when you delete data from the ODS object to prevent data being written
simultaneously?
In any case, you are not permitted to activate it simultaneously.
When is it useful to delete data from the ODS object?
There are three options available for deleting data from the ODS object: by request, selectively, and
from the change log.
What are the benefits of the new maintenance options for secondary indexes?
The secondary indexes that are created in the maintenance screen of the ODS object can be
transported. They can be created in the development system and transported into the production
system.
Why do I need a transactional ODS object?
Transactional ODS could be deployed to load data quickly (without staging).

Real-time Scenarios from SDN Forum

1. In process chains, for the first 10 days of the month we need to extract data 8 times a day, and for
the remaining days of the month we need to extract data once a day.
Approach 1: You can do this with the following steps.
SM64 -> create an event -> in your process chain, maintain the start variant as event-based and
give that event name there.
Here are the steps to have this triggered from the event:
i) Make your process chain trigger based on the event; the chain will then start once
the event is raised.
ii) Let us take the following PC as an example:
ZL_TD_HCM_004 -- this PC runs after the event START_ZL_TD_HCM_004.
iii) Go to T-code SM36.
Here we define the background job, which will be visible in SM37 after saving it.
iv) It will ask for an ABAP program to be entered. Enter Z_START_BI_EVENT and select the variant
from the list (based on the process chain, you can select it).
v) Then select the start conditions, give the start time of the process chain and select the periodicity.
vi) Save the newly created job in SM36. It will now be available in SM37.
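A sketch of what a trigger program such as Z_START_BI_EVENT typically contains; it simply raises the
background event that the start variant of the chain is waiting for:

REPORT z_start_bi_event.
* Raise the background event used in the process chain's start variant.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid = 'START_ZL_TD_HCM_004'
  EXCEPTIONS
    OTHERS  = 1.
IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised'.
ENDIF.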
Approach 2: Another option is to use InfoPackage settings. Create 2 InfoPackages.
1st one for the first 10 days:
On the "Schedule" tab, set the scheduling option to hourly and save it. You will then see a
"Periodic Processing" section with two options:
1. Do Not Cancel Job
2. Cancel Job After X Runs
For option 2, enter 8.
2nd one from the 11th day on:
Create another InfoPackage for the daily load.
In the process chain, you can use the "Decision Between Multiple Alternatives" process type
(in the command formula, write a simple formula such as:
RIGHT (2, Current Date) <= 10 ).
Then include 2 ABAP programs, one for each option (using "BAPI_IPAK_START" with the respective
InfoPackage name) in the decision branches (use the InfoPackages as variants).
This solves the requirement
"for the first 10 days we need to extract data 8 times a day, and for the remaining days of the month we
need to extract data once a day."
For the further update via DTP on the remaining days (i.e. once a day), you can directly
include the DTP in the chain.
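The two small programs behind the decision branches could look roughly like the snippet below;
BAPI_IPAK_START is the BAPI named above, the InfoPackage technical name is a placeholder, and the
exact parameter list should be verified in SE37:

* Start the hourly InfoPackage; the daily branch uses the same call with the
* technical name of the daily InfoPackage instead.
CALL FUNCTION 'BAPI_IPAK_START'
  EXPORTING
    infopackage = 'ZPAK_SALES_HOURLY'.   " placeholder InfoPackage technical name
* Further importing/tables parameters (request ID, return messages) can be
* added after checking the BAPI interface.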
2. If there is an object, e.g. 0MATERIAL, that is used in an InfoCube and the same object is also used
as a navigational attribute, how will the two differ in reporting?

We know that a navigational attribute and a normal characteristic behave the same in a report. Is there
any difference in the way they are shown in the report?
One scenario could be if your Navigation Attribute is time dependent. In that case you keep your cube
data as per the time of transaction. Let's say you have material and material type both in cube.
Now, material M01 has type "Software" during the transaction (in 2010). So you have your
transaction data.
Material | Type | Year | Amount
M01 | Software | 2010 | 1000 <-- this is your result in BEx
Now, in 2011 the material type changed to "Peripheral". If your master data is time dependent, then
you can have the history of this change in the master data table, but not in the cube.
So if you report on the cube taking the material type from the (time-dependent) navigational attribute,
you will see in the result:
M01 | Peripheral | 2010 | 1000 <-- this is your result in BEx
Now from the above example if you use Material Type both from Navigational Attribute and Cube,
you can show both 'Historical" data and 'Current' Data.
3. A report based on G/L master data and G/L line item is required. A field in master table has
N no. of entries in line item table. We need to display the field in master table as well as all the
transactions related to that field.
In this case since the data which we are looking for is in G/L master and G/L line item and we
have SAP standard ones for both activated and data pulled from ECC. I assume we define the
link only between the data source and the respective data target at the transformation level for
the data to flow but not between the info providers. In that case on the report how does it pull
the transactions only belonging to that particular master data field?
Let's say in your Cube you have some transaction data.
_____________________________________________________________________
Fiscal Year/Period | G/L Account | Comp Code | Debit Amount
001/2011 | 400025 | 1005 | $5000
002/2011 | 400025 | 1005 | $6000 *********** this is your transaction Data
001/2011 | 400050 | 1006 | $7000
003/2011 | 400050 | 1006 | $1000
________________________________________________________________________
Now in the InfoObject ( 0GL_ACCOUNT) you have loaded master data for GL Account, let's say
G/L Acc has an attribute called Account Type and you have the master as follows
G/L Account | Account Type
400025 | Revenue
400026 | Cost **************** this is your master data
400050 | Expense

Now, when you run the query if you pull Account Type attribute of G/L Account in the report OLAP
system performs a runtime join between your transaction data G/L Account and master data of G/L
Acc and show you Account Type.
4. How to identify the size of the DSO and InfoCube?
Run the program SAP_INFOCUBE_DESIGNS in BW; you can also get some information from
transaction DB02.
There are two ways to measure the size of a cube: one is an estimate, the other is an accurate
reading in MB or GB.
Before you build the cube, if you want to estimate what its size will be, you can use the following
formula:
IC (required disk space in bytes) = F x ((ND x 0.30) + 2) x NR x NP
where
F = ((ND + 3) x 4 bytes) + (22 bytes x NK)
and you add roughly
30% per dimension in the fact table
100% for aggregates
100% for indexes
F = fact table record size in bytes
ND = number of dimensions
NK = number of key figures
NR = number of records
NP = number of periods
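Purely as an illustration of plugging numbers into the formula (the figures are invented): with ND = 10,
NK = 5, NR = 1,000,000 and NP = 1,
F = ((10 + 3) x 4) + (22 x 5) = 52 + 110 = 162 bytes per fact record, and
IC = 162 x ((10 x 0.30) + 2) x 1,000,000 x 1 = 162 x 5 x 1,000,000 = 810,000,000 bytes, i.e. roughly
0.8 GB before compression.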
If, as in your case, the cube and ODS already exist, use the following calculation (this is for
the cube).
Data on the BW side is measured in number of records, not TB or GB; the size, if required, has to be
calculated. Either use the formula given above to translate the number of records into TB or GB, or,
the easy way, estimate it yourself from the data growth and put an intelligent guess on it. It depends on
how accurate you want to be.
The exact method, however, still remains as under:
Go through SE16. For example if the cube is ZTEST, then look at either E table or F table by typing
in /BIC/EZTEST or /BIC/FZTEST and clicking on "number of records", just the way we do for other
tables.
If the cube has never been compressed (a rare case if you are working on a reasonable project), then
you only need to look at the F fact table, as all the data is in the F fact table.
You can get the table width by going to SE11, type in the table name; go to "extras" and "table
width". Also you can get the size of each record in the fact table in bytes. Next, you can find out the
size of all dimension tables by doing this. The complete picture of extended star schema should be
clear in your mind to arrive at the correct figure.
Add all these sizes (fact table width + all dimension tables widths) and multiply it by number of
records in the fact table. This gives you total size of the cube.
If the cube is compressed, then you will need to add records in E table also because after
compression, data moves from F Fact table to E Fact table, hence you need to look into the E Fact
table also.
2. What is a change request? What is the approach?
A CR (Change Request) is a unique number in ITSR (Information Technology Service Request system).
The ITSR system is the one through which we get work from the client. If the client wants a new object
(say a cube) in their production system, they prepare a BRD (Business Requirement
Document) and place it in the ITSR system. The ITSR system then assigns a unique
number to that document, and as BI consultants we have to analyze the document and prepare the
workflow.
Who prepares the Technical and Functional Specifications?
Technical Specification: here we mention all the BW objects (InfoObjects, DataSources, InfoSources
and InfoProviders). We also describe the data flow and the behaviour of the data loads
(delta or full), and can state the duration of cube activation or creation. Purely technical BW
details go into this document; it is not a document for end users.
Functional Specification: here we describe the business requirements, i.e. which
business areas we are implementing (SD, MM, FI etc.), and we list the KPIs and the
deliverable reports for the users. This document is shared between the functional
consultants and the business users, and it is applicable to end users as well.
Give me one example of a Functional Specification and explain what information we will get
from that?
Functional specs are the requirements of the business user. Technical specs translate these requirements
into technical terms. Let's say the functional spec says:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The Company variable should default to USA, but if the user wants to change it, they can open the
drop-down list and choose other countries.
3. The calculations or formulas for the report will be displayed with a precision of one decimal place.
4. The report should return 12 months of data depending on the fiscal year that the user enters, or
display quarterly values.
Functional specs are also called software requirements. From this the technical spec follows, to resolve
each of the line items listed above:
1. To give the option of key date, fiscal year and fiscal version, certain InfoObjects should be available
in the system. If available, should we create variables on them so they can be used as user entry
variables? To create variables, what is the approach, where is it done, what are the technical names of
the objects used and of the objects created as a result of this report?
2. The same kind of explanation goes for the rest: how do you set up the variable?
3. What property changes will you make to get the required precision?
4. How will you get the 12 months of data? What will be the technical and display names of the report,
who will be authorized to run it, etc. are all clearly specified in the technical spec.
What is the difference between filter & Restricted Key Figures? Examples & Steps in BI?
A filter restriction applies to the entire query; an RKF is a restriction applied to a key figure. Suppose,
for example, you want to analyse data only after 2006, showing sales in 2007 and 2008 against
materials, and you have a key figure called Sales in your cube.
You put a global restriction at query level by putting Fiscal Year > 2006 in the filter. This
makes only data with fiscal year > 2006 available for the query to process or show.
Now, to meet a requirement like the one below:
Material | Sales in 2007 | Sales in 2008
M1 | 200 | 300
M2 | 400 | 700
you need to create two RKFs. "Sales in 2007" is one RKF, defined on the key figure Sales
restricted by Fiscal Year = 2007; similarly, "Sales in 2008" is an RKF defined on the key figure Sales
restricted by Fiscal Year = 2008. That is the difference: the filter restricts at query level, so in
the case above the filter Fiscal Year > 2006 makes data from the cube for the years 2001 to 2006
unavailable to the query. The query is then only left with data from 2007 and 2008, and within that
data you can design your RKFs to show only 2007, only 2008, and so on.
How do we gather the requirements for an Implementation Project?
One of the biggest and most important challenges in any implementation is gathering and
understanding the end user and process team functional requirements. These functional requirements
represent the scope of analysis needs and expectations (both now and in the future) of the end user.
These typically involve all of the following:
- Business reasons for the project and business questions answered by the implementation
- Critical success factors for the implementation
- Source systems that are involved and the scope of information needed from each
- Intended audience and stakeholders and their analysis needs
- Any major transformation that is needed in order to provide the information
- Security requirements to prevent unauthorized use
This process involves one seemingly simple task:
find out exactly what the end users' analysis requirements are, both now and in the future, and build
the BW system to these requirements. Although simple in concept, in practice gathering and reaching
a clear understanding and agreement on a complete set of BW functional requirements is not always
so simple.
How do we decide what cubes have to be created?
It depends on your project requirements. Customized cubes are not mandatory for all projects; only if
your business requirement differs from the given scenario (BI Content cubes) do we opt for
customized cubes. Normally, BW customization or the creation of new InfoProviders
depends on your source system. If your source system is something other than R/3, you will
have to customize all objects. If your source system is R/3 and your users use only the R/3
standard business scenarios such as SD, MM or FI, then you do not need to create any InfoProviders
or enhance anything in the existing BW Business Content. But 99% of the time this is not the
case, because the client will surely have added new business scenarios or
enhancements. For example, in my first project we did a BW implementation for Solution
Manager. There we activated all the Business Content in CRM, but the source system had
new scenarios for message escalation, ageing calculation etc. For that part of the business scenario we
could not use standard Business Content, so we reused existing InfoObjects, created
new InfoObjects that were not in the Business Content, and then built custom DataSources,
InfoProviders and reports.
Tickets and Authorization in SAP Business Warehouse - What are tickets? Give an example.
Tickets are the tracking tool by which the user tracks the work we do. They can be change
requests, data loads or whatever else. They are typically classified as critical or moderate; critical can
mean "needs to be solved within a day or half a day", depending on the client. After solving the issue,
the ticket is closed by informing the client that the issue is solved. Tickets are raised during support
projects and can be any kind of issue or problem. If the support person faces an issue, he asks the
operator to raise a ticket; the operator raises it and assigns it to the respective person. Critical means
the most complicated issues; how this is measured depends on the contract. The concept of a ticket
varies from contract to contract between companies. Generally, tickets raised by the client are
handled based on priority, e.g. high priority, low priority and so on. A high-priority ticket has to be
resolved ASAP; a low-priority ticket is considered only after the high-priority tickets have been
attended to. Typical tickets in production support work could be: 1. Loading any missing master data
attributes/texts. 2. Creating ad hoc hierarchies. 3. Validating the data in cubes/ODS. 4. Resolving any
loads that run into errors. 5. Adding/removing fields in any of the master data/ODS/cubes. 6.
DataSource enhancement. 7. Creating ad hoc reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the
infopackages for the attributes/texts mentioned by the client. 2. Create ADHOC hierarchies. - Create
hierarchies in RSA1 for the info-object. 3. Validating the data in Cubes/ODS. - By using the
Validation reports or by comparing BW data with R/3. 4. If any of the loads runs into errors then
resolve it. - Analyze the error and take suitable action. 5. Add/remove fields in any of the master
data/ODS/Cube. - Depends upon the requirement 6. Data source Enhancement. 7. Create ADHOC
reports. - Create some new reports based on the requirement of client.
Production support
In production support you mostly do two kinds of work: (1) looking into data load errors and (2) solving tickets raised by users. Data loading involves monitoring process chains and solving errors related to data loads; besides this you may also do enhancements to existing cubes and master data, as required. Users raise a ticket when they face a problem with a query, for example a report showing wrong or incorrect data, slow system response, or high query runtime. Normally the production support activities include:
* Scheduling
* R/3 job monitoring
* BW job monitoring
* Taking corrective action for failed data loads
* Working on tickets with small changes in reports or in AWB objects
The activities in a typical production support engagement are: 1. data loading, via process chains or manual loads; 2. resolving urgent user issues (helpline activities); 3. modifying BW reports as per user needs; 4. creating aggregates in the production system; 5. regression testing when a version/patch upgrade is done; 6. creating ad hoc hierarchies.
The daily activities in production include: 1. monitoring data load failures through RSMO; 2. monitoring process chains (daily/weekly/monthly); 3. performing the hierarchy/attribute change run; 4. checking aggregate rollup.

1. How to use a Virtual Key Figure / Characteristic?

Ans: A virtual characteristic (or key figure) gets its value assigned at query runtime and must not be loaded with data into the data target; therefore no change to existing update rules is required.
The implementation can be divided into the following areas:
1. Creation of the InfoObject (key figure / characteristic) and attaching the InfoObject to the InfoProvider.
2. Implementation of the BAdI RSR_OLAP_BADI (set the filter on the InfoProvider while defining the BAdI implementation); a skeleton is sketched below.
3. Adding the InfoObject to the query.
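As an illustration, a minimal skeleton of such a BAdI implementation class might look as follows. This is a sketch, not the definitive implementation: the class name is hypothetical, the method signatures are taken over automatically from the BAdI interface IF_EX_RSR_OLAP_BADI when the implementation is created in SE19, and the actual derivation logic is only indicated in comments.

CLASS zcl_bw_virtual_char DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " BAdI interface for virtual characteristics / key figures
    INTERFACES if_ex_rsr_olap_badi.
ENDCLASS.

CLASS zcl_bw_virtual_char IMPLEMENTATION.

  METHOD if_ex_rsr_olap_badi~define.
    " Register the virtual InfoObject and the source characteristics /
    " key figures needed to derive it, using the changing parameters
    " supplied by the interface.
  ENDMETHOD.

  METHOD if_ex_rsr_olap_badi~compute.
    " Called for each data record at query runtime: read the source
    " fields from the record and assign the derived value to the
    " virtual characteristic / key figure.
  ENDMETHOD.

  " Any further methods of the interface (e.g. initialization) receive
  " empty implementations.

ENDCLASS.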
2. Query Performance Tips :
Ans :
i. Do not show too much data in the initial view of the report output
ii. Limit the level of hierarchies in the initial view
iii. Always use mandatory variables
iv. Utilize filters based on InfoProviders
v. Suppress result rows if not needed
vi. Eliminate or reduce "not" logic in the query selection
3. DataStore Objects:
Ans: In a standard DSO, a maximum of 16 key fields can be created.
4. Types of DataStore Objects:
Standard DSO
Write-optimized DSO
DSO for direct update - The DataStore object for direct update differs from the standard DataStore object in terms of how the data is processed. In a standard DataStore object, data is stored in different versions (active, delta, modified), whereas a DataStore object for direct update contains data in a single version. Therefore, data is stored in precisely the same form in which it was written to the DataStore object for direct update by the application. In the BI system, you can use a DataStore object for direct update as a data target for an analysis process.
Overview (type / structure / data supply / SID generation / example):
Standard DataStore Object - Structure: consists of three tables (activation queue, table of active data, change log). Data supply: from data transfer process. SID generation: yes. Example: operational scenario for standard DataStore objects.
Write-Optimized DataStore Object - Structure: consists of the table of active data only. Data supply: from data transfer process. SID generation: no. Example: a plausible scenario for write-optimized DataStore objects is the exclusive saving of new, unique data records, for example in the posting process for documents in retail; write-optimized DataStore objects can also be used as the EDW layer for saving data.
DataStore Object for Direct Update - Structure: consists of the table of active data only. Data supply: for APD (analysis processes).
5. Line Item Dimension: If the dimension table size (number of rows) exceeds 20% of the fact table size, then the dimension should be flagged as a line item dimension. This means that the system does not create a dimension table. Instead, the SID table of the characteristic takes the role of the dimension table. Removing the dimension table has the following advantages:
When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler. In many cases, the database optimizer can choose better execution plans. Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
Scenario: 0IS_DOCID (Document Identification) InfoObject used as a line item dimension.
6. High cardinality: This means that the dimension is to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select a dimension as having high cardinality.
7. Different Types Of InfoCubes :
Standard InfoCube (with physical data store)
VirtualProvider (without physical data store)
Based on a data transfer process and a DataSource with 3.x InfoSource: A
VirtualProvider that allows the definition of queries with direct access to transaction data in
other SAP source systems.
Based on a BAPI: A VirtualProvider whose data is not processed in the BI system, but
externally. The data is read from an external system for reporting using a BAPI.
Based on a function module: A VirtualProvider without its own physical data store in the BI
system. A user-defined function module is used as a data source.
8. Real Time Cube:
Real-time-enabled InfoCubes can be distinguished from standard InfoCubes by their ability to support
parallel write accesses, whereas standard InfoCubes are technically optimized for read accesses.
Real-time InfoCubes are used when creating planning data. The data is written to the InfoCube by
several users at the same time. Standard InfoCubes are not suitable for this. They should be used if
you only need read access (such as for reading reference data).
Real-time-enabled InfoCubes can be filled with data using two different methods: using the BW-BPS transactions for creating planning data, and using BW staging. You have the option to switch the real-time InfoCube between these two methods. From the context menu of your real-time InfoCube in the InfoProvider tree, choose Switch Real-Time InfoCube. If Real-Time InfoCube Can Be Planned, Data Loading Not Allowed is selected (the default), the cube is filled using BW-BPS functions. If you change this setting to Real-Time InfoCube Can Be Loaded with Data; Planning Not Allowed, you can then fill the cube using BW staging.
For real-time InfoCubes, a reduced read performance is compensated for by the option to read in
parallel (transactionally) and an improved write performance.
9. Remodeling:
Remodeling is a new feature available as of NetWeaver 2004s (BI 7.0) which enables you to change the structure of an InfoCube that is already loaded, without disturbing the existing data. This feature does not yet support remodeling of DSOs and InfoObjects.
Using remodeling, a characteristic can simply be deleted, or added/replaced with a constant value, with the value of another InfoObject (in the same dimension), with the value of an attribute of another InfoObject (in the same dimension), or with a value derived using a customer exit.
Similarly, a key figure can be deleted, replaced with a constant value, or a new key figure can be added and populated using a constant value or a customer exit.
This article describes how to add a new characteristic to an InfoCube using the remodeling feature and populating it using a customer exit.
Note the following before you start the remodeling process:
Back up the existing data.
During the remodeling process the InfoCube is locked for any changes or data loads, so make sure you stall all data loads for this InfoCube until the process finishes.
If you are adding or replacing a key figure, compress the cube first to avoid inconsistencies, unless all records in the InfoCube are unique.
Note the following after you finish the remodeling process and start daily loads and querying this InfoCube:
All objects dependent on the InfoCube, such as transformations and MultiProviders, will have to be reactivated.
If aggregates exist, they need to be reconstructed.
Adjust the queries based on this InfoCube to accommodate the changes made.
If a new field was added using remodeling, do not forget to map it in the transformation rules for future data loads.
The exit code is written in SE24 by creating a new class. The class must implement the interface IF_RSCNV_EXIT, and the code is written in the method IF_RSCNV_EXIT~EXIT (a skeleton is sketched below).
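A minimal sketch of such an exit class is shown below, assuming it is created in SE24 and implements the interface IF_RSCNV_EXIT named above. The class name is hypothetical, the method's parameters come from the interface definition itself, and the derivation logic is only indicated in comments.

CLASS zcl_remodel_fill_char DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Interface used by the remodeling customer exit
    INTERFACES if_rscnv_exit.
ENDCLASS.

CLASS zcl_remodel_fill_char IMPLEMENTATION.

  METHOD if_rscnv_exit~exit.
    " Called for each record during the remodeling run.
    " Read the existing fields passed in via the interface parameters
    " and return the value of the newly added characteristic
    " (for example, derive a sales region from 0PLANT).
  ENDMETHOD.

ENDCLASS.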
10. Difference between With Export and Without Export migration of a 3.x DataSource:
With Export - allows you to revert to the 3.x DataSource, transfer rules, etc. when you choose this option (recommended). Without Export - does not allow you to ever revert to the 3.x DataSource.
11. Difference between Calculated Key Figure and Formula:
The replacement of formula variables with the processing type Replacement Path acts differently in
calculated key figures and formulas:
If you use a formula variable with Replacement from the Value of an Attribute in a calculated key
figure, then the system automatically adds the drilldown according to the reference characteristic for
the attribute. The system then evaluates the variables for each characteristic value for the reference
characteristic. Afterwards, the calculated key figure is calculated and, subsequently, all of the other
operations are executed, meaning all additional, calculated key figures, aggregations, and formulas.
The system only calculates the operators, which are assembled in the calculated key figure itself,
before the aggregation using the reference characteristic.


If you use a formula variable with Replacement from the Value of an Attribute in a formula element,
then the variable is only calculated if the reference characteristic is uniquely specified in the
respective row, column, or in the filter.
12. Constant Selection :
In the Query Designer, you use selections (e.g. Characteristic restriction in Restricted Key Figure) to
determine the data you want to display at the report runtime. You can alter the selections at runtime
using navigation and filters. This allows you to further restrict the selections. The Constant Selection
function allows you to mark a selection in the Query Designer as constant. This means that navigation
and filtering have no effect on the selection at runtime.
13. Customer Exit for Query Variables :
The customer exit for variables is called at most three times; these calls are distinguished by the parameter I_STEP.
The first step (I_STEP = 1) runs before the processing of the variable pop-up and is called for every variable of processing type customer exit. You can use this step to fill your variable with default values.
The second step (I_STEP = 2) is called after the processing of the variable pop-up, but only for those variables that are not marked as ready for input and are set to mandatory variable entry.
The third step (I_STEP = 3) is called after all variable processing, and only once rather than per variable. Here you can validate the user entries.
Please note that you cannot overwrite the user's input values for a variable with this customer exit; you can only derive values for other variables or validate the user entries.
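As an illustration, a minimal sketch of such a variable exit is given below. It assumes the classic BEx customer exit EXIT_SAPLRRS0_001 with its include ZXRSRU01; the variable name ZVAR_CURMONTH, the range type RRRANGESID and the default-value logic are assumptions made for the sketch.

* Include ZXRSRU01 of customer exit EXIT_SAPLRRS0_001 (BEx variables)
DATA: l_s_range TYPE rrrangesid.          " one selection line: SIGN / OPT / LOW / HIGH

CASE i_step.
  WHEN 1.
    " Before the variable pop-up: propose default values per variable.
    CASE i_vnam.
      WHEN 'ZVAR_CURMONTH'.               " hypothetical exit variable
        CLEAR l_s_range.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        l_s_range-low  = sy-datum(6).     " current calendar month YYYYMM
        APPEND l_s_range TO e_t_range.
    ENDCASE.
  WHEN 2.
    " After the pop-up: fill variables that are mandatory but not ready
    " for input, e.g. derive them from the entries in i_t_var_range.
  WHEN 3.
    " Called once after all variable processing: validate the user
    " entries and raise an error message if the combination is invalid.
ENDCASE.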
14. How to create a generic DataSource using a function module:
A structure is created first for the extract structure, which will contain all DataSource fields. Then a function module is created by copying the template FM RSAX_BIW_GET_DATA_SIMPLE; the code is modified as per the requirement.
For delta functionality, if the base tables (from which the data will be fetched) contain date and time fields, then include a dummy field (timestamp) in the extract structure created for the FM and use this field in the code (by splitting the timestamp into date and time).
Type pools used: SBIWA, SRSC. A condensed sketch of such a function module follows.
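The following is a condensed, hedged sketch of what an extractor FM copied from RSAX_BIW_GET_DATA_SIMPLE typically looks like. The extract structure ZBW_CONTRACT, the source table VBAK and the selection field AUART are placeholders; the parameter names follow the template FM, and the exact signature should always be taken from the copied template in your own system.

FUNCTION z_bw_get_contracts.
*"  Signature copied from the template RSAX_BIW_GET_DATA_SIMPLE:
*"  IMPORTING  VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
*"             VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE  OPTIONAL
*"             VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE  OPTIONAL
*"             VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"  TABLES     I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"             I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"             E_T_DATA   STRUCTURE ZBW_CONTRACT          OPTIONAL
*"  EXCEPTIONS NO_MORE_DATA  ERROR_PASSED_TO_MESS_HANDLER

  STATICS: s_s_if    TYPE srsc_s_if_simple,     " buffered request interface
           s_counter TYPE sy-tabix,             " data package counter
           s_cursor  TYPE cursor.               " DB cursor kept across calls

  RANGES:  l_r_auart FOR vbak-auart.            " selection on contract type

  DATA:    l_s_select TYPE srsc_s_select.

  IF i_initflag = sbiwa_c_flag_on.
    " Initialization call: remember request parameters and selections.
    s_s_if-requnr  = i_requnr.
    s_s_if-dsource = i_dsource.
    s_s_if-maxsize = i_maxsize.
    APPEND LINES OF i_t_select TO s_s_if-t_select.
    APPEND LINES OF i_t_fields TO s_s_if-t_fields.
  ELSE.
    " Data calls: on the first one, build ranges and open the cursor.
    IF s_counter = 0.
      LOOP AT s_s_if-t_select INTO l_s_select WHERE fieldnm = 'AUART'.
        MOVE-CORRESPONDING l_s_select TO l_r_auart.
        APPEND l_r_auart.
      ENDLOOP.
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT vbeln auart erdat FROM vbak
               WHERE auart IN l_r_auart.
    ENDIF.

    " Fetch one data package per call; signal end of data via exception.
    FETCH NEXT CURSOR s_cursor
          APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
          PACKAGE SIZE s_s_if-maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.
    ENDIF.
    s_counter = s_counter + 1.
  ENDIF.

ENDFUNCTION.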
15. In which scenario have you used a generic DataSource?
We had a requirement to send contract (sales order) data from BI to MERICS (an external system).
The selection criteria to extract the data were:
Billing Plan Date (FPLT-AFDAT) < current month date
Billing Status (FPLT-FKSAF) = not yet processed
Contract Type (VBAK-AUART) = fixed price (ZSCC)
Item Category (VBAP-PSTYV) = ZSV2
We could have used DataSource 2LIS_11_VAITM and enhanced it with the FPLT fields, but a status change in a billing plan would not have been captured by DataSource 2LIS_11_VAITM. Therefore we created a generic DataSource using a function module.
16. Generic DataSource: Safety Interval Lower Limit and Upper Limit? What are the delta-specific fields? When to choose "New status for changed records" and when "Additive Delta"?

Safety Interval Upper Limit :


The upper limit of the safety interval contains the difference between the current highest value at the time of the delta or init-delta extraction and the data that has actually been read. If this value is initial, records that are created during extraction cannot be extracted.
This means that if your extractor takes half an hour to run, then ideally your safety upper limit should be half an hour or more; this way, records created during extraction are not missed.
For example: if you start the extraction at 12:00:00 with no safety interval and your extract runs for 15 minutes, the delta pointer will read 12:15:00 and the subsequent delta will read records created or changed on or after 12:15:00, which means all records created or changed during the extraction are skipped.
Estimate the extraction time of your DataSource and set the safety upper limit accordingly so that no records are skipped. If the DataSource uses an additive delta, however, you need to be careful not to double your records: either extract during periods of very low activity or keep the safety limit small so that data does not get duplicated.
Safety Interval Lower Limit: This field contains the value taken from the highest value of the
previous delta extraction to determine the lowest value of the time stamp for the next delta extraction.
For example: A time stamp is used to determine a delta. The extracted data is master data: The system
only transfers after-images that overwrite the status in the BW. Therefore, a record can be extracted
into the BW for such data without any problems.
Taking this into account, the current time stamp can always be used as the upper limit when
extracting: The lower limit of the next extraction is not seamlessly joined to the upper limit of the last
extraction. Instead, its value is the same as this upper limit minus a safety margin. This safety margin
needs to be big enough to contain all values in the extraction which already had a time stamp when
the last extraction was carried out but which were not read. Not surprisingly, records can be
transferred twice. However, for the reasons above, this is unavoidable.
1. If delta field is Date (Record Create Date or change date), then use Upper Limit of 1 day.
This will load Delta in BW as of yesterday. Leave Lower limit blank.
2. If delta field is Time Stamp, then use Upper Limit of equal to 1800 Seconds (30 minutes).
This will load Delta in BW as of 30 minutes old. Leave Lower limit blank.
3. If delta field is a Numeric Pointer i.e. generated record # like in GLPCA table, then use
Lower Limit. Use count 10-100. Leave upper limit blank. If value 10 is used then last 10
records will be loaded again. If a record is created when load was running, those records
may get lost. To prevent this situation, lower limit can be used to backup the starting
sequence number. This may result in some records being processed more than once;
therefore, be sure this DataSource is only feeding an ODS Object
Delta Specific Fields :
o Time Stamp - The field is a DEC15 field which always contains the time stamp of the last change
to a record in the local time format.
o Calendar Day - The field is a DATS8 field which always contains the day of the last change.
o Numeric Pointer - The field contains another numerical pointer that appears with each new
record.
Additive Delta:
The key figures for extracted data are added up in BW. DataSource with this delta type can supply
data to ODS objects and InfoCubes.
New status for changed records :

Each record to be loaded delivers the new status for the key figures and characteristics. DataSources
with this delta type can write to ODS objects or master data tables.
17. How to fill the set-up tables and the related transactions: the set-up tables are filled with the application-specific statistical setup transactions (OLI*BW, for example OLI7BW for SD sales orders), reachable via SBIW, and are deleted with transaction LBWG.
18. Maximum characteristics and key figures allowed in an InfoCube?
Max. characteristics per dimension: 248
Max. key figures per InfoCube: 233
19. Different Types of DTP :
o Standard DTP - Standard DTP is used to update data from PSA to data targets ( Info cube, DSO
etc).
o Direct Access DTP - DTP for Direct Access is the only available option for VirtualProviders.
o Error DTP - an error DTP is used to update error records from the error stack to the corresponding data targets.
20. How to Create Optimized InfoCube ?
o Define lots of small dimensions rather than a few large dimensions.
o The size of the dimension tables should account for less than 10% of the fact table.
o If the size of the dimension table amounts to more than 10% of the fact table, mark the dimension as
a line item dimension.
21. Difference between DSO and InfoCube

Use:
DSO - consolidation of data in the data warehouse layer; loading delta records that can subsequently be updated to InfoCubes or master data tables; operative analysis (when used in the operational data store).
InfoCube - aggregation and performance optimization for multidimensional reporting; analytical and strategic data analysis.

Type of data:
DSO - non-volatile data (when used in the data warehouse layer); volatile data (when used in the operational data store); transactional data, document-type data (line items).
InfoCube - non-volatile data; aggregated data, totals.

Type of data update:
DSO - overwrite (in rare cases: addition).
InfoCube - addition only.

Data structure:
DSO - flat, relational database tables with semantic key fields.
InfoCube - enhanced star schema (fact table and dimension tables).

Type of data analysis:
DSO - reporting at a high level of granularity; flat reporting with a low level of granularity; the number of query records should be strictly limited by the choice of key fields; individual document display; drill-through to document level (stored in DataStore objects) is possible using the report-report interface.
InfoCube - multidimensional data analysis (OLAP analysis); use of InfoCube aggregates.

22. Give an example where a DSO is used for addition instead of overwrite: for instance, a DSO whose key is defined above document level (for example customer and calendar month) while it receives document line items; the amount key figure is then set to addition (summation) so that the incoming line items are summed up rather than overwriting each other.
23. Difference between 3.x and 7.0
1. In Infosets now you can include InfoCube as well.
2. The Remodeling transaction helps you add new key figure and characteristics and handles historical
data as well without much hassle. This is only for info cube.
3. The BI accelerator (for now only for InfoCube) helps in reducing query run time by almost a factor
of 10 - 100. This BI accelerator is a separate box and would cost more. Vendors for these would be
HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP (Enterprise Portal) resource in your project for implementing the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and transfer/update rule routines are passé; you can still revert to the old transactions if needed.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: New type of DataStore object
Enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12 The Data Source: There is a new object concept for the Data Source. Options for direct access to
data have been enhanced. From BI, remote activation of Data Sources is possible in SAP source
systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now known formally as BI (part of NetWeaver 2004s). It implements the Enterprise
Data Warehousing (EDW). The new features/ Major differences include:
a) Renamed ODS as DataStore.
b) Inclusion of the write-optimized DataStore object, which does not have a change log and whose requests do not need any activation.
c) Unification of Transfer and Update rules
d) Introduction of "end routine" and "Expert Routine"
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Loading through the PSA has become mandatory; it can no longer be bypassed.
16. Loading through the PSA is mandatory - you cannot skip it - and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the transfer and update rules, and in the transformation we can now use start routines, expert routines and end routines during the data load.
New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as Data Transfer Process (DTP), Real time data Acquisition
(RDA).
ii. Enhanced and Graphical transformation capabilities such as Drag and Relate options.
iii. One level of transformation, which replaces the transfer rules and update rules.
iv. Performance optimization includes the new BI Accelerator feature.
v. User management (includes new concept for analysis authorizations) for more flexible BI end user
authorizations.
24. What is the Extended Star Schema?
In BW's extended star schema the fact table is surrounded by dimension tables, but master data (attributes, texts and hierarchies) is not stored inside the dimensions. The dimension tables contain only dimension IDs and SIDs; the SID tables link to the master data tables, which are held outside the InfoCube. This keeps the dimension tables small and allows master data to be shared across several InfoCubes.
25. What is Compression, Roll Up, Attribute Change Run?


Roll Up: You can automatically roll up into the aggregates those requests in the InfoCube that have green traffic light status, that is, with verified data quality. The process terminates if no active, initially filled aggregates exist in the system.
Compression: After rollup, the InfoCube content is automatically compressed. The system does this
by deleting the request IDs, which improves performance.
If aggregates exist, only requests that have already been rolled up are compressed. If no aggregates
exist, the system compresses all requests that have yet to be compressed.
First we need to do aggregate roll up before compression. When we roll up data load requests, we roll
them up into all the aggregates of the InfoCube and then carry on the compression of the cube. For
performance and disk space reasons, it is recommended to roll up a request as soon as possible and
then compress the InfoCube.
When you COMPRESS the cube, "COMPRESS AFTER ROLLUP" option ensures that all the data is
rolled up into aggregates before doing the compression.
Compression with zero elimination: zero elimination means that data rows in which all key figures are 0 are deleted during compression.
26. What is a change run? How do you resolve an attribute change run that fails because of a locking problem?
The attribute/hierarchy change run activates changed master data attributes and hierarchies and adjusts the affected aggregates accordingly; until it has run, the changes are not visible in reporting. Only one change run can be active in the system at a time, so a locking failure usually means another change run or a conflicting aggregate process (such as a rollup) is still running. Check the locks in SM12 and the running jobs, wait for or resolve the conflicting process, and then restart the change run (for example via RSA1 -> Tools -> Apply Hierarchy/Attribute Changes, or via the program RSDDS_AGGREGATES_MAINTAIN).
27. What are the errors you have faced during Transport of object?
28. What steps need to follow when a process in Process Chain fails and we need to make it green to
proceed further?
1. Right-click the failed process and go to Display Messages. From the Chain tab get the VARIANT and INSTANCE values. In some cases INSTANCE is not available; in that case we take the job count number instead.
2. Go to table RSPCPROCESSLOG. Enter the VARIANT and INSTANCE and get LOGID, TYPE, BATCHDATE and BATCHTIME.
3. Call the function module RSPC_PROCESS_FINISH (SE37), or a small wrapper program in SE38, providing LOGID, CHAIN, TYPE, VARIANT, INSTANCE, BATCHDATE, BATCHTIME and STATE = G (a sketch of the wrapper call follows).
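As a sketch, step 3 can be wrapped in a small report that calls RSPC_PROCESS_FINISH with the values read from RSPCPROCESSLOG. The parameter and type names below mirror the fields listed in the steps above and are assumptions to be verified against the function module interface in your system.

REPORT z_finish_failed_pc_step.

" Values read from RSPCPROCESSLOG for the failed step (see steps 1-2 above)
PARAMETERS: p_logid TYPE rspc_logid,
            p_chain TYPE rspc_chain,
            p_type  TYPE rspc_type,
            p_var   TYPE rspc_variant,
            p_inst  TYPE rspc_instance.

START-OF-SELECTION.
  " Set the failed process to state 'G' (green) so the chain can continue
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid    = p_logid
      i_chain    = p_chain
      i_type     = p_type
      i_variant  = p_var
      i_instance = p_inst
      i_state    = 'G'.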

29. What is Rule Group in Transformation? Give example.


A rule group is a group of transformation rules. It contains one transformation rule for each key field of
the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create
different rules for different key figures.
Few key points about Rule Groups:
o A transformation can contain multiple rule groups.
o A default rule group is created for every transformation called as Standard Group. This group
contains all the default rules.
o Standard rule group cannot be deleted. Only the additional created groups can be deleted.
Example :
Records in source system: Actual and Plan Amount are represented as separate fields.
Company Code Account Fiscal Year/Period Actual Amount Plan Amount
1000 5010180001 01/2008 100 400
1000 5010180001 02/2008 200 450
1000 5010180001 03/2008 300 500
Records in business warehouse: Single Key Figure represents both Actual and Plan Amount. They
are differentiated using the characteristic Version (Version = 010 represents Actual Amount and
Version = 020 represents Plan Amount in this example).
Company Code Account Fiscal Year/Period Version Amount
1000 5010180001 01/2008 010 100
1000 5010180001 02/2008 010 200
1000 5010180001 03/2008 010 300
1000 5010180001 01/2008 020 400
1000 5010180001 02/2008 020 450
1000 5010180001 03/2008 020 500
To achieve this, in the standard rule group we set the characteristic Version to the constant value 010 and use direct assignment from Actual Amount to the target field Amount. A second rule group (a new rule group) is then created in which we set Version to the constant value 020 and use direct assignment from Plan Amount to the target field Amount.
30. Why we cannot use DSO to load inventory data?
DataStore objects cannot contain any stock (non-cumulative) key figures (see Notes 752492 and 782314) and, among other things, they do not have a validity table, which would be necessary. Therefore DataStore objects cannot be used like non-cumulative InfoCubes, that is, they cannot calculate stocks in terms of BW technology.
31. Processing Type Replacement Path for Variable with Examples.
You use the Replacement Path to specify the value that automatically replaces the variable when you
execute the query or Web application.
The processing type Replacement Path can be used with characteristic value variables, text
variables and formula variables.
o Text and formula variables with the processing type Replacement Path are replaced by a
corresponding characteristic value.
o Characteristic value variables with the processing type Replacement Path, are replaced by the results
of a query.

Replacement with a characteristic value:


Replace Variable with:
Key - the variable value is replaced with the characteristic key.
External Characteristic Value Key - the variable value is replaced with an external value of the characteristic (external/internal conversion).
Name (Text) - the variable value is replaced with the name of the characteristic. Note that formula variables have to contain numbers in their names so that the formula variable represents a value after replacement.
Attribute Value - the variable value is replaced with the value of an attribute. An additional field appears for entering the attribute. When replacing the variable with an attribute value, you can create a reference to the characteristic for which the variable is defined: choose the attribute Reference to Characteristic (Constant 1). By choosing this attribute, you can influence the aggregation behavior of calculated key figures and obtain improved performance during the calculation.
Hierarchy Attribute - the variable value is replaced with a value of a hierarchy attribute. An additional field appears for entering the hierarchy attribute. You need this setting for sign reversal with hierarchy nodes.
o Example: Replacement with Query You want to insert the result for the query Top 5 products
as a variable in the query Sales Calendar year / month.
1. Select the characteristic Product and from the context menu, choose New Variable. The Variable
Wizard appears.
2. Enter a variable name and a description.
3. Choose the processing type Replacement Path.
4. Choose Next. You reach the Replacement Path dialog step.
5. Enter the query Top 5 Products.
6. Choose Next. You reach the Save Variable dialog step
7. Choose Exit
You are now able to insert the variable into the query Sales Calendar Year/Month. This allows you to see how the sales of these five top-selling products have developed month by month.
32. Pseudo Delta
This is different from the normal delta in a way that, if you look at the data load it would say FULL
LOAD instead of DELTA. But as a matter of fact, it is only pulling the records that are changed or
created after the previous load. This can be achieved multiple ways like logic in InfoPackage routine,
selections identifying only the changed records and so on.
In my past experience, we had a routine in the InfoPackage that looks at when the previous request was loaded, calculates the month from that date, and loads the data whose CALMONTH lies between the previously loaded date and today's date (since the data target is an ODS, even if a selection overlaps, overwriting happens and data integrity is not affected). A sketch of such a selection routine follows.
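The following is a minimal sketch of an InfoPackage data-selection routine on 0CALMONTH implementing such a pseudo delta. It assumes the ABAP routine skeleton that the InfoPackage generates (selection table L_T_RANGE with fields FIELDNAME/SIGN/OPTION/LOW/HIGH and return code P_SUBRC); the simple "previous month to current month" window is an illustrative assumption.

* InfoPackage data selection routine on 0CALMONTH (pseudo delta):
* always (re)load the previous and the current calendar month.
  DATA: l_idx       LIKE sy-tabix,
        l_prev_date TYPE sy-datum.

  " Last day of the previous month: set the day to 01, then subtract one day
  l_prev_date      = sy-datum.
  l_prev_date+6(2) = '01'.
  l_prev_date      = l_prev_date - 1.

  READ TABLE l_t_range WITH KEY fieldname = 'CALMONTH'.
  l_idx = sy-tabix.
  IF sy-subrc <> 0.
    CLEAR l_t_range.
    l_t_range-fieldname = 'CALMONTH'.
  ENDIF.

  l_t_range-sign   = 'I'.
  l_t_range-option = 'BT'.
  l_t_range-low    = l_prev_date(6).     " previous month YYYYMM
  l_t_range-high   = sy-datum(6).        " current month YYYYMM

  IF l_idx > 0.
    MODIFY l_t_range INDEX l_idx.
  ELSE.
    APPEND l_t_range.
  ENDIF.

  p_subrc = 0.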
33. Flat Aggregates
If an aggregate has fewer than 16 characteristics (including those of the time, unit and data package dimensions), the characteristic SIDs are stored as line items, meaning the E fact table of the aggregate (assuming the aggregate is compressed) has up to 16 columns containing the SIDs directly.
In this case there are no further tables, such as dimension tables, for the aggregate - hence the name "flat": the aggregate is more or less a standard table with the necessary SIDs and nothing else.
Flat aggregates can be rolled up on the database server (without loading data into the application server).
34. Master Data Load failure recovery Steps:
Issue:
A delta update for a master data DataSource was aborted. The data was sent to BW but not posted in the PSA. In addition, there are no as-yet-unexecuted LUWs in the tRFC outbound queue of the source system. Therefore there is no way of reading the data from a buffer and transferring it to the master data tables.
Solution:
Import the next PI or CRM patch into your source system and execute the RSA1BDCP report.
Alternatively, you can import the attached correction instructions into your system and create an
executable program in the customer namespace for this using transaction SE38, into which you copy
the source code of the correction instructions. Execute the report.
The report contains 3 parameters:
1. P_OS (DataSource): Name of the DataSource
2. P_RS (BIW system): logical name of the BW system
3. P_TIME (generation time stamp): the generation date and time of the first change pointer to be transferred to BW during the next upload, entered as YYYYMMDDHHMMSS (for example, 20010131193000 for January 31, 2001, 19:30:00). For this time stamp, select the time stamp of the last successful delta request of this DataSource in the corresponding BW system. After the report is executed, a dialog box appears with the number of records that should have the 'unread' status. Check the plausibility of this number of records; it should be larger than or equal to the number of records of the last, terminated request.
After you execute the report, change the status of the last (terminated) request in BW to green and request the data in delta mode.
35. What is KPI?
(1) Predefined calculations that render summarized and/or aggregated information, which is useful in
making strategic decisions.
(2) Also known as Performance Measure, Performance Metric measures. KPIs are put in place and
visible to an organization to indicate the level of progress and status of change efforts in an
organization. KPIs are industry-recognized measurements on which to base critical business
decisions.
In SAP BW, Business Content KPIs have been developed based upon input from customers, partners,
and industry experts to ensure that they reflect best practices.
36. Performance Monitoring and Analysis tools in BW:
a) System Trace: Transaction ST01 lets you do various levels of system trace such as authorization
checks, SQL traces, table/buffer trace etc. It is a general Basis tool but can be leveraged for BW.
b) Workload Analysis: You use transaction code ST03
c) Database Performance Analysis: transaction ST04 gives you all you need to know about what is happening at the database level.
d) Performance Analysis: transaction ST05 enables you to do performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.
e) BW Technical Content Analysis: SAP Standard Business Content 0BWTCT that needs to be
activated. It contains several InfoCubes, ODS Objects and MultiProviders and contains a variety of
performance related information.
f) BW Monitor: You can get to it independently of an InfoPackage by running transaction RSMO or
via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoC
information.
g) ABAP Runtime Analysis Tool: Use transaction SE30 to do a runtime analysis of a transaction,
program or function module. It is a very helpful tool if you know the program or routine that you
suspect is causing a performance bottleneck.
37. Runtime Error MESSAGE_TYPE_X when opening an info package in BW
You sometimes run into error message 'Runtime error MESSAGE_TYPE_X' when you try to open an
existing delta InfoPackage. It won't even let you create a new InfoPackage, it throws the same error.
The error occurs in the FUNCTION-POOL FORM RSM1_CHECK_FOR_DELTAUPD.
This error typically occurs when delta is not in sync between source system and BW system. It might
happen when you copy new environments or when you refresh you QA or DEV boxes from
production.
Solution: Try to open an existing full InfoPackage if you have one; you will be able to open it because no delta consistency check is performed for full InfoPackages. After you open the InfoPackage, remove the delta initialization: go to menu Scheduler -> Initialization Options for Source System, select the entry, and click the delete button.
After that you will be able to open existing delta InfoPackage. You can re initialize the delta and start
using the InfoPackage.
If you do not have an existing full InfoPackage, follow the steps in note 852443. There are many troubleshooting steps in this note; you can go through all of them, or do what I do and follow the steps below.
1. In table RSSDLINIT check for the record with the problematic DataSource.
2. Get the request number (RNR) from the record.
3. Go to RSRQ transaction and enter the RNR number and say execute, it will show you the monitor
screen of actual delta init request.
4. Now change the status of the request to red.
That's it. Now you will be able to open your delta InfoPackage and run it. Of course you need to do
your delta init again as we made last delta init red. These steps have always worked for me; follow
the steps in the OSS note if this doesn't work for you.
39. Example of Display Key Figure used in Master Data
In 0MATERIAL: The display key figures are 0HEIGHT (Height), 0LENGTH (Length), 0GROSS_WT
(Gross Weight), 0GROSS_CONT (Gross Content).
40. What all Custom reports you have created in your project?
41. InfoCube Optimization:

o When designing an InfoCube, it is most important to keep the size of each dimension table as small
as possible.
o One should also try to minimize the number of dimensions.
o Both of these objectives can usually be met by building your dimensions with characteristics that are
related to each other in a 1:1 manner (for example each state is in one country) or only have a small
number of entries.
o Generally characteristics that have a large number of entries should be in a dimension by themselves,
which is flagged as a "line item" dimension.
o Characteristics that have a "many to many" relationship to each other should not be placed in the
same dimension otherwise the dimension table could be huge.
o It is generally recommended to do this if the dimension table size (number of rows) exceeds 10% of
the fact table's size. You should also flag it as a "line item" dimension in an SAP InfoCube.
42. How do you handle an init without data transfer through a DTP?
On the Execute tab of the DTP, select the processing mode "No Data Transfer; Delta Status in Source: Fetched".

Delta Consistency Check
A write-optimized DataStore object is often used like a PSA. Data that is loaded into the DataStore
object and then retrieved from the Data Warehouse layer should be deleted after a reasonable period
of time.
If you are using the DataStore object as part of the consistency layer though, data that has already
been updated cannot be deleted. The delta consistency check in DTP delta management prevents a
request that has been retrieved with a delta from being deleted. The Delta Consistency Check
indicator in the settings for the write-optimized DataStore object is normally deactivated. If you are
using the DataStore object as part of the consistency layer, it is advisable to activate the consistency
check. When a request is being deleted, the system checks if the data has already been updated by a
delta for this DataStore object. If this is the case, the request cannot be deleted.
1. What does LO Cockpit contain?
* Maintaining Extract structure.
* Maintaining DataSource.

* Activating Updates.
* Controlling Updates.
2. Different types of Delta in LO's?
Direct Delta, Queued Delta, Serialized V3 update, Unserialized V3 Update.
Direct Delta: - With every document posted in R/3, the extraction data is transferred directly into the
BW delta queue. Each document posting with delta extraction becomes exactly one LUW in the
corresponding Delta queue.
Queued Delta: - The extraction data from the application is collected in extraction queue instead of as
update data and can be transferred to the BW delta queue by an update collection run, as in the V3
update.
3. Steps involved in LO Extraction?
* Maintain extract structures. (R/3)
* Maintain DataSource. (R/3)
* Replicate DataSource in BW.
* Assign Info Sources.
* Maintain communication structures/transfer rules.
* Maintain InfoCube & Update rules.
* Activate extract structures. (R/3)
* Delete setup tables/setup extraction. (R/3)
* InfoPackage for the Delta initialization.
* Set-up periodic V3 update. (R/3)
* InfoPackage for Delta uploads.

Drill down - Drill down goes to lower levels of a dimension as designed in the model.
Drill thru - If the columns in the query match the columns used to build the dimensions the query will
automatically apply those values to the where clause. You can write any query or queries you want to
support a drill thru from a cube. Context is important to balance the two.
Slice and dice is the term used for changing dimensions after viewing the cube: see things by location, then change to view by product. You are slicing the data into a different perspective.
