
Give details of any standard BI reports that are relevant to Purchasing.

Answer:
For any report, you can just go to table RSRREPDIR, execute it, and see the reports cube-wise/module-wise.
E.g. enter InfoCube = 0PUR* and execute; it will display all the reports for those cubes. Then take the COMPUID, go to table RSZELTTXT, enter this ID and read the description.
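This lookup can also be scripted. The following is a minimal ABAP sketch of the two table reads described above (not a definitive utility): the field names INFOCUBE, COMPID, COMPUID and OBJVERS on RSRREPDIR, and ELTUID, LANGU, OBJVERS and TXTLG on RSZELTTXT, are assumptions taken from the standard table definitions and should be verified in SE11 before use.

* Sketch: list query technical names and descriptions for purchasing cubes (0PUR*).
* Field names are assumptions - verify RSRREPDIR and RSZELTTXT in SE11 for your release.
REPORT z_list_purchasing_queries.

DATA: lt_rep TYPE STANDARD TABLE OF rsrrepdir,
      ls_rep TYPE rsrrepdir,
      ls_txt TYPE rszelttxt.

SELECT * FROM rsrrepdir INTO TABLE lt_rep
  WHERE infocube LIKE '0PUR%'        " purchasing InfoCubes
    AND objvers  = 'A'.              " active versions only

LOOP AT lt_rep INTO ls_rep.
  " read the long description of the query element via its COMPUID
  SELECT SINGLE * FROM rszelttxt INTO ls_txt
    WHERE eltuid  = ls_rep-compuid
      AND objvers = 'A'
      AND langu   = sy-langu.
  IF sy-subrc = 0.
    WRITE: / ls_rep-compid, ls_txt-txtlg.
  ELSE.
    WRITE: / ls_rep-compid, '(no description found)'.
  ENDIF.
ENDLOOP.

Running this in SE38 on the BW system with another prefix (e.g. 0SD%) lists the queries of a different application area in the same way.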
These are some of the BI standard reports for purchasing:
Contract Details: Technical Name: 0SRCT_DS1_Q003
Consolidated Purchase Order Value Analysis (Over Three Months): Technical Name: 0BBP_C01_Q029
ABC Analysis: Technical Name: 0BBP_C01_Q041
Procurement Values with/without Contracts: Technical Name: 0BBP_C01_Q020
Expiring Contracts: Technical Name: 0SRCT_DS1_Q004
Quantity Reliability: Technical Name: 0BBP_C01_Q011
Procurement Values per Service Provider: Technical Name: 0BBP_C01_Q019
Delivery Delay of Last Confirmation: Technical Name: 0BBP_C01_Q012
Procurement Card Use: Technical Name: 0BBP_C01_Q015
Procurement Values per Vendor: Technical Name: 0BBP_C01_Q005
Procurement Values per Product Category: Technical Name: 0BBP_C01_Q004
Explain the main purpose of T-code SMQ1 (qRFC queue).
SMQ1 is the t-code where you can view the delta queue for delta-enabled extractors.
SMQ1 is generally an outbound queue used to monitor the status of the logical units of work (LUWs) for the different DataSources used in BW.
The qRFC monitor is, we can say, the same as the delta queue (RSA7), but in the qRFC monitor we can't identify the current and repeat delta separately, whereas in RSA7 we can. So it is better to use RSA7 for monitoring purposes.
In RSA7 we can also view the delta queue, so what is the difference between RSA7 and SMQ1?
Any change or new posting hits the qRFC queue immediately and is reflected in that queue. To pull the data to BW we need to run the job control so that it gets collected in RSA7. The RSA7 queue holds the entries which are to be pulled to BW.
Clearing SMQ1 Queue
In a test phase, I want to clear all the entries from a queue in SMQ1; there are 450 or so LUWs. Is it necessary to delete them line by line? This is a slow process. If this is not possible through the standard transaction, maybe someone has some code to do it through the qRFC API?

If this is a test system and you are sure you want to delete all queues, why don't you do a Select All (F5) on the first screen of SMQ1 and click Delete?
The 'Select All' and 'Delete Selected' options are available in the Edit menu.
Deleting an outbound queue in SMQ1
Is it OK to delete an outbound queue in SMQ1? The situation is, we tried a delta load but aborted it since it was no longer needed. But when we check SMQ1, the queue is still there and its status is running (transaction executing). What will happen if that queue is deleted? Will it cause data loss?
Yes, you can delete the queue in SMQ1. Make sure you also delete the delta events for that object in R3AC4 so that no more deltas come to CRM.
You can delete it. You won't lose any data, because this only deletes the queue that sends data from one system to the other and does not delete data on the source side.
This shouldn't be a problem, and there won't be any data loss. However, you might see additional table entries in your CRM or ECC (depending on the destination) for the LUWs that were already processed in your delta load.
Q) Difference Between BW Technical and Functional
In general, Functional means deriving the functional specification from the business requirement document. This job is normally done either by a business analyst or by a system analyst who has very good knowledge of the business. In some large organizations there is a business analyst as well as a system analyst.
Any business requirement or need for new reports or queries originates with the business user. This requirement is recorded after discussion by the business analyst. A system analyst analyses these requirements and generates the functional specification document. In the case of BW it could also be called the logical design in data modeling.
After review, this logical design is translated into a physical design. This process defines all the required dimensions, key figures, master data, etc.
Once this process is approved and signed off by the requesters (users), it is converted into practically usable tasks using the SAP BW software. This is called Technical. The whole process of creating InfoProviders, InfoObjects, InfoSources, source systems, etc. falls under the Technical domain.
What role does a consultant play if the title is BW administrator? What are his day-to-day activities, and which are the main focus areas in which he should be proficient?
BW Administrator - the person who provides authorization access to different roles and profiles, depending upon the requirement.
For e.g. there are two groups of people: Group A and Group B.
Group A - Manager
Group B - Developer
Now the authorizations or access rights for the two groups are different. For doing this sort of activity we require an administrator.
Q) Common BW Support Project Errors
Below are some of the errors in a support project which will be a great help for new learners:
by: Anoo
1) RFC connection lost.
A) We can check it in the SM59 t-code:
RFC Destinations -> R/3 connections -> CRD client (our R/3 client) -> double-click -> Test Connection in the menu.
2) Invalid characters while loading.
A) Change them in the PSA and load them again.
3) ALEREMOTE user is locked.
A)
1) Ask your Basis team to release the user. It is mostly ALEREMOTE.
2) Password changed.
3) Number of incorrect attempts to log in to ALEREMOTE.
4) Use the SM12 t-code to find out whether there are any locks.
4) Lower case letters not allowed.
A) Uncheck the lowercase letters checkbox under the "General" tab in the InfoObject.
5) Object locked.
A) It might be locked by some other process or a user. Also check for authorizations.
6) "Non-updated IDocs found in Source System".
A) Check whether any tRFCs are stuck in the source system. You can check this in SM58. If no tRFCs are there, then it is better to trigger the load again, i.e. change the status to red, delete the bad request and run the load. Check whether the load is Delta or Full. If it is Full, just go ahead with the above step. If it is Delta, check whether the failure is on the source system side or in BW. If it is the source system, go for a repeat delta. If it is BW, then you need to reset the data mart status.
7) Extraction job aborted in R/3.
A) It might have been cancelled because it ran for longer than the expected time, or it may have been cancelled by R/3 users if it was hampering performance.
8) Repeat of last delta not possible.
A) A repeat of the last delta is not an option but a mandate in case the delta run failed. In such a case, we can't run the simple delta again. The system is going to run a repeat of the last delta, so as to collect the failed delta's data again as well as any data collected since the failure.
For a repeat of the last delta to run, the previous delta must have failed. Let's assume, in your case (I am not sure), the delta either failed or was deleted. If it was a deletion, then we need to catch hold of the request and set its status to red. This tells the system that the delta failed (although it ran successfully, you are forcing this message to the system). Now, if you run the delta InfoPackage, it will fetch the data related to the 22nd plus all the changes from then until today.
An essential point here: you should not have run any deltas after the 22nd until now. Only then is the repeat of the last delta going to work. Otherwise the only option is to run a repair full request with data selections, if we know the selection parameters.
9) DataSource not replicated.
A) Replicate the DataSource from R/3 through the source system in the AWB, assign it to the InfoSource and activate it again.
10) DataSource/transfer structure not active.
A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it.
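The function module can be run directly from SE37 (single test), or wrapped in a small report as in the rough sketch below. This is only a sketch: the import parameter names I_INFOSOURCE and I_LOGSYS are assumptions about the module's interface, so display RS_TRANSTRU_ACTIVATE_ALL in SE37 and adjust the call to the real signature before using it.

* Sketch: reactivate the transfer structure for one InfoSource / source system pair.
* Parameter names below are assumptions - check the interface in SE37 first.
REPORT z_activate_transfer_struct.

PARAMETERS: p_isrc  TYPE rsisource,   " InfoSource name
            p_logsy TYPE logsys.      " logical source system

CALL FUNCTION 'RS_TRANSTRU_ACTIVATE_ALL'
  EXPORTING
    i_infosource = p_isrc
    i_logsys     = p_logsy
  EXCEPTIONS
    OTHERS       = 1.

IF sy-subrc = 0.
  WRITE: / 'Transfer structure activated for', p_isrc.
ELSE.
  WRITE: / 'Activation failed - check the transfer rules in RSA1.'.
ENDIF.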
11) ODS activation error.
A) ODS activation errors can occur mainly due to the following reasons:
1. Invalid characters (characters like #)
2. Invalid data values for units/currencies etc.
3. Invalid values for the data types of characteristics and key figures.
4. Errors in generating SID values for some data.

Q) Tickets and Authorization in SAP Business Warehouse


What are tickets? Give an example.
The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the InfoPackages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies - Create hierarchies in RSA1 for the InfoObject.
3. Validating the data in Cubes/ODS - By using the validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube - Depends upon the requirement.
6. DataSource enhancement.
7. Create ADHOC reports - Create new reports based on the requirements of the client.
Tickets are the tracking tool by which the user tracks the work which we do. It can be a change request, a data load, or whatever. They are of types critical or moderate. Critical can mean it needs to be solved in one day or half a day, depending on the client. After solving, the ticket is closed by informing the client that the issue is solved. Tickets are raised at the time of a support project; these may be any issues, problems, etc. If the support person faces any issue, then he will ask/request the operator to raise a ticket. The operator will raise a ticket and assign it to the respective person. Critical means it is a most complicated issue; it depends on how you measure this. The concept of a ticket varies from contract to contract between companies. Generally, a ticket raised by the client can be categorized based on priority, like high priority, low priority and so on. If a ticket is of high priority it has to be resolved ASAP. If the ticket is of low priority it is considered only after attending to the high-priority tickets.
Checklists for a support project of BPS - To start the checklist:
1) InfoCubes / ODS / datatargets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major bps dev issues
14) major bps production support issues and resolution
Q) What are the tools to download tickets from the client? Are there any standard tools or does it depend upon the company or client?
Yes, there are some tools for that. We use HP OpenView; it depends on what the client uses. You are right, there are many tools available, and as you said, some clients develop their own tools using Java, ASP and other software. Some clients use just Lotus Notes. Generally 'Vantive' is used for tracking user requests and tickets.
It has a Vantive ticket ID, a field for the description of the problem, the severity for the business, the priority for the user, the group assigned, etc.
Different technical groups will have different group IDs.
The user talks to the Level 1 helpdesk and they raise a ticket.
If they can solve the issue, fine; otherwise the helpdesk assigns the ticket to the Level 2 technical group.
The ticket status keeps changing from open, working, resolved, on hold, back from hold, closed, etc. The way we handle tickets varies depending on the client. Some companies use SAP CS to handle the tickets; we have been using Vantive. The ticket is handled with a change request; when you get the ticket you will have the priority level with which it is to be handled. It comes with a ticket ID and so on. It is totally a client-specific tool. The common features here can be:
- A ticket Id,
- Priority,
- Consultant ID/Name,
- User ID/Name,
- Date of Post,
- Resolving Time etc.
Ideally there is also a knowledge repository to search for a similar problem and the solutions given if it has occurred earlier. You can also have training manuals (with screenshots) for simple transactions like viewing a query, saving a workbook etc., so that such queries can be addressed by using them.
When the problem is logged on to you as a consultant, you need to analyze the problem, check if a similar problem occurred earlier and use the ready solutions, and find out the exact server on which this has occurred, etc.
You have to solve the problem (assuming you have access to the dev system), post the solution and ask the user to test it after the preliminary testing from your side. Get it transported to production once tested and post the ticket as closed, i.e. the ticket has to be closed.
What are User Authorizations in SAP BW?
Authorizations are very important; for example, you don't want to expose the important financial report to all users. You can have authorization at the object level: if you want to keep the authorization specific to an InfoObject, you have to mark the object as authorization-relevant in the RSD1 and RSSM t-codes. Similarly, you set up the authorization for certain users by giving those users certain authorizations in the PFCG t-code. Likewise, you create a role, include the t-codes, BEx reports etc. in the role, and assign this role to the user ID.
Q) Differences Between BW and BI Versions
List the differences between BW 3.5 and BI 7.0 versions.
Major differences between SAP BW 3.5 and SAP BI 7.0:
1. In InfoSets you can now include InfoCubes as well.
2. The Remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 - 100. The BI Accelerator is a separate box and would cost more; vendors for these would be HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP person on your project for implementing the portal. :)
5. Search functionality has improved! You can search any object, unlike in 3.5.
6. Transformations are in and routines are passe! Yes, you can always revert to the old transactions too.

7. The Data Warehousing Workbench replaces the Administrator Workbench.


8. Functional enhancements have been made for the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource:
There is a new object concept for the DataSource.
Options for direct access to data have been enhanced.
From BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features / major differences include:
a) ODS renamed as DataStore.
b) Inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the "End Routine" and "Expert Routine".
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Load through the PSA has become a must. (I am not too sure about this; it looks like we would not have the option to bypass the PSA.)
16. Load through the PSA has become mandatory. You can't skip this, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules. Also, in the transformation we can now use a Start Routine, Expert Routine and End Routine during data load.
New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
ii. Enhanced and graphical transformation capabilities such as drag-and-relate options.
iii. One level of transformation. This replaces the transfer rules and update rules.
iv. Performance optimization includes the new BI Accelerator feature.
v. User management (including a new concept for analysis authorizations) for more flexible BI end-user authorizations.
Q) What Is the Difference Between an ODS & an IC?
What is the difference between an IC & an ODS? How do you load flat data into an IC and an ODS?
An ODS is a data store where you can store data at a very granular level. It has overwrite capability. The data is stored in two-dimensional tables, whereas a cube is based on multidimensional modeling, which facilitates reporting on different dimensions. The data in a cube is stored in aggregated form, unlike in an ODS, and has no overwrite capability. Reporting and analysis can be done on multiple dimensions, unlike on an ODS.
ODS objects are used to consolidate data. Normally an ODS contains very detailed data; technically there is the option to overwrite or add single records. InfoCubes are optimized for reporting. There are options to improve performance, like aggregates and compression, and it is not possible to replace single records; all records sent to an InfoCube are added up.
The most important difference between an ODS and an InfoCube is the existence of key fields in the ODS. In the ODS you can have up to 16 InfoObjects as key fields. Any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. There is the disadvantage that if you delete rows in the flat file, the rows are not deleted in the ODS.
I also use ODS objects to upload control data for update or transfer routines. You can simply do a select on the ODS table /BIC/A<ODSName>00 to get the data.
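For illustration, here is a minimal sketch of such a lookup inside a BW 3.x update routine. The ODS name ZCTRL, its active table /BIC/AZCTRL00 and the fields /BIC/ZPLANT and /BIC/ZRATE are hypothetical examples; COMM_STRUCTURE and RESULT are the standard routine parameters in that release.

* Sketch of a control-data lookup in an update routine.
* ZCTRL, /BIC/AZCTRL00 and the field names are hypothetical - replace them
* with your own control ODS and fields.
  DATA: ls_ctrl TYPE /bic/azctrl00.

  SELECT SINGLE * FROM /bic/azctrl00 INTO ls_ctrl
    WHERE /bic/zplant = comm_structure-plant.   " key field of the control ODS

  IF sy-subrc = 0.
    result = ls_ctrl-/bic/zrate.   " use the looked-up value in the routine
  ELSE.
    CLEAR result.                  " no control entry found
  ENDIF.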
An ODS is used as an intermediate storage area of operational data for the data warehouse. An ODS contains highly granular data. ODS objects are based on flat tables, resulting in simple modeling of the ODS. We can cleanse, transform, merge and sort data to build staging tables that can later be used to populate the InfoCube.
An InfoCube is a multidimensional data container used as a basis for analysis and reporting processing. The InfoCube consists of a fact table and its associated dimension tables in a star schema. The fact table appears in the middle of the graphic, surrounded by several dimension tables. The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The size of the dimension tables amounts to only 1 to 5 percent of the size of the fact table. Common dimensions are unit, time, etc. There are different types of InfoCubes in BW, such as basic InfoCubes, remote InfoCubes, etc.
An ODS is a flat data container used for reporting and data cleansing/quality assurance purposes. It is not based on a star schema and is used primarily for detail reporting rather than for dimensional analysis.
An InfoCube has a fact table, which contains its facts (key figures), and a relation to dimension tables. This means that an InfoCube consists of more than one table. These tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all data that is important for the customer.
An ODS is a flat structure. It is just one table that contains all data. Most of the time you use an ODS for line-item data. Then you aggregate this data into an InfoCube.
Q) Difference Between PSA, ALE IDoc, ODS
What is the difference between PSA and ALE IDoc? And how is data transferred using each of them?
The following update types are available in SAP BW:
1. PSA
2. ALE (data IDoc)
You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process for loading the data for both transfer methods is triggered by a request IDoc to the source system. Info IDocs are used in both transfer methods. Info IDocs are transferred exclusively using ALE.
A data IDoc consists of a control record, a data record, and a status record. The control record contains, for example, administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example, "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records). Since you are storing the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:
InfoObject/Data Target Only - This option means that the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or you have already checked this yourself and are sure that you no longer require this data, since you are not going to change the structure of the data target again.
PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes it to the PSA and at the same time starts the update into the relevant data targets. Therefore, this method has the best performance.
The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system releases a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules for the data records of the data package are applied, the data is transferred to the communication structure, and then written to the data targets. The first dialog process (data posting into the PSA) confirms in the source system that it is completed, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.
The parallelism relates to the data packages, that is, the system writes the data packages into the PSA table and into the data targets in parallel. Caution: the maximum number of processes set in the source system in Customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system. If there are not enough processes on the system side, errors occur. Therefore, this method is the least recommended.
PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow when compared to parallel data transfer, since there is only one process per data package in BW. In the BW system the maximum number of dialog processes required for each data request corresponds to the setting that you made in Customizing for the extractors in the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and also into the data targets for the first data package.
Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:
Automatic update - In order to update the data automatically in the relevant data target after all data packages are in the PSA table and updated successfully there, in the scheduler when you schedule the InfoPackage, choose Update Subsequently in Data Targets on the Processing tab page. *-- Sunil
What is the difference between PSA and ODS?
PSA: This is just an intermediate data container. It is NOT a data target. Its main purpose/use is data quality maintenance. It holds the original (unchanged) data from the source system.
ODS: This is a data target. Reporting can be done through an ODS. ODS data is overwriteable. For DataSources for which delta is not enabled, an ODS can be used to upload delta records to an InfoCube.
You can do reporting on an ODS. In the PSA you can't do reporting directly.
An ODS contains detail-level data. In the PSA the requested data is saved unchanged from the source system. Request data is stored in the transfer structure format in transparent, relational database tables in the Business Information Warehouse. The data format remains unchanged, meaning that no summarization or transformations take place.
An ODS has three tables: active data, new data (activation queue) and change log. The PSA does not have these.
Q) Daily Tasks in a Support Role and InfoPackage Failures
1. Why are there frequent load failures during extractions, and how are they analysed?
If these failures are related to data, there might be data inconsistency in the source system, even though you are handling it properly in the transfer rules. You can monitor these issues in t-code RSMO and in the PSA (failed records) and update from there.
If you are talking about the whole extraction process, there might be issues with work process scheduling and IDoc transfer from the source system to the target system. These issues can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
2. Can anyone explain briefly about 0RECORDMODE in ODS objects?
0RECORDMODE is an SAP-delivered object and is added to the ODS object on activation. Using it, the ODS is updated during delta loads. It has three possible values (X, D, R): D and R are for deleting and removing records, and X is for skipping records during the delta load.
3. What is reconciliation in BW? What is the procedure to do reconciliation?
Reconciliation is the process of comparing the data after it is transferred to the BW system with the source system. The procedure is: you can check the data with SE16 if the data comes from a particular table only; or, if the DataSource is a standard DataSource, the data comes from many tables, and in that scenario what I used to do is ask the R/3 consultant to report on those particular selections, get the data in an Excel sheet, and then reconcile it with the data in BW. If you are familiar with the reports in R/3 then you are good to go, meaning you need not be dependent on the R/3 consultant (it is better to know which reports to run to check the data).
4. What are the daily tasks we do in production support? How many times do we extract the data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. This time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules have some kind of roundabout logic and the update rules have calculations for customized key figures, longer times are to be expected.
Usually you need to work in RSMO, see which records are failing, and update from the PSA.
5. What are some of the frequent failures and errors?
As for frequent failures and errors, there is no fixed reason for a load to fail. For the interview perspective I would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections
Q) Questions and Answers on SAP BW
What is the purpose of setup tables?
Setup tables are a kind of interface between the extractor and the application tables. The LO extractor takes data from the setup tables during initialization and full upload, so hitting the application tables for selection is avoided. As these tables are required only for full and init loads, you can delete the data after loading in order to avoid duplicate data. Setup tables are filled with data from the application tables. The setup tables sit on top of the actual application tables (i.e. the OLTP tables storing transaction records). During the setup run, these setup tables are filled. Normally it is good practice to delete the existing setup tables before executing the setup runs, so as to avoid duplicate records for the same selections.
We already have a cube; what is the need for an ODS? Why use an ODS when we have a cube?
1) Remember that a cube has aggregated data and an ODS has granular data.
2) In the update rules of an InfoCube you do not have the option to overwrite, whereas for an ODS the default is overwrite.

What is the importance of transaction RSKC? How is it useful in resolving issues with special characters?
1A. RSKC: using this t-code, you can allow the BW system to accept special characters in the data coming from source systems. The list of characters can be obtained after analyzing the source system's data, or it can be confirmed with the client during the design specs stage.
How do you handle double data loading in SAP BW?
What do you mean by SAP exit, User exit, Customer exit?
2A. These exits are customized for handling data transfer in various scenarios (e.g. a replacement path in reports is a way to pass a variable to a BW report). Some can be developed by a BW/ABAP developer and inserted wherever required. Some of these programs are already available and are part of SAP Business Content; these are called SAP exits. Depending on the requirement, we need to extend some exits and customize them.
What are some of the production support issues - a troubleshooting guide?
3A. Production issues are different for each BW project, and the most common issues can be obtained from some of the previous mails (data load issues).
When do we go for Business Content extraction and when do we go for LO/COPA extraction?
What are some of the InfoCube names in SD and MM that we use for extraction and loading into BW?
How do you create indexes on ODS and fact tables?
What is the data load monitor (RSMO or RSMON)?
4A. LIS extraction is the old-school approach and is not preferred on big BW systems; here you can expect issues related to performance and data duplication in the setup tables. LO extraction came with most of the advantages, and using it you can extend existing extract structures and use customized DataSources. If you can fetch all required data elements using the SAP-provided extract structures, you don't need to write custom extractions. You can get a clear idea on this after analyzing the source system's data fields and the required fields in the target system's data target structure.
5A. MM - 0PUR_C01 (Purchasing Data), 0PUR_C03 (Vendor Evaluation)
SD - 0SD_C01 (Customer), 0SD_C03 (Sales Overview), etc.
6A. You can do this by choosing the "Manage Data Target" option and clicking the buttons available on the "Performance" tab.
7A. RSMO is used to monitor the data flow from the source system to the target system. You can see data by request, source system, time, request ID, etc.
What is a KPI?
KPIs are Key Performance Indicators.
These are values companies use to manage their business, e.g. net profit.
In detail:
Stands for Key Performance Indicators. A KPI is used to measure how well an orga
nization or individual is accomplishing its goals and objectives. Organizations
and businesses typically outline a number of KPIs to evaluate progress made in a
reas where performance is harder to measure.
For example, job performance, consumer satisfaction and public reputation can be
determined using a set of defined KPIs. Additionally, KPI can be used to specif
y objective organizational and individual goals such as sales, earnings, profits
, market share and similar objectives.
KPIs selected must reflect the organization's goals, they must be key to its suc

cess, and they must be measurable. Key performance indicators usually are long-t
erm considerations for an organization
Q) Business Warehouse SAP Interview
1. How do you convert a BEx query global structure to a local structure (steps involved)?
To convert a BEx query global structure to a local structure - steps:
You use a local structure when you want to add structure elements that are unique to the specific query. Changing the global structure changes the structure for all the queries that use the global structure. That is the reason you go for a local structure.
Coming to the navigation part: in the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder). On the SAP BEx Open dialog box, choose Queries. Select the desired InfoCube and choose New. On the Define the Query screen, in the left frame, expand the Structure node. Drag and drop the desired structure into either the Rows or Columns frame. Select the global structure, right-click and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this regard. You will have to delete the local structure and then drag and drop the global structure into the query definition again.
When you try to save a global structure, a dialogue box prompts you to confirm changes to all queries; that is how you identify a global structure.
2. I have an RKF and a CKF in a query; if the report is giving an error, which one should be checked first, the RKF or the CKF, and why? (This was asked in one interview.)
An RKF consists of a key figure restricted with certain characteristic combinations; CKFs contain calculations which use various key figures. They are not interdependent on each other; you can have both at the same time.
To my knowledge there is no documented limit on the number of RKFs and CKFs, but the only concern would be performance. Restricted and calculated key figures themselves would not be an issue; however, the number of key figures that you can have in a cube is limited to around 248.
Restricted key figures restrict the key figure values based on a characteristic (remember, they don't restrict the query, only the key figure values).
Ex: you can restrict the values based on a particular month.
Now I create an RKF like this (ZRKF): restrict a funds key figure with the period variable entered by the user. This is defined globally and can be used in any of the queries on that InfoProvider. In the columns, let's assume there are three company codes. In a new selection, I drag in ZRKF and Company Code 1; similarly I do the same for the other company codes. This means I have created the RKF once and am using it in different ways in different columns (restricting with other characteristics too).
In the properties I give the relevant currency to be converted, which will be displayed after converting the value from the native currency to the target currency. Similarly for the other two columns with the remaining company codes.
3. What is the use of Define Cell in BEx and where is it useful?
Cells in BEx - use:
When you define selection criteria and formulas for structural components and there are two structural components of a query, generic cell definitions are created at the intersection of the structural components that determine the values to be presented in the cell.
Cell-specific definitions allow you to define explicit formulas, along with implicit cell definitions, and selection conditions for cells, and in this way to override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.
You need two structures to enable the cell editor in BEx. In every query you have one structure for key figures; then you have to create another structure with selections or formulas inside. Having two structures, the cross between them results in a fixed reporting area of n rows * m columns. The cross of any row with any column can be defined as a formula in the cell editor.
This is useful when you want a particular cell to have a different behaviour from the general one described in your query definition.
For example, imagine you have the following, where % is the formula kfB/kfA * 100:
      kfA   kfB   %
chA     6     4   66%
chB    10     2   20%
chC     8     4   50%
Then you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it: chC/% = chA/% + chB/%. Then:
      kfA   kfB   %
chA     6     4   66%
chB    10     2   20%
chC     8     4   86%
Q) SAP BW Interview Questions 2
1) What is a process chain? How many types are there? How many do we use in real-time scenarios? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, master data and ODS activation, with the best possible performance and data integrity?
2) What is data integrity and how can we achieve it?
3) What is index maintenance and what is the purpose of using it in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling and what does the consultant do in data modelling?
6) How can we enhance Business Content and for what purpose do we enhance Business Content (given that we can activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose do we do tuning in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or anything else?
8) What is meant by a MultiProvider and for what purpose do we use a MultiProvider?
9) What are scheduled and monitored data loads and for what purpose?
Ans # 1:
Process chains exist in the Administrator Workbench. Using them we can automate ETL processes; they allow BW people to schedule all activities and monitor them (t-code: RSPC).
PROCESS CHAIN - Before defining a PROCESS CHAIN, let us define a PROCESS within any given process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
A PROCESS CHAIN is a set of such processes that are linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
This is normally done in order to automate a job or task that has to execute more than one process in order to complete the job or task.
1. Check the Source System for that particular PC.
2. Select the request ID (it will be in Header Tab) of PC
3. Go to SM37 of Source System.
4. Double Click on the Job.
5. You will navigate to a screen
6. In that Click "Job Details" button
7. A small Pop-up Window comes

8. In the Pop-up screen, take a note of


a) Executing Server
b) WP Number/PID
9. Open a new SM37 (/OSM37) command
10. In it, click the "Application Servers" button.
11. You can see the different application servers.
12. Go to the executing server (point 8 (a)) and double-click.
13. Go to the PID (point 8 (b)).
14. On the left-most side you can see a checkbox.
15. Check the checkbox.
16. On the menu bar you can see "Process".
17. Under "Process" you have the option "Cancel with Core".
18. Click on that option.
Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4:
InfoCube compression creates a new cube by eliminating duplicates. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can't alter the InfoCube. You are safe as long as you don't have any errors in the modeling.
This compression can be done through a process chain and also manually.
Tips by: Anand
Ans # 3:
Indexing is a process where the data is stored by indexing it. E.g. a phone book: when we write somebody's number, Prasad's number goes under "P" and Rajesh's number goes under "R". The phone book process is indexing; similarly, storing data by creating indexes is called indexing.
Ans # 5:
Data modeling is a process where you collect the facts, the attributes associated with the facts, the navigational attributes etc., and after you collect all these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders etc. It is generally done by the team lead, project manager or sometimes a senior consultant (4-5 years of experience). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modeling before attending any interview or even starting to work.
Ans # 6:
We can enhance Business Content by adding fields to it. Since Business Content is delivered by SAP, it may not contain all the InfoObjects, InfoCubes etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC) but your company uses an attribute for, say, apartment number; then instead of constructing the whole InfoCube you can add the above field to the existing BC InfoCube and get going.
Ans # 7:
Tuning is the most important process in BW. Tuning is done to increase efficiency: that means lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drill-down, etc. Fine tuning = lowering time (for everything possible). Tuning can be done by many things, not only by partitions and aggregates; there are various other things you can do, for e.g. compression, etc.
Ans # 8:
A MultiProvider can combine various InfoProviders for reporting purposes. For example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or an InfoCube, ODS and master data, etc. You can refer to help.sap.com for more info.
Ans # 9:
Scheduled data load means you have scheduled the loading of data for some particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, by using transaction RSMON.

Q) What is an ODS?
It is an operational data store. An ODS is a BW architectural component that appears between the PSA (Persistent Staging Area) and InfoCubes and that allows BEx (Business Explorer) reporting. It is not based on the star schema and is used primarily for detail reporting, rather than for dimensional analysis. ODS objects do not aggregate data as InfoCubes do. Data is loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value.
*-- Viji
1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load (as opposed to extract, as in the previous question) 1 million records into an InfoCube?
3. What are the four ASAP methodologies?
4. How do you measure the size of an InfoCube?
5. Difference between an InfoCube and an ODS?
6. Difference between display attributes and navigational attributes? *-- Kiran
1. Ans: This depends; if you have complex coding in the update rules it will take longer, otherwise it will take less than 30 minutes.
3. Ans:
Project plan
Requirements gathering
Gap analysis
Project realization
4. Ans:
In number of records.
5. Ans:
An InfoCube is structured as a star schema (extended) where a fact table is surrounded by different dimension tables which connect to SIDs. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (a flat table) with no star schema concept and it will have granular data (detailed level).
6. Ans:
A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.
*-- Ravi
Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
Ans: But how is that possible? If you load it manually twice, then you can delete it by request.
Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.
Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.
Q4. BRIEF THE DATA FLOW IN BW.
Data flows from the transactional system to the analytical system (BW). The DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.
Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.
Q7. AS WE USE Sbwnn,SBiw1,sbiw2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We will have DataSources and they can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
Q8. SIGNIFICANCE OF ODS.
It holds granular data.
Q9. WHERE THE PSA DATA IS STORED?

In PSA table.
Q10.WHAT IS DATA SIZE?
The volume of data one data target holds(in no.of records)
Q11. DIFFERENT TYPES OF INFOCUBES.
Basic,Virtual(remote,sap remote and multi)
Q12. INFOSET QUERY.
Can be made of ODSs and objects
Q13. IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW. 2 in R/3 and 2 in BW
Q14. ROUTINES?
Exist In the info object,transfer routines,update routines and start routine
Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns,you can create structures.
Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization
Q17. HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level you want using Nav attributes and jump targets
Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.
Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
Nope.
Q21. WHAT IS THE SIGNIFICANCE OF KPIs?
KPIs indicate the performance of a company. These are key figures.
Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
After image (correct me if I am wrong).
Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.
Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST* transactions, number ranges, deleting indexes before load, etc.
Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA DAILY?
There should be some tool to run the job daily (SM37 jobs).
Q26. AUTHORIZATIONS.
Profile Generator.
Q27. WEB REPORTING.
What are you expecting?
Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course.
Q29. PROCEDURES FOR REPORTING ON MULTICUBES.
Refer to help. What are you expecting? A MultiCube works on a union condition.
Q30. EXPLAIN TRANSPORTATION OF OBJECTS.
Dev ---> Q and Dev ---> P
Q) BW Query Performance
Question:
1. What kind of tools are available to monitor the overall Query Performance?
Answers:
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT
Question:
2. Do I have to do something to enable such tools?
Answer:
o Yes, you need to turn on the BW Statistics:

RSA1, choose Tools -> BW statistics for InfoCubes


(Choose OLAP and WHM for your relevant Cubes)
Question:
3. What kind of tools are available to analyse a specific query in detail?
Answers:
o Transaction RSRT
o Transaction RSRTRACE
Question:
4. Do I have a overall query performance problem?
Answers:
o Use ST03N -> BW System load values to recognize the problem. Use the
number given in table 'Reporting - InfoCubes:Share of total time (s)'
to check if one of the columns %OLAP, %DB, %Frontend shows a high
number in all InfoCubes.
o You need to run ST03N in expert mode to get these values
Question:
5. What can I do if the database proportion is high for all queries?
Answers:
Check:
o If the database statistic strategy is set up properly for your DB platform
(above all for the BW specific tables)
o If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
o If Buffers, I/O, CPU, memory on the database server are exhausted?
o If Cube compression is used regularly
o If Database partitioning is used (not available on all DB platforms)
Question:
6. What can I do if the OLAP proportion is high for all queries?
Answers:
Check:
o If the CPUs on the application server are exhausted
o If the SAP R/3 memory set up is done properly (use TX ST02 to find
bottlenecks)
o If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,
Customizing default)
Question:
7. What can I do if the client proportion is high for all queries?
Answer:
o Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
Question:
8. Where can I get specific runtime information for one query?
Answers:
o Again you can use ST03N -> BW System Load
o Depending on the time frame you select, you get historical data or
current data.
o To get to a specific query you need to drill down using the InfoCube
name
o Use Aggregation Query to get more runtime information about a
single query. Use tab All data to get to the details.
(DB, OLAP, and Frontend time, plus Select/ Transferred records,
plus number of cells and formats)
Question:
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
Answers:
(Use Details to get the runtime segments)
o High Database Runtime
o High OLAP Runtime

o High Frontend Runtime


Question:
10. What can I do if a query has a high database runtime?
Answers:
o Check if an aggregate is suitable (use All data to get values
"selected records to transferred records", a high number here would
be an indicator for query performance improvement using an aggregate)
o Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
o Check if the read mode of the query is unfavourable - Recommended (H)
Question:
11. What can I do if a query has a high OLAP runtime?
Answers:
o Check if a high number of Cells transferred to the OLAP (use
"All data" to get value "No. of Cells")
o Use RSRT technical Information to check if any extra OLAP-processing
is necessary (Stock Query, Exception Aggregation, Calc. before
Aggregation, Virtual Char. Key Figures, Attributes in Calculated
Key Figs, Time-dependent Currency Translation)
together with a high number of records transferred.
o Check if a user exit Usage is involved in the OLAP runtime?
o Check if large hierarchies are used and the entry hierarchy level is
as deep as possible. This limits the levels of the
hierarchy that must be processed. Use SE16 on the inclusion
tables and use the List of Value feature on the column successor
and predecessor to see which entry level of the hierarchy is used.
- Check if a proper index on the inclusion table exist
Question:
12. What can I do if a query has a high frontend runtime?
Answers:
o Check if a very high number of cells and formattings are transferred
to the Frontend ( use "All data" to get value "No. of Cells") which
cause high network and frontend (processing) runtime.
o Check if frontend PC are within the recommendation (RAM, CPU Mhz)
o Check if the bandwidth for WAN connection is sufficient
Q) The Three Layers of SAP BW
SAP BW has three layers:
Business Explorer: As the top layer in the SAP BW architecture, the Business Exp
lorer (BEx) serves as the reporting environment (presentation and analysis) for
end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx Map fo
r analysis and reporting activities.
Business Information Warehouse Server: The SAP BW server, as the middle layer, h
as two primary roles:
Data warehouse management and administration: These tasks are handled by the pro
duction data extractor (a set of programs for the extraction of data from R/3 OL
TP applications such as logistics, and controlling), the staging engine, and the
Administrator Workbench.
Data storage and representation: These tasks are handled by the InfoCubes in con
junction with the data manager, Metadata repository, and Operational Data Store
(ODS).
Source Systems: The source systems, as the bottom layer, serve as the data sourc
es for raw business data. SAP BW supports various data sources:

R/3 Systems as of Release 3.1H (with Business Content) and R/3 Systems prior to
Release 3.1H (SAP BW regards them as external systems)
Non-SAP systems or external systems
mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM, or R/3 components
) or another SAP BW system.
Q) What Is SPRO In BW Project?
1) What is SPRO?
2) How is it used in a BW project?
3) What is the difference between the IDoc and PSA transfer methods?
1. SPRO is the transaction code for the Implementation Guide, where you can make configuration settings.
* Type SPRO in the transaction box and you will get the screen Customizing: Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings: SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.
2. SPRO is used to configure the following settings:
* General settings like printer settings, fiscal year settings, ODS object settings, authorization settings, settings for displaying SAP documents, etc.
* Links to other systems: like links between flat files and BW systems, R/3 and BW, and other data sources, the link between the BW system and Microsoft Analysis Services, Crystal Enterprise, etc.
* UD Connect settings: like configuring the BI Java Connectors, establishing the RFC destination from SAP BW to the J2EE Engine, and installation of availability monitoring for UD Connect.
* Automated processes: like settings for batch processes, background processes, etc.
* Transport settings: like settings for source system name change after transport, and creating a destination for import post-processing.
* Reporting-relevant settings: like BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.
3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed requests in the format of the transfer structure. It is defined according to the DataSource and source system, and is source system dependent.
IDocs (Intermediate Documents): data structures used as API working storage for applications which need to move data into or out of SAP systems.
Q) What is the difference between data validation and data reconciliation?
By: Anuradha
Data validation:
Validation allows solid data entry based on special rules. According to previously defined rules, the system can evaluate an entry, and a message can appear on the user's terminal if a check statement is not met. A validation step contains a prerequisite statement and a check statement. Both of them are defined using Boolean logic or by calling an ABAP/4 form.
Data reconciliation:
Reconciliation is the process of comparing the data after it is transferred to the BW system with the source system. The procedure is: you can check the data with SE16 if the data comes from a particular table only; or, if the DataSource is a standard DataSource, the data comes from many tables, and in that scenario what I used to do is ask the R/3 consultant to report on those particular selections, get the data in an Excel sheet, and then reconcile it with the data in BW. If you are familiar with the reports in R/3 then you are good to go, meaning you need not be dependent on the R/3 consultant (it is better to know which reports to run to check the data).
How to do reconciliation?
There are two ways to do reconciliation:
1) Create a basic cube and load the data from the source system. In the same way create another cube of type virtual cube. After creating these two cubes, create one MultiProvider using the basic cube and the virtual cube, and in the identification of the MultiProvider select both cubes. Then go to reporting, create the query and write a formula to compare the values of the two cubes.
2) Display the contents of the basic cube in BW. On that screen there is a "SAVE AS" button; click it, select "Spreadsheet" and save as .xls. On the source system side, go to t-code RSA3, select the DataSource which you assigned to the basic cube, click Execute and see the contents.
Now again select the "SAVE AS" button, select spreadsheet and save as an .xls file. Now your two flat files are ready. Move one file into the other by using "Move or Copy", so that the two flat files are in one Excel workbook in different sheets. Then write a formula to compare the values of sheet 1 and sheet 2, in either sheet 1 or sheet 2.
Q) How can we compare the R/3 data with B/W after the data loading. What are th
e procedures to be followed?
Data validation Steps:
Following step-by-step solutions are only an example.
1. Run transaction SE11 in the R/3 system to create a view based on the tables COEP and COBK.
These two tables are the source of extractor 0CO_OM_CCA_9 (CO costs at line item level).
2. Define selection conditions.
Only CO objects with prefix KS or KL should be selected, because only these objects are relevant for the extraction and for the reconciliation.
(In the CO object number, KS identifies cost center objects and KL identifies cost center/activity type objects.)
3. Setting the Maintenance Status . Status Display/Maintenance Allowed allows you to
display and edit this view.
4. Create a DataSource in transaction RSO2.
Assign the DataSource to the appropriate application component.
The view, which is created by following the steps above, should be used in this
field.
Click the Save button to save this DataSource.
You will get a pop-up for the development class.
For testing purposes you can save this DataSource as a local object. If you want
to transport this DataSource into any other systems it should be saved with the
appropriate development class.
5. Replicate this new DataSource ZCOVA_DS1 to BI and create the InfoSource / transfer rules in the BI system.
6. Because the value of InfoObject 0COSTCENTER is determined inside extractor 0CO_OM_CCA_9 and this logic cannot be reproduced by the view ZCORECONCILIATION, this InfoObject has to be determined in the transfer rule using the formula (see the routine sketches after these steps):
0costcenter = substring (object number, 6, 10).
7. InfoObject 0fiscvarnt can be assigned to a constant for testing purposes.
In this example we assume that K4 is the fiscal year variant for the company.
You can also determine the value of InfoObject 0fiscvarnt by reading the attribute
value of InfoObject 0COMP_CODE which is available in the transfer structure.
8. In ZCOVA_DS1 the InfoObject 0FISCPER (fiscal period) can be added to the InfoSource to make the comparison easier. This InfoObject can be determined in the transfer rule using the formula (the original text breaks off here; a plausible completion is sketched below):
0FISCPER = CONCATENATE
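For illustration only, minimal sketches of the routines behind steps 6 and 8 could look as follows. The source structure name (COMM_STRUCTURE, as used in the 0FISCPER routines later in this document; in a transfer rule it is typically TRAN_STRUCTURE), the field name OBJNR and the fiscal year variant K4 are assumptions to be adapted to your own transfer structure.
Sketch for step 6 (0COSTCENTER = characters 7 to 16 of the CO object number, i.e. offset 6, length 10):
* the source field name OBJNR is an assumption
RESULT = COMM_STRUCTURE-objnr+6(10).
Sketch for step 8 (build 0FISCPER in the form YYYYPPP from 0CALMONTH in the form YYYYMM, assuming variant K4 where the period equals the calendar month):
data: l_year(4)  type c,
      l_month(2) type c.
l_year  = COMM_STRUCTURE-calmonth(4).      " YYYY
l_month = COMM_STRUCTURE-calmonth+4(2).    " MM
* e.g. year 2011 and month 05 give 2011005
CONCATENATE l_year '0' l_month INTO RESULT.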
Q) What are all the differences between RSA5 and RSA6?
RSA5 - Contains all the Business Content DataSources in the Delivered version.
RSA6 - After activation from RSA5, the delivered objects appear in RSA6 in the Active version.

T-codes used in extraction:
RSO2 -> Generic DataSource
SE11 -> Database dictionary
SE37 -> Function module
LBWE -> Logistics extraction (LO cockpit)
LBWG -> Deletion of setup table data
RSA5 -> Transfer Business Content DataSources
** Makes these DataSources available to the BW side for extracting data.
RSA6 -> DataSource enhancement
** Enhancement of a DataSource to include extra fields; editing, displaying and test extraction of a DataSource (RSA3) are also available from RSA6.
In transaction RSA5 you will get the DataSources in their Delivered State wherea
s in Transaction RSA6 (Post Process DataSources and Hierarchy) you can view the
Activated DataSources in their activated state which is being done in RSA5 only.
Elaborating the main point.
RSA5 - Transaction from which business content data sources delivered by SAP can
be activated/installed for productive use with live data.
RSA6 - Transaction to maintain the currently active data sources in your system.
Here you would find custom and SAP delivered installed datasources. You could
branch to changing the data source from here.
Now, in RSA6 you can see not only the SAP DataSources currently active in the system, but also the custom DataSources (Y* or Z*) that you have created and activated in the system.
So, in a nutshell, RSA5 shows all the DELIVERED DataSources in the system and RSA6 shows all the ACTIVE DataSources available for use.
RSA5 ->
In the BW system, we call transaction RSA5 Install Data Sources from Business Co
ntent to install the DataSources from the application components.
We have to install business content using RSA5 before we can use it in SAP R/3.
By means of installing business content (BC) we are changing version of BC compo
nent from delivered "D" to active. No modifications to the datasource are possib
le here. After installing only, we can use the datasources in LBWE.
RSA6 ->
Once you activate the datasource in RSA5, it will be available in RSA6.
RSA6 lists the active DataSources. As you can see in the menu, the available functions are: create application component, display/change DataSource, test extraction (similar to RSA3), and enhance DataSource. Here the user can modify DataSources: you can append fields, hide fields and make fields selection-enabled.
The function is the same in R/3 and BW.
Q) Data load in SAP BW
What is the strategy to load for example 500,000 entries in BW (material master,
transactional data)?
How can these entries be split into small packages and transferred to BW automatically?
Is there a strategy for that?
Is there some configuration for that?
See OSS note 411464 (an example concerning info structures from purchasing documents) on creating smaller jobs in order to integrate a large amount of data.
For example, if you wish to split your 500,000 entries into five intervals (a job-scheduling sketch follows the list):
- Create 5 variants in RMCENEAU for each interval
- Create 5 jobs (SM36) that execute RMCENEAU for each variant
- Schedule your jobs
- You can then see the result in RSA3
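The job creation in SM36 can also be scripted. The following is a minimal sketch only: the job name and the variant name Z_INT1 are assumptions, and the report name RMCENEAU is taken from the answer above; one such job would be created per interval variant.
data: lv_jobname  type tbtcjob-jobname value 'Z_SETUP_INTERVAL_1',
      lv_jobcount type tbtcjob-jobcount.
* open a background job
call function 'JOB_OPEN'
  exporting
    jobname  = lv_jobname
  importing
    jobcount = lv_jobcount.
* run the setup report with the variant for this interval as a job step
submit rmceneau using selection-set 'Z_INT1'
       via job lv_jobname number lv_jobcount
       and return.
* release the job for immediate start
call function 'JOB_CLOSE'
  exporting
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.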
Loading Data From a Data Target
Can you please guide me through this activity with some important steps?
I have a few requests without the data mart status. How can I use only those and create an export DataSource?
Can you please tell me how my data mart mechanism will work after the loading?
Follow these steps:
1. Select the source data target (in your case X); in the context menu click on Generate Export DataSource.
A DataSource (InfoSource) with the name 8<name of data target> will be generated.
2. In the Modelling menu click on Source Systems, select the logical source system of your BW server, and in the context menu click on Replicate DataSources.
3. In Data Modelling click on InfoSources and search for the InfoSource 8<name of data target>. If it is not found in the search, refresh. If it is still not found, then from Data Modelling click on InfoSources, in the right-hand window again select InfoSources, and in the context menu click on Insert Lost Nodes.
Now search again and you will definitely find it.
4. Now go to the receiving data targets (in your case Y1, Y2, Y3) and create update rules. In the next screen select the InfoCube radio button and enter the name of the source data target (in your case X). Click the Next Screen button (Shift+F7), select the Addition radio button, then the Source Key Field radio button, and map the key fields from the source cube to the target cube.
5. In Data Modelling click on InfoSources, select the InfoSource which you replicated earlier, and create an InfoPackage to load the data.
Q) SAP R/3 BW Source and SID Table
R/3 Source Table.field - How To Find?
What is the quickest way to find the R/3 source table and field name for a field
appearing on the BW InfoSource?
By: Sahil
With some ABAP-knowledge you can find some info:
1, Start ST05 (SQL-trace) in R/3
2, Start RSA3 in R/3 just for some records
3, After RSA3 finishes, stop SQL-trace in ST05
4, Analyze SQL-statements in ST05
You can find the tables - but this process doesn't help e.g for the LO-cockpit d
atasources.
Explain SID tables.
A basic cube consists of a fact table surrounded by dimension tables. SID tables link these dimension tables to the master data tables.
SID stands for surrogate ID, generated by the system. The SID tables are created when we create a master data InfoObject. In the SAP BW star schema, a distinction is made between two self-contained areas: the InfoCube, and the master data tables/SID tables.
The master data does not reside in the star schema but in separate tables which are shared across all star schemas in SAP BW. A numeric ID is generated which connects the dimension tables of the InfoCube to the master data tables.
The dimension tables contain the DIM ID and the SIDs of the participating InfoObjects. Using the SID, the attributes and texts of a master data InfoObject are accessed.
The SID table is connected to the associated master data tables via the characteristic key.
SID tables are like pointers in C.
The details of the tables in BW (table prefix - description):
M - View of master data table
Q - Time-dependent master data table
H - Hierarchy table
K - Hierarchy SID table
I - SID hierarchy structure
J - Hierarchy interval table
S - SID table
Y - Time-dependent SID table
T - Text table
F - Fact table - uncompressed cube data (B-tree index)
E - Fact table - compressed cube data (bitmap index)
Q) Explain the what is primary and secondary index.
When you activate an object, say an ODS/DSO, the system automatically generates an index based on the key fields; this is the primary index.
In addition, if you wish to create more indexes, they are called secondary indexes.
The primary index is distinguished from the secondary indexes of a table. The pr
imary index contains the key fields of the table and a pointer to the non-key fi
elds of the table. The primary index is created automatically when the table is
created in the database.
You can also create further indexes on a table. These are called secondary index
es. This is necessary if the table is frequently accessed in a way that does not
take advantage of the sorting of the primary index for the access. Different in
dexes on the same table are distinguished with a three-place index identifier.
Let's say you have an ODS and the primary key is defined as Document Number and Cal_day. These two fields ensure that the records are unique, but let's say you frequently want to run queries where you select data based on Business Area and Document Type. In this case, we could create a secondary index on Bus Area, Doc Type. Then, when the query runs, instead of having to read every record, the database can use the index to select only the records that contain the Bus Area and Doc Type values you are looking for (see the selection sketch after this answer).
Just because you have a secondary index, however, does not mean it will or should be used. This gets into the cardinality of the fields you are thinking about indexing. For most databases, an index must be fairly selective to be of any value. That is, given the values you provide in a query for Bus Area and Doc Type, if they retrieve a very small percentage of the rows from the table, the database probably should use the index; but if it would result in retrieving, say, 40% of the rows, it is almost always better to just read the entire table.
Having current database statistics and possibly histograms can be very important as well. The database statistics hold information on how many distinct values a field has, e.g. how many distinct values of Business Area there are, and how many document types.
Secondary indexes are usually added to an ODS (which you can do in the Administrator Workbench) based on your most frequently used queries. Secondary indexes might also be added to selected dimension and master data tables, but that usually requires a DBA, or someone with similar privileges, to create in BW.
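To make the point concrete, here is a minimal sketch of the kind of read such an index supports. The DSO name ZSALES, its active table /BIC/AZSALES00 and the field names BUS_AREA and DOC_TYPE are hypothetical; a secondary index on (BUS_AREA, DOC_TYPE) lets the database satisfy this WHERE clause without scanning the whole active table.
data: lt_sales type standard table of /bic/azsales00.
* read by non-key fields: a candidate for a secondary index
select * from /bic/azsales00
  into table lt_sales
  where bus_area = '1000'
    and doc_type = 'TA'.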
Q) Types of Update Methods
What are these update methods and which one has to use at what purpose.
R/3 update methods:
1. Serialized V3 Update
2. Direct Delta
3. Queued Delta
4. Unserialized V3 Update
By: Anoo
a) Serialized V3 Update

This is the conventional update method in which the document data is collected i
n the sequence of attachment and transferred to BW by batch job.The sequence of
the transfer does not always match the sequence in which the data was created.
b) Queued Delta
In this mode, extraction data is collected from document postings in an extracti
on queue from which the data is transferred into the BW delta queue using a peri
odic collective run. The transfer sequence is the same as the sequence in which
the data was created
c) Direct Delta
When a document is posted, it is saved to the application table and also written directly to RSA7 (the delta queue); from there it is moved to BW.
So you can see that for the delta flow from R/3, the delta queue is the exit point.
d) Queued Delta
When a document is posted, it is saved to the application table and also to the extraction queue (this is the difference from direct delta); you then have to schedule a V3 job to move the data to the delta queue periodically, and from there it is moved to BW.
e) Unserialized V3 Update
This method is largely identical to the serialized V3 update. The difference lie
s in the fact that the sequence of document data in the BW delta queue does not
have to agree with the posting sequence. It is recommended only when the sequenc
e that data is transferred into BW does not matter (due to the design of the dat
a targets in BW).
You can use it for Inventory Management, because once a Material Document is cre
ated, it is not edited. The sequence of records matters when a document can be e
dited multiple times. But again, if you are using an ODS in your inventory desig
n, you should switch to the serialized V3 update.
Q) Deltas Not Working for Installation Master Data
I am having trouble with the deltas for master data object "installation". The c
hanges are clearly recorded in the time dependent and time independent tables, E
ANL/EANLH. The delta update mode is using ALE pointers, does anyone know of a ta
ble where I can go check where these deltas/changes are temporarily stored, or w
hat's the process behind this type of delta?
The following steps must be executed:
1. Check, whether the ALE changepointer are active in your source system (Transa
ction BD61) and whether the number range is maintained (Transaction BDCP).
2. In addition, check in the ALE Customizing, whether all message types you need
are active (Transaction SALE -> Model and implement business processes -> Confi
gure the distribution of master data -> Set the replication of changed data -> A
ctivate the change pointer for each message type ).
3. Check, whether the number range for the message type BI_MSTYPE is maintained
(Transaction SNUM -> Entry 'BI_MSTYPE' -> Number range -> Intervals). The entry
for 'No.' must be exactly '01'. In addition, the interval must start with 000000
0001, and the upper limit must be set to 0000009999.
4. Go to your BW system and restart the Administrator Workbench.
All of the following activities occur in the InfoSource tree of the Administrator Workbench.
5. Carry out the function "Replicate DataSources" on the affected source system for the InfoObject carrying the master data and texts.
6. Activate the transfer structure.
All changes, initial data creations and deletions of records from now on are recorded in the source system.
7. Create an InfoPackage for the source system. On the 'Update parameters' tab strip there are three alternative extraction modes:
Full update
Delta update
Initialization of delta procedure
First initialize the delta procedure and then carry out the delta update.
An update on this issue:

In the EMIGALL process, SAP decided to bypass all the standard processes that update the delta queues on IS-U, because it would cause too much overhead during the migration. It is still possible to modify the standard programs, but it is not recommended, unless you want to crash your system.
The other options are as follows:
- Extract master data with full extractions using intervals.
- Modify the standard to put the data into a custom table on which you create a generic delta.
- Modify the standard to put the ALE pointers into a custom table and then use a copy of the standard functions to extract that data.
- Extract the data you want into a flat file and load it into BW.
By the way, if you want to extract the data from IS-U, forget about doing it during the migration; find another solution to extract it afterwards.
PS: Well if you have generic extractor and huge volume data then you can do it w
ith multiple INITS with RANGES as selection criteria and then a single DELTA(whi
ch is summation of all INITS) in order to improve performance with GENERIC DELTA
.
Q) Explain about deltas load and where we use it exactly.
A data load into a BI ODS/master data/cube can be either FULL or DELTA.
A full load is when you load data into BI for the first time, i.e. you are seeding the destination BI object with initial data. A delta data load means that you are either loading changes to already-loaded data or adding new transactions.
Usually delta loads are done when the process has to sync any new data/changed d
ata from the OLTP system i.e. SAP ECC or R/3 to SAP BI (DSS/BI). DSS stands for
Decision Support Systems or system that is used for deriving Business Intelligen
ce.
Example:
Let's say you are trying to derive a report to empower the management to figure
out who are the customers who have bought the most from your company.
On the BI side, you create the necessary master data elements. You use the maste
r data elements to create an ODS and a cube. The ODS and the cube will house the
daily transactions that get added to the OLTP systems via a variety of applicat
ions.
Now you identify the datasource in ECC that will bring the necessary transaction
s to BI. You replicate the datasource in BI and map the data source to the ODS a
nd map the ODS to the cube. Hence you create the Transformation and DTP as a ful
l load for the first time.
At this point in time, your ODS and cube have the data for the last x number of years, where x stands for the life of your company. You also need to capture the daily transactions from here on going forward. What you do now is change the DTP to allow only delta records.
Now you schedule the execution of the datasource and loading of the data in a pr
ocess chain. At run time, the process chain will get the new records from OLTP (
since the datasource is already replicated keeping in mind that the datasource s
tructure has not changed) and import those changes to the ODS and hence to the c
ube.
Any such loads that brings in new transactions or changes to earlier transaction
s will be called delta records and hence the load is called delta load.
Q) Removing '#' in Analyzer (Report)
In the ODS there are records with a BLANK/EMPTY value for some of the fields, e.g. the field 'Creation Date' has no value for some records.
For these, when I execute the query in the Analyzer, the value '#' is displayed in place of the BLANK value for the date and other characteristic fields. I want to show it as BLANK/SPACE instead of '#'. How can I do this?


I had a similar problem and our client didn't want to see # signs in the report. This is what I did: I created a macro in the workbook as SAPBEXonRefresh and ran my code in the Visual Basic editor. You can run the same code on the query as well, so the # sign is taken care of whenever you refresh the query. You can find similar code on the SAP Service Marketplace.
I would still suggest not taking out the # sign, as it represents 'no value' in the data mart, and this is the SAP standard. I convinced my client of this and later they were OK with it.
The codes are below:
Sub SAPBEXonRefresh(queryID As String, resultArea As Range)
    If queryID = "SAPBEXq0001" Then
        resultArea.Select
        'Remove '#'
        Selection.Cells.Replace What:="#", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
        'Remove 'Not assigned'
        Selection.Cells.Replace What:="Not assigned", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
    End If
    ' Set focus back to top of results
    resultArea(1, 1).Select
End Sub
Q) How To Convert Base Units Into Target Units In BW Reports
My client has a requirement to convert base units of measure into target units of measure in BW reports. How do I write the conversion routine? Alternatively, please point me to a conversion routine that lets the characteristic value (key) of an InfoObject be displayed or used in a different format from how it is stored in the database.
Have a look at the how to document "HOWTO_ALTERNATE_UOM2"
or
You can use the function module 'UNIT_CONVERSION_SIMPLE'
* actual_quantity / actual_uom are placeholders for your own fields
CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input                = actual_quantity
*   no_type_check        = 'X'
*   round_sign           = ' '
    unit_in              = actual_uom
    unit_out             = 'KG'              " the UOM you want to convert to
  IMPORTING
*   add_const            =
*   decimals             =
*   denominator          =
*   numerator            =
    output               = w_output-h_qtyin_kg
  EXCEPTIONS
    conversion_not_found = 1
    division_by_zero     = 2
    input_invalid        = 3
    output_invalid       = 4
    overflow             = 5
    type_invalid         = 6
    units_missing        = 7
    unit_in_not_found    = 8
    unit_out_not_found   = 9
    OTHERS               = 10.

IF sy-subrc <> 0.
* MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
*   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
Q) Non-cumulative key figures are key figures that are not cumulated over certain characteristic values (typically time). You will find these non-cumulative KFs when you extract data from the MM DataSources.
For example, you have a requirement to show this month's stock in the report, meaning the key figure must not be cumulated over time. When you create a key figure, you get the Aggregation tab, where you maintain the aggregation and the exception aggregation; you set aggregation to Summation and exception aggregation to Last Value. Once you select Non-cumulative, the system asks for the characteristic over which this key figure must not be cumulated.
Non-cumulative with inflow or outflow!
There has to be two additional cumulative key figures as InfoObjects for non-cum
ulative key figures - one for inflows and one for outflows. The cumulative key f
igures have to have the same technical properties as the non-cumulative key figu
re, and the aggregation and exception aggregation have to be SUM.
You can evaluate separately the non-cumulative changes on their own, or also the
inflow and outflow, according to the type of chosen non-cumulative key figure i
n addition to the non-cumulative. For Example Sales volume (cumulative value):
Sales volume 01.20 + sales volume 01.21 + sales volume 01.23 gives the total sal
es volume for these three days.
Warehouse stock (non-cumulative key figure):
Stock 01.20 + stock 01.21 + stock 01.23 does not give the total stock for these
three days.
Technically, non-cumulatives are stored using a marker for the current time (cur
rent non-cumulative) and the storage of non-cumulative changes, or inflows and o
utflows. The current, valid end non-cumulative (to 12.31.9999) is stored in the
marker. You can determine the current non-cumulative or the non-cumulative at a
particular point in time. You can do this from the current, end non-cumulative a
nd the non-cumulative changes and/or the inflows and outflows.
Queries for the current non-cumulative can be answered very quickly, since the c
urrent non-cumulative is created as a directly accessible value. There is only o
ne marker for each combination of characteristic values that is always updated w
hen the non-cumulative InfoCube (InfoCube that includes the non-cumulative key f
igures) is compressed. So that access to queries is as quick as possible, compre
ss the non-cumulative InfoCubes regularly
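A small worked example of this marker logic, with assumed values purely for illustration: suppose the marker (current stock, valid to 12.31.9999) is 100 pieces, and the recorded non-cumulative changes after 01.20 are +20 on 01.21 and -10 on 01.23. The stock on 01.20 is then derived backwards from the marker as 100 - (+20) - (-10) = 90; rolling forward again, 90 + 20 - 10 = 100 matches the marker.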
Cumulative Keyfigures With Exception Aggregation:
It's a 'normal' KF (with summation, min or max as the aggregation behaviour), but you set an exception to this behaviour. For example, you can say that a KF that is normally aggregated by summation has to show the maximum value (or the average, or 0, or something else) - that is the 'exception aggregation' - when it is used in combination with 0DOC_DATE (or another characteristic), which is the 'exception aggregation reference characteristic'. In this case the OLAP processor gives you the possibility of seeing your KF with different behaviour depending on whether you use 0DOC_DATE (in our example, MAX) or something else (SUMMATION).
Q) In a real time scenario where do we use cell definition in query designing.
A cell is the intersection between two structural components. The term cell for
the function Defining Exception Cells should not be confused with the term cell
in Microsoft Excel. The formulas or selection conditions that you define for a c
ell always take effect at the intersection between two structural components. If
a drilldown characteristic has two different characteristic values, the cell de
finition always takes effect at the intersection between the characteristic valu
e and the key figure.
Use of cell definition:

When you define selection criteria and formulas for structural components and th
ere are two structural components of a query, generic cell definitions are creat
ed at the intersection of the structural components that determine the values to
be presented in the cell.
Cell-specific definitions allow you to define explicit formulas and selection co
nditions for cells as well as implicit cell definitions. This means that you can
override implicitly created cell values. This function allows you to design muc
h more detailed queries.
In addition, you can define cells that have no direct relationship to the struct
ural components. These cells are not displayed and serve as containers for help
selections or help formulas.
For example:
You have already implemented the sales order system in your company. You have gi
ven the reports to the end users including open order reports.
Users come and tell you that they want some special calculations for particular customers. Say, for example, your report shows 5 customers: Nike, Coke, Philips, Sony and Microsoft. For this requirement you need to provide a discount, or some special exception, only for Microsoft and only for the 5th month. Microsoft's 5th-month detail always appears in the report in the fifth column, fifth row.
In this scenario you know the exact column and row that you need to calculate, so you can use the Cell Editor function to define the calculation for that particular cell.
Q) How to extract data using BW from CRM?
Steps for Extracting data from CRM:
Configuration Steps:
1. Click on -> Assign Dialog RFC destination
If your default RFC destination is not a dialog RFC destination, you need to cre
ate an additional dialog RFC destination in addition and then assign it to your
default RFC destination.
2. Execute Transaction SBIW in CRM
3. Open BC DataSources.
4. Click on Transfer Application Component Hierarchy
Application Component hierarchy is transferred.
5. SPRO in CRM .Go to CRM -> CRM Analytics
6. Go to transaction SBIW -> Settings for Application specific Data Source ->Set
tings for BW adapter
7. Click on Activate BW Adapter Metadata
Select the relevant data sources for CRM sales
8. Click on Copy data sources
Say yes and proceed
9. Logon to BW system and execute transaction RSA1.
Create a source system to establish connectivity with CRM Server
A source system is created. (LSYSCRM200)(Prerequisites: Both BW and CRM should h
ave defined Back ground, RFC users and logical systems)
10. Business content activation for CRM sales area is done

11. Click on source system and choose replicate datasources.


In CRM6.0, do we need to use BWA1 tcode to map the fields between CRM and BW, th
e way we used to do in earlier CRM versions?
Below are the steps for CRM (6.0) extraction, as per my knowledge:
1. Activate the DataSource in RSA5.
2. Replicate it into BI.
3. Schedule the init data load.
4. Schedule the delta.
5. Use the RSA3 and RSA7 tcodes to check the data in the CRM system.

Notes:
If you have SAP CRM 5.x or later, you activate the DataSources in RSA5 and maintain them in RSA6, as you do for FI DataSources in R/3/ECC.
The only difference in the technology is that the extraction goes through a BW Adapter on SAP CRM and passes through a Service API to SAP BW. In the end, though, it is really no different from FI.
Same as the FI extractors in R/3/ECC.
After you have activated the DataSource in RSA5:
1) Go to RSA6 and click on the Enhance Extraction Structure button.
2) Append your custom fields to the structure and activate.
3) Create the user exit in CMOD to populate the custom fields.
4) Re-activate the DataSource.
5) Test the extraction in RSA3.
6) Replicate the DataSource in BW.
There are no interval settings required, as there are for FI. Here is the technical description of the CRM extraction process for both full and delta extraction:
1) SAP BW calls the BW Adapter using the Service API.
2) The BW Adapter reads the data from the source table for the DataSource.
3) The selected data is converted into the extract structure as a BDoc.
4) The BDoc type determines the BAdI that is called by the BW Adapter.
5) The data package is transferred to SAP BW using the Service API.

Some considerations for delta are:
1) Net changes in CRM are communicated via BDoc.
2) The flow controller for BDocs calls the BW Adapter.
3) The BW Adapter checks whether the net change in the BDoc is BW-relevant.
4) Non-relevant net changes are not sent to SAP BW.
5) Relevant net changes are transferred to SAP BW.
6) CRM standard DataSources use the AIMD delta method.
CRM systems use what is called a BW Adapter to extract data; for other systems it is the Service API. Hence these tcodes are used. This is because CRM systems are based on BDocs, while traditional R/3 systems are based on IDocs and ALE technology.

Tcodes:
BWA5 is used to activate 'delta' for CRM datasource.
BWA1 is used for mapping fields in extract structure with BDoc.
Q) Transport Process Chains to Test System
What is the best way to transport process chains to test system?
Many additional, unwanted objects got collected when I tried to collect the process chains from the Transport Connection.
To transport a process chain, the best approach is to transport only the objects created for the process chain. On my system I created specific objects for the PC (InfoPackages, jobs, variants); those objects are used only by the PC. This way I avoid errors when users restart a load or a job manually.
So when I want to transport a process chain, I go to the monitor, select the PC, use a grouping that collects only the necessary objects, and go through the generated tree to select only what I need. Then I go to SE10 to check that the transport does not contain other objects which could impact my target system.
You can avoid some unnecessary objects by choosing Grouping > Data Flow Before or Data Flow After. For example, you may already have the InfoPackages in your target system but not the process chains, and you only want to transport the process chain without other objects like transfer structures or InfoPackages; then you can choose the Before or After option.
You can also choose the hierarchy or list display from the Display option if you have objects in bulk, but make sure all objects are selected (when different process chains contain different kinds of objects, it is better to use Hierarchy, not List).
While creating the transport request, some objects may be in use or locked in another request, so first release them with transaction SE03, using Unlock Objects (Expert Tool).
These options can reduce your effort while collecting your objects. If, even after all this effort, you get a warning or error such as "objects are already in the system", ask Basis to use overwrite mode for the import.
Transport a specific infoobject
How to transport a specific info object? I tried to change it and then save but
the transport request won't appear. How to manually transport that object?
1. Administrator Workbench (RSA1), then Transport Connection
2. Object Types, then from the list of objects put the requested one on the righ
t side of the screen (drag & drop)
3. Click "Transport Objects", put the development class name and specify the tra
nsport (or create the new one)
4. Transaction SE01, check transport and release it
5. Move the transport up to the another system.
If you change and reactivate the InfoObject but get no transport request, this means that your InfoObject is still in the $TMP package.
Go into the maintenance of the InfoObject, menu Extras -> Object Directory Entry, and change the development class. At this point you should get a pop-up requesting a transport request.
If you're not getting a transport request when you change and activate, it could
also be that the InfoObject is already on an open transport.
When you collect the object in the transport connection as described above, you
will see in the right hand pane an entry called Transport Request. If there is a
n entry here, the object is already on a transport and this gives you the transp
ort number.
You can then use SE01 or SE10 to delete the object from the existing transport i
f that is what you want to do then, when you change and activate the object agai
n, you should be prompted for a transport request. Alternatively, you can use th
e existing transport depending on what else is on it.
How To Do Transports in BW?
Step by step procedure for transporting in BW:

1. In RSA1 go to Transport Connection.
2. Select Object Types and the object that you want to transfer.
3. Choose the grouping method (In Data Flow Before and After).
4. Drag and drop your object.
5. Click the Truck button to transport.
6. In the source system (e.g. DEV), go to SE09:
a. Expand the request to choose your child request.
b. Click on the release button (truck).
c. Choose the parent request and click the Truck (release) button.
7. In the target system (e.g. QA) go to STMS:
a. Click on the Truck button (Import Overview).
b. Double-click on your QA system queue.
c. Click on Refresh.
d. Click on Adjust Import Queue.
e. Select your request and press F11.
*-- David Kazi
Is it possible to copy a process chain in BW 3.1? If so, how?
In RSPC, double click the process chain so that you can see it in the left hand
pane. In the box where you type in the transaction code, type COPY and hit Enter
.
Q) Infocube Compression
I was dealing with the tab "compression" while managing the infocube, was able t
o compress the infocube and send in the E- table but was unable to find the conc
rete answer on the following isssues:
1. What is the exact scenario when we use compression?
2. What actually happens in the practical scenario when we do compression?
3. What are the advantages of compressing a infocube?
4. What are the disadvantages of compressing a infocube?
1. Compression consolidates the cube contents: records with identical dimension values are summed up and written to the E fact table.
2. When you compress, BW does a group-by on the dimensions and a sum on the key figures; this eliminates redundant information.
3. Compressed InfoCubes require less storage space and are faster for retrieval of information.
4. Once a cube is compressed, you can no longer alter or delete the compressed data by request ID. This can be a big problem if there is an error in some of the data that has been compressed.
I understand the advantage of compressing the InfoCube is performance. But I have a doubt: if I compress one or more request IDs of my InfoCube, will the data continue to appear in my reports (Analyzer)?
The data will always be there in the InfoCube. The only thing that will be missing is the request IDs; you can take a look into your packet dimension and see that it will be empty after you compress.
Compression is indeed for performance. But before doing compression you should keep two things in mind very carefully:
1) If your cube is loaded with custom-defined deltas, you should check whether the delta still happens properly; the procedure is to compress some requests and then schedule the delta.
2) If your system has outbound flows from the cube that work with request IDs, then you need to follow some other procedure, because the request IDs won't be available after compression.
These two things are very important when you go for compression.
Q) How to Compress InfoCube Data
How Info cube compression is done?
Create aggregates for that infocube
I guess the question was how we can compress the data inside a cube; that is usually done by deleting the Request ID column value. This can be done through Manage -> Compress tab.
Go to RSA1. Under Modeling choose InfoProvider, then the InfoArea, and select your InfoCube.
Right-click on your InfoCube and choose Manage from the context menu.
Once you are in the Manage Data Targets screen:
- Find the request numbers.
- Decide up to which request ID you want to compress.
- Go to the Collapse tab, and under Compress choose the request ID and click Release.
The selected request ID and everything below it will be compressed.
What happens behind the scenes is that after the compression the F fact table contains no more data; instead, the compressed data now appears in the E fact table.
Q) Cube to Cube Load
You need to move some data from one cube to another.
The steps involved are: first create an 'Export DataSource' from the original cube (right-click on the InfoCube and select Generate Export DataSource).
Then assign the new DataSource to the new cube (click on 'Source Systems', select your BW server and click 'Replicate').
Then configure your InfoSource and InfoPackage.
Lastly, you are ready to load.
Q) Question:
A datasource was changed and a document date was added to the standard datasourc
e.
How to find which user has changed the datasource?
Answer:
You can use table ROOSOURCE and enter your DataSource there.
ROOSOURCE is a table in the source system which holds information about all the DataSources in the system.
When you create a DataSource, entries are written to three tables:
- ROOSOURCE
- ROOSFIELD
- ROOSOURCET
Take OLTP version 'A' and execute.
In the output you can see the last-changed user and the change timestamp (a lookup sketch is given at the end of this answer).
or
You can use the TCODE : RSA2 (Datasource Repository ) to display the datasource.
In the general Tab you can see the Last Changed by : and the date and time of ch
ange.
Error:
Error message: "DataSource does not exist in version A."
This means the DataSource is not active in the system; you will have to activate the DataSource from RSA5.
Make sure you are providing the correct technical name of the DataSource while checking.
Notes:
You have to activate your Data source first and then check in the table ROOSOURC
E with OLTP version as A and then execute.
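As an illustration of the ROOSOURCE lookup described above, a minimal sketch could look like this. The DataSource name is only an example, and the field names TSTPNM/TSTPDAT (last changed by / changed on) are assumptions; check the actual field names of ROOSOURCE in SE11 before relying on them.
data: ls_roosource type roosource.
* read the active (OLTP version 'A') entry for the datasource
select single * from roosource
  into ls_roosource
  where oltpsource = 'ZCOVA_DS1'    " example datasource name
    and objvers    = 'A'.
if sy-subrc = 0.
  write: / 'Last changed by:', ls_roosource-tstpnm,
         / 'Changed on:',      ls_roosource-tstpdat.
endif.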
Q) What is meant by Selection field, Hide field, Inversion and Field only Known
exit? What is the Use of these?
by: Anoo
Selection
When scheduling a data request in the BW Scheduler, you can enter the selection
criteria for the data transfer. For example, you may want to determine that data
requests are only to apply to data from the previous month.
If you set the Selection indicator for a field within the extract structure, the
data for this field is transferred in correspondence with the selection criteri
a in the scheduler.
Hide field
You should set this indicator to exclude an extract structure field from the dat
a transfer. As a result of your action, the field is no longer made available in
BW when setting the transfer rules and generating the transfer structure.
If you do not want this field to be transferred, set this indicator; the field will then not be visible in BW even though it is available in the extract structure.
Inversion
The field is inverted in the case of reverse posting. This means that it is mult
iplied by (-1). For this, the extractor has to support a delta record transfer p
rocess, in which the reverse posting is identified as such.
If the option Field recognized only in Customer Exit is chosen for the field, th
is field is not inverted. You cannot activate or deactivate inversion if this op
tion is set.
Field only known:
The indicator Field known only in Exit is set for the fields in an append struct
ure, meaning that by default, these fields are not passed to the extractor in th
e field list and the selection table.
For Example:
You had posted one record in to the cube. All the key figures are updated (some
are added and some are substracted). But you want to revert it back. So what
you can do is if your data is present in the PSA. You can reverse post that requ
est so that all the signs of the key figures are reversed( i.e addition becomes
minus and minus key figures becomes additive) so that the net key figure change
is nullufied. i.e., total change is Zero. In such cases, only those key figures
which have "inversion" set will be reversed.
Q) Explain the steps to load master data hierarchies from R/3 system.
by: Reddy
A summary of the steps are as follows:
1) Go to the Hierarchy tab of the InfoObject onto which you are loading hierarchy data.
2) Select 'With Hierarchies'.
3) Select the hierarchy properties (time-dependent or not time-dependent, etc.).
4) Click on 'External Chars in Hierarchies' and select the characteristics on which this hierarchy depends.
5) Then create the InfoSource and assign the DataSource.
6) Create an InfoPackage to load the hierarchies.
7) On the Hierarchy selection tab of the InfoPackage, select 'Load Hierarchy' and refresh the available hierarchies from the OLTP system. If it is time-dependent, select the time interval on the Update tab.
8) Then start the load.
If you want to load from a flat file, the procedure is somewhat different.
It is normally done as follows: transfer the master DataSources from RSA5 to RSA6, then replicate the DataSource into BW, assign the DataSource to an InfoSource, create an InfoPackage and load the data into the master data tables.
Generally, the control parameters for data transfer from a source system are mai
ntained in extractor customizing. In extractor customizing, you can access the c
orresponding source system in the source system tree of the SAP BW Administrator
Workbench by using the context menu.
To display or change the settings for data transfer at source system level, choo
se Business Information Warehouse --> General Settings --> Maintaining Control P
arameters for Data Transfer.
Note: The values for the data transfer are not hard limitations. It depends on t
he DataSource if these limits can be followed.
In the SAP BW Scheduler, you can determine the control parameters for data trans
fer for individual DataSources. You can determine the size of the data packet, t
he number of parallel processes for data transfer and the frequency with which t
he status IDocs are sent, for every possible update method for a DataSource.
To do so, choose Scheduler --> DataSource --> Default Settings for Data transfer
.
In this way you can, for example, update transaction data in larger data packets
in the PSA. If you want to update master data in dialog mode, smaller packets e
nsure faster processing.
Q) Real-time InfoCubes differ from standard InfoCubes in their ability to support parallel write accesses. Standard InfoCubes are technically optimized for read accesses, to the detriment of write accesses.
Real-time InfoCubes are used in connection with the entry of planning data.
The data is simultaneously written to the InfoCube by multiple users. Standard I
nfoCubes are not suitable for this. You should use standard InfoCubes for read-o
nly access (for example, when reading reference data).
Structure
Real-time InfoCubes can be filled with data using two different methods: using t
he transaction for entering planning data, and using BI staging, whereby plannin
g data cannot be loaded simultaneously. You have the option to convert a real-ti
me InfoCube. To do this, in the context menu of your real-time InfoCube in the I
nfoProvider tree, choose Convert Real-Time InfoCube. By default, Real-Time Cube
Can Be Planned, Data Loading Not Permitted is selected. Switch this setting to R
eal-Time Cube Can Be Loaded With Data; Planning Not Permitted if you want to fil
l the cube with data using BI staging.
When you enter planning data, the data is written to a data request of the real-time InfoCube. As soon as the number of records in a data request exceeds a threshold value, the request is closed and a rollup is carried out for this request into the defined aggregates (asynchronously). You can still roll up, define aggregates, collapse, and so on, as before.
Depending on the database on which they are based, real-time InfoCubes differ fr
om standard InfoCubes in the way they are indexed and partitioned. For an Oracle
DBMS, this means, for example, no bitmap indexes for the fact table and no part
itioning (initiated by BI) of the fact table according to the package dimension.
Reduced read-only performance is accepted as a drawback of real-time InfoCubes,
in favor of the option of parallel (transactional) writing and improved write pe
rformance.
Creating a Real-Time InfoCube
When creating a new InfoCube in the Data Warehousing Workbench, select the Real-Time indicator.
Converting a Standard InfoCube into a Real-Time InfoCube

Conversion with Loss of Transaction Data


If the standard InfoCube already contains transaction data that you no longer ne
ed (for example, test data from the implementation phase of the system), proceed
as follows:
...
1. In the InfoCube maintenance in the Data Warehousing Workbench, from the main
menu, choose InfoCube -> Delete Data Content. The transaction data is deleted an
d the InfoCube is set to inactive.
2. Continue with the same procedure as with creating a real-time InfoCube.
Conversion with Retention of Transaction Data
If the standard InfoCube already contains transaction data from the production o
peration that you still need, proceed as follows:
Execute ABAP report SAP_CONVERT_NORMAL_TRANS under the name of the corresponding
InfoCube. Schedule this report as a background job for InfoCubes with more than
10,000 data records because the runtime could potentially be long.
Q) Difference between 'F' fact table & an 'E' Fact table?
A cube has 2 fact tables - E and F. When the requests in the cube are not compre
ssed the data exists in the F fact table and when the requests are compressed th
e data lies in the E fact table.
When the requests are compressed all the request ids are lost (set to NULL) and
you would not be able to select/delete the data by request id. The data in the E
fact table is compressed and occupies lesser space than F fact table.
When you load a data target, say a cube, the data is stored in the F fact table.
If the cube is compressed, the data in the F fact table is transferred to the E
fact table.
The F table uses B-tree indexes; the E table uses bitmap indexes. The index types involved are: the primary index (created automatically when the table is created in the database), secondary indexes (usually on ABAP tables), bitmap indexes (created by default on each dimension column of a fact table), and B-tree indexes.
Does anybody know what the compression factor is between the F-table and the E-t
able?
I.e. when you move 100 rows from the F-table, how many rows will be added to the
E-table?
There is no fixed conversion factor. All the request IDs are deleted when you compress the cube, and the records are aggregated based on the remaining dimension IDs.
Example: suppose there is only one customer, C100, doing transactions, and across 100 requests there are 100 records. When you eliminate the request IDs, all records are aggregated into 1 record.
If there are 100 different customers and you entered each customer's data in a separate request, then after compression there will still be 100 records, because the customer number varies.
Does BEx access the records from the F table or the E table of the InfoCube?
BEx accesses both the F and E fact tables; if data exists in both tables, it picks from both.
If the cube is not compressed it reads from the F table, if fully compressed it reads from the E table, and with partial compression it reads from both F and E.
If Accessing from E- table is true, do we have to move the records from F table
to E table in-order to make the records available for reporting?
Data is automatically moved from F to E fact table when you compress the data in
the cube. You can do this in the cube manage->collapse tab.
E table will have data once you do compression, and compression is not mandatory
for all the cases. try using aggregates for reports.
When we do roll-up in InfoCube maintenance, records are moved to aggregates? or
moved from F table to E table?

Roll-up adds the copy of records from F or E table to the aggregate tables. The
records are not moved from F or E.
Q) How can I list all the inactive objects of a cube. Is there any transaction
code for it?
To check inactive objects,
Goto SE11->Utilities->Inactive Objects.
Goto SE38->Utilities->Inactive Objects.
To check all the objects (pgms, tables, classes, FM etc) of server, if they are
active or not:
There is NO ONE Table which will get you all the info.
1. REPOSRC
For programs, this is the table name.
R3STATE is the status field (see the lookup sketch at the end of this answer).
Note :
If a program is in ACTIVE state first, and then inactive (due to some modificati
ons), then this table will contain TWO entries for it.
a) A = active
b) I = inactive
2. Same Table for FUNCTION MODULES.
In the case of FM,
You will have to check the INCLUDE name for the corresponding FM.
eg. ZAM_FG01 = function group
ZAM_F06 = Function Name.
LZAM_FG01U02 = include name for this FM.
(it can be 02, 03, 01 etc.)
3. For Tables : DD02L
Field name = AS4LOCAL
(There will more than 1 record, if table is in inactivated state)
A Entry was activated or generated in this form
L Lock entry (first N version)
N Entry was edited, but not activated
S Previously active entry, backup copy
T Temporary version when editing
4. SVRS_GET_OBJECT_STATE
We can also use the above FM.
For the field object type, the following is necessary:
Program : REPS
Table = TABU
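To illustrate the REPOSRC check mentioned in point 1, a minimal sketch could be the following (the program name is only an example):
data: lv_state type reposrc-r3state.
* an 'I' entry means an inactive (edited but not activated) version exists
select single r3state from reposrc
  into lv_state
  where progname = 'ZMY_REPORT'   " example program name
    and r3state  = 'I'.
if sy-subrc = 0.
  write: / 'Program has an inactive version.'.
else.
  write: / 'No inactive version found.'.
endif.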
Q) How to derive 0FISCYEAR, 0FISCPER & 0FISCPER3 from 0CALMONTH?
Use routines in the update rules, which are available under the time characteristics.
Go to your update rules > select the time characteristics.
Code For 0FISCPER
* fill the internal table "MONITOR", to make monitor entries
data: l_fiscper type rsfiscper.
call function 'FISCPER_FROM_CALMONTH_CALC'
exporting
iv_calmonth = COMM_STRUCTURE-calmonth
iv_periv = 'K4'
importing
ev_fiscper = l_fiscper.
* result value of the routine
RESULT = l_fiscper.
Code for 0FISCPER3
data: l_fiscper3 type t009b-poper.
call function 'FISCPER_FROM_CALMONTH_CALC'
exporting
iv_calmonth = COMM_STRUCTURE-calmonth
iv_periv = 'K4'
importing
ev_fiscper3 = l_fiscper3.

* result value of the routine
RESULT = l_fiscper3.
Code for 0FISCYEAR
data: l_fiscyear type t009b-bdatj.
call function 'FISCPER_FROM_CALMONTH_CALC'
exporting
iv_calmonth = COMM_STRUCTURE-calmonth
iv_periv = 'K4'
importing
ev_fiscyear = l_fiscyear.
* result value of the routine
RESULT = l_fiscyear.
Just copy and paste the code in your system.
Note: K4 is the Variant. Change Variant according to your requirement.
Q) There are two ways to measure the size of a cube: one is an estimate, the other is an accurate reading in MB or GB.
Before you build the cube, if you want to estimate what the size of the cube will be, you can use the following formula (a worked example follows):
IC = F x ((ND x 0.30) + 2) x NR x NP = required disk space in bytes
where
F = fact table row size in bytes = ((ND + 3) x 4 bytes) + (22 bytes x NK)
ND = number of dimensions
NK = number of key figures
NR = number of records
NP = number of periods
Add 30% per dimension in the fact table, 100% for aggregates and 100% for indexes.
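A rough worked example of this formula, using assumed input values purely for illustration (ND = 10 dimensions, NK = 15 key figures, NR = 1,000,000 records, NP = 12 periods):
F = ((10 + 3) x 4) + (22 x 15) = 52 + 330 = 382 bytes per fact row
IC = 382 x ((10 x 0.30) + 2) x 1,000,000 x 12 = 382 x 5 x 12,000,000 bytes, i.e. roughly 23 GB,
before the additional allowances for aggregates and indexes mentioned above.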
But since in your case you already have a cube and ODS, use the following calculations (this is for the cube).
Data on the BW side is expressed in terms of "number of records", not TB or GB. The size, if required, has to be calculated. You either use the formula given above to translate the number of records into TB or GB, or, the easy way, you estimate from the data growth and make an intelligent guess. It depends how accurate you want to be.
The exact method, however, still remains as under:
Go through SE16. For example if the cube is ZTEST, then look at either E table o
r F table by typing in /BIC/EZTEST or /BIC/FZTEST and clicking on "number of rec
ords", just the way we do for other tables.
If the cube has never been compressed (a rare case if you are working on a reaso
nable project), then you need to bother only on the F Fact table as all the data
is in F Fact table only.
You can get the table width by going to SE11, type in the table name, go to "ext
ras" and "table width". Also you can get the size of each record in the fact tab
le in bytes. Next, you can find out the size of all dimension tables by doing th

is. The complete picture of extended star schema should be clear in your mind to
arrive at the correct figure.
Add all these sizes (fact table width + all dimension table widths) and multiply the sum by the number of records in the fact table; this gives you the total size of the cube.
If the cube is compressed (as may be the case), then you also need to count the records in the E table, because after compression data moves from the F fact table to the E fact table, hence you need to look into the E fact table as well. Hope this helps.
This is all for the cube. For the ODS you can get the information directly from DB02.
Q) Assume my DataSource is Initialized for the first time today and V3 run colle
cted 10 records to RSA7.
My understanding is, RSA7 displays these 10 records under both Delta Update & De
lta Repetition. When the InfoPackage run, 10 records will transfer to BW but RSA
7 still shows these 10 records under Delta Repetition until next V3 run. Suppose
next V3 run collected 5 records to RSA7. This time RSA7 shows newly added 5 rec
ords under Delta Update and both newly added and old records (15 records) under
Delta Repetition. When the InfoPackage run for next time, 5 records will transfe
r to BW and RSA7 shows newly added 5 records in Delta Repetition until next V3 r
un and deleted old 10 records.
Yes, your assumption is correct. But with one caveat. The data would get deleted
from Delta repeat section only when the next delta run is successful. This is d
one to ensure that no delta is lost.
Explanation:
Assume my DataSource is Initialized for the first time today and V3 run collecte
d 10 records to RSA7. My understanding is, RSA7 displays these 10 records under
both Delta Update & Delta Repetition.
I guess you would see 10 records only for delta update and no records for repeti
tion as you are yet to run the first delta and you cannot do a delta repetition.
When the InfoPackage run, 10 records will transfer to BW but RSA7 still shows th
ese 10 records under Delta Repetition until next V3 run. Suppose next V3 run col
lected 5 records to RSA7. This time RSA7 shows newly added 5 records under Delta
Update and both newly added and old records (15 records) under Delta Repetition
.
If the status in the monitor is green for the delta update, you shall have 10 re
cords in delta repetition. These will be cleared from delta update. If the next
V3 brings 5 records, you shall have 10 in repetition and 5 in delta update.
When the InfoPackage run for next time, 5 records will transfer to BW and RSA7 s
hows newly added 5 records in Delta Repetition until next V3 run and deleted old
10 records.
5 records in repetition and any records from V3 under delta update.
Q) Explain the steps for performance tuning in the bw R/3 system.
by: Anoo

Indices
With an increasing number of data records in the InfoCube, not only the load but
also the query performance can be reduced. This is attributed to the increasing
demands on the system for maintaining indexes. The indexes that are created in
the fact table for each dimension allow you to easily find and select the data.
Partitioning
By using partitioning you can split up the whole dataset for an InfoCube into se
veral, smaller, physically independent and redundancy-free units. Thanks to this
separation, performance is increased when reporting, or also when deleting data
from the InfoCube.
Aggregates
Aggregates make it possible to access InfoCube data quickly in Reporting. Aggreg
ates serve, in a similar way to database indexes, to improve performance.
Compressing the InfoCube
InfoCube compression means aggregation of the data ignoring the request IDs. After compression, the system no longer needs to aggregate over the request ID every time you execute a query.
Based on these, you may have doubts such as:
- How do the above techniques compare and contrast?
- Are all of the above techniques meant to improve query performance?
- What techniques do we follow to improve data load performance?
For all these doubts:
Yes, the creation of indices should be done after loading, because an index works just like a book index; and aggregates improve query performance because, as you can observe at query execution time, the OLAP processor takes a long time to calculate the result the first time, but the next time it is faster.
In what ways and in what combinations should they be implemented in a project?
In a project, depending on the client requirement (reports running slowly, loads running slowly, and so on), we need to study the problem by maintaining statistical information, using transactions and tables such as RSDDSTAT, ST22 and DB02, analyse the issue and then apply the required techniques.
Basically, the following points should be kept in mind to improve loading performance:
1. When you are extracting data from the source system using the PSA transfer method:
Using PSA and data target in parallel: faster loading.
Using only PSA and updating the data targets subsequently: reduces the burden on the server.
2. Data packet size: when extracting data from the source system to BW, we use data packets. As per the SAP standard, we prefer to have 50,000 records per data packet.
For every data packet a commit and save is performed, so fewer data packets are preferable.
If you have 100,000 (1 lakh) records per data packet and there is an error in the last record, the entire packet fails.
3. In a project we have millions of records to extract from different modules into BW. All loads will be running in the background roughly every one or two hours and are handled by work processes. We need to make sure that the work processes are neither over-utilized nor under-utilized.
4. Drop the indexes of a cube before loading and rebuild them afterwards (see the sketch at the end of this answer).
5. Distribute the workload among multiple server instances.
6. Prefer delta loads, as they load only newly added or modified records.
7. Deploy parallelism: multiple InfoPackages should be run simultaneously.
8. Update routines and transfer routines should be avoided unless necessary, and any routine should contain optimized code.
9. Prefer to load master data first and then transaction data, because when you load master data the SIDs are generated, and these SIDs are then reused by the transaction data load.
This is my overall picture of performance issues.
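Regarding point 4 above: index handling around a load can also be scripted in ABAP instead of using process chain steps. The following is only a rough sketch; it assumes the function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR exist in your release with an I_INFOCUBE parameter (verify the names in SE37 first), and ZSALES_C01 is a hypothetical InfoCube.

* Sketch only: drop cube indexes before a load and rebuild them afterwards.
* Verify the function module names and parameters in SE37 first.
REPORT z_drop_rebuild_cube_index.

DATA lv_cube TYPE rsinfocube VALUE 'ZSALES_C01'.   "hypothetical cube

* Drop the secondary indexes of the fact table before loading
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = lv_cube.

* ... the data load (e.g. an InfoPackage) runs here ...

* Rebuild the indexes after the load so that queries stay fast
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
  EXPORTING
    i_infocube = lv_cube.

In practice the same effect is usually achieved with the "Delete Index" and "Generate Index" steps of a process chain.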
Q) COPA Extraction Steps
Below are the steps with a short explanation. CO-PA extraction steps:
R/3 System
1. KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select Field Name for Partitioning (Eg, Ccode)
4. Initialise
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Extract Checker at RSA3 & Extract
BW
1. Replicate Data Source
2. Assign Info Source
3. Transfer all Data Source elements to Info Source
4. Activate Info Source
5. Create Cube on Infoprovider (Copy str from Infosource)
6. Go to Dimensions and create dimensions, Define & Assign
7. Check & Activate
8. Create Update Rules
9. Insert/Modify KF and write routines (const, formula, abap)
10. Activate
11. Create InfoPackage for Initialization
12. Maintain Infopackage
13. Under Update Tab Select Initialize delta on Infopackage
14. Schedule/Monitor
15. Create Another InfoPackage for Delta
16. Select the Delta option
17. Ready for Delta Load
LIS, CO/PA and FI/SL are customer-generated (generic) extractors, whereas the LO extractors are BW Content extractors.
LIS is a cross-application component of SAP R/3 which includes the Sales Information System, Purchasing Information System, Inventory Controlling and so on.
Similarly, CO/PA and FI/SL are used for specific application components of SAP R/3.
CO/PA collects all the OLTP data for calculating contribution margins (sales, cost of sales, overhead costs). FI/SL collects all the OLTP data for financial accounting and the special ledger.
1) Add the fields to the operating concern, so that the required field is visible in the CE1XXXX table and the other concerned tables (CE2XXXX, CE3XXXX, etc.).
2) After you have enhanced the operating concern you are ready to add it to the CO-PA data source. Since CO-PA is a regenerating application you can't add the field directly to the CO-PA data source; you need to delete the data source and re-create it using transaction KEB2.
3) While re-creating the data source, use the same old name so that there are no changes on the BW side when you assign the data source to the InfoSource. Just replicate the new data source on the BW side and map the new field in the InfoSource. If you re-create it with a different name, you will need extra build effort to take the data into BW through the InfoSource all the way up to the InfoCube. I would personally suggest keeping the same data source name as before.

If you are adding fields from the same operating concern, then go to KE24, edit the data source and add your fields. However, if you are adding fields outside the operating concern, then you need to append the extract structure and populate the fields in a user exit using ABAP code. Reference OSS note 852443.
1. Check RSA7 on your R/3 to see if there is any delta queue for CO-PA (just to see; sometimes there is nothing here for the data source, sometimes there is).
2. In BW, go to SE16 and open the table RSSDLINIT.
3. Find the line(s) corresponding to the problem data source.
4. You can check the load status in RSRQ using the RNR (request number) from the table.
5. Delete the line(s) in question from the RSSDLINIT table.
6. Now you will be able to open the InfoPackage, so you can re-init. But before you try to re-init:
7. In the InfoPackage go to the Scheduler menu > 'Initialization options for the source system' and delete the existing init request (if one is listed).
Q) Delete unwanted Objects in QA system
I have deleted unwanted Update rules and InfoSources (that have already been tra
nsported to QA system) in my DEV system. How do I get them out of my QA system?
I cannot find the deletions in any transports that I have created. Although they
could be buried somewhere. Any help would be appreciated.
I had the same problem as you, and I have been told there is a way to delete the unwanted objects. You may request the Basis team to open up the test box temporarily so you can remove the obsolete update rules and InfoSources. Remember to delete the request created in the test system after you have removed the update rules and InfoSources.
When I tried to delete the master data, I got the following message: "Lock NOT set for: Deleting master data attributes". What do I need to do in order to be able to delete the master data?
Since, technically, the master data tables are not locked via SAP locks but via
a BW-specific locking mechanism, it may occur in certain situations, that a lock
is retained after the termination of one of the above transactions. This always
happens if the monitor no longer has control, for example in the case of a shor
t dump. If the monitor gets the control back after an update termination (regula
r case), it analyzes whether all update processes (data packets) for a request h
ave been updated or whether they have terminated. If this is the case, the lock
is removed.
Since the master data table lock is no SAP lock, this can neither be displayed n
or deleted via Transaction SM12. There is an overview transaction in the BW Syst
em, which can display and delete all currently existing master data table locks.
Via the button in the monitor with the lock icon or via Transaction code RS12 y
ou can branch to this overview.
A maximum of two locks is possible for each basis characteristic:
Lock of the master data attribute tables
Lock of the text table
Changed by, Request number, Date and Time is displayed for every lock. Furthermo
re, a flag in the overview shows whether locks have been created via master data
maintenance or master data deletion.
During a loading process, the first update process that starts to update data into the BW system (several update processes may run in parallel for each data request) sets the lock entry. All other processes only check whether they belong to the same data request. The last process, which has either finished updating or has terminated, causes the monitor to trigger the deletion of the lock.
Q) Differences Among Query, Workbook and View
Many people are confused by the differences among: Query, Workbook, and View.
Here are my thoughts on the subject:
A query definition is saved on the server. Never anywhere else.
Although people say a workbook contains a query (or several queries), it does not. It contains a reference to a query. The workbook can be saved on the server, or anywhere else that you might save an Excel workbook.
What happens if someone changes the query definition on the server?
Answer: the next time you refresh the query in the Excel workbook, the new query
definition replaces the old query definition in the workbook. Maybe. It depends
on what change was made.
For example, if someone added a Condition to the query definition, the workbook is virtually unaware of this. The Condition is available, but is not implemented in the workbook (until the user of the workbook manually adds the view of the Condition and then activates it).
For example, if someone changed the definition of a KF in the query definition,
the revised KF will show up in place of the old KF in the workbook.
But ... if, for example, someone deleted the old KF and added a new KF, we get a
different story. Now the old KF no longer appears (it does not exist); but, the
new KF does not appear (it was not marked to be visible in the workbook).
About workbooks as views ... OK, a workbook may very well have a certain "view"
of the query (drilldown, filters, et cetera). And, if the workbook is saved to t
he server in a Role where everyone can access it, this is good. But, if the work
book is saved to one's favorites, then this "view" is only accessible to that in
dividual. Which may be good. Or may not.
A "saved view", on the other hand is stored on the server. So, it is available t
o all.
If you navigate in a workbook you can back up. You can back up, though, only as
far as you navigated in the current session. You cannot back up to where you wer
e in the middle of last week's session. Unless you saved that navigation state a
s a "saved view". Then, you can jump to that view at any time.
The downside of saved views is that they are easy for anyone to set up and diffi
cult for most to delete.
Q) Customer Exit Variable In Bex
The customer exit works at:
1. Extraction side
After enhancing a DataSource in RSA6 we need to populate the enhanced fields. For that we create a project in transaction CMOD, select the enhancement RSAP0001, choose the appropriate function module and write the select statement in the corresponding include:
EXIT_SAPLRSAP_001 - transaction data
EXIT_SAPLRSAP_002 - master data attributes
EXIT_SAPLRSAP_003 - texts
EXIT_SAPLRSAP_004 - hierarchies
These steps are done on the source system side (e.g. R/3); a sketch is shown below.
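A minimal sketch, assuming a hypothetical field ZZREGION has been appended to the extract structure of DataSource 2LIS_02_ITM and is filled from a hypothetical lookup table ZPUR_REGION; the coding goes into include ZXRSAU01, which is called from EXIT_SAPLRSAP_001 for transaction data.

* Include ZXRSAU01 (called from EXIT_SAPLRSAP_001) - sketch only.
* ZZREGION and ZPUR_REGION are hypothetical; MC02M_0ITM is the
* extract structure of DataSource 2LIS_02_ITM. The importing
* parameter name may differ by release (I_DATASOURCE / I_ISOURCE).
DATA: ls_item TYPE mc02m_0itm.

CASE i_datasource.
  WHEN '2LIS_02_ITM'.
    LOOP AT c_t_data INTO ls_item.
      "Derive the appended field from the custom lookup table
      SELECT SINGLE region FROM zpur_region
        INTO ls_item-zzregion
        WHERE werks = ls_item-werks.
      MODIFY c_t_data FROM ls_item.
    ENDLOOP.
ENDCASE.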
2. Reporting side
We need to write a user exit to populate reporting-related variables: in the enhancement RSR00001, select the function module EXIT_SAPLRRS0_001 and write the code in the include ZXRSRU01. This is helpful especially when we need to derive a variable at runtime, for example as in the sketch below.
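A minimal sketch of the include ZXRSRU01, assuming a hypothetical customer-exit variable ZV_CURMONTH on 0CALMONTH that should default to the current calendar month:

* Include ZXRSRU01 (called from EXIT_SAPLRRS0_001) - sketch only.
* ZV_CURMONTH is a hypothetical exit variable on 0CALMONTH.
DATA: ls_range LIKE LINE OF e_t_range.

CASE i_vnam.
  WHEN 'ZV_CURMONTH'.
    IF i_step = 1.                   "called before the variable pop-up
      CLEAR ls_range.
      ls_range-sign = 'I'.
      ls_range-opt  = 'EQ'.
      ls_range-low  = sy-datum(6).   "current month in YYYYMM format
      APPEND ls_range TO e_t_range.
    ENDIF.
ENDCASE.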
Along with that:
BEx User Exit allows the creation and population of variables and calculations f
or key figures and variables on a runtime basis.
R/3 User Exit is found in R/3 under CMOD and contains additional programming tha
t is needed to fill field additions to extract structures.
Q) Restricted Key figures:
The key figures that are restricted by one or more characteristic selections can
be basic key figures, calculated key figures or key figures that are already re
stricted.
Calculated key Figure:
Calculated key figures consist of formula definitions containing basic key figur
es, restricted key figures or precalculated key figures.
Procedure for Defining a new restricted key figure:
1. In the InfoProvider screen area, select the Key Figures entry and choose New
Restricted Key Figure from the context menu (secondary mouse button).
If a restricted key figure has already been defined for this InfoProvider, you c
an also select the Restricted Key Figures entry and then choose New Restricted K
ey Figure from the context menu.
The entry New Restricted Key Figure is inserted and the properties for the restricted key figure are displayed in the Properties screen area.
2. Select the New Restricted Key Figure entry and choose Edit from the context m
enu (secondary mouse button).
The Change Restricted Key Figure dialog box appears.
You can also call the Change Restricted Key Figure dialog box from the Propertie
s screen area by choosing the Edit pushbutton.
You make the basic settings on the General tab page.
The text field, in which you can enter a description of the restricted key figur
e, is found in the upper part of the screen.
You can use text variables in the description (see Using Text Variables).
Next to that, you can enter a technical name in the Technical Name field.
Underneath the text field, to the left of the Detail View area, the directory of
all objects available in the InfoProvider is displayed. The empty field for def
ining the restricted key figure (Details of the Selection) is on the right-hand
side of the screen.
3. Using drag and drop, choose a key figure from the InfoProvider and restrict i
t by selecting one or more characteristic values. See Restricting Characteristic
s.
You can also use variables instead of characteristic values. However, note that
you cannot use the following variable types in restricted key figures for techni
cal reasons:
Variables with the processing type Replacement with Query (see also Replacement Path: Replacement with Query)
Variables that represent a precalculated value set (see also Details)
You can use these variable types to restrict characteristics in the rows, column
s, or in the filter.
4. Make any necessary settings for the properties of the restricted key figure on the other tab pages. See Selection/Formula Properties.
5. Choose OK. The new restricted key figure is defined for the InfoProvider.
Q) V1 - Synchronous update
V2 - Asynchronous update
V3 - Batch asynchronous update
These are different work processes on the application server that take the update LUW (which may contain various DB manipulation SQL statements) from the running program and execute it. They are separated to optimize transaction processing capabilities.
Synchronous Updating (V1 Update)-->>
The statistics update is made synchronously with the document update.
While updating, if problems that result in the termination of the statistics upd
ate occur, the original documents are NOT saved. The cause of the termination sh
ould be investigated and the problem solved. Subsequently, the documents can be
entered again.
Asynchronous Updating (V2 Update)-->>
With this update type, the document update is made separately from the statistic
s update. A termination of the statistics update has NO influence on the documen
t update (see V1 Update).
Asynchronous Updating (V3 Update) -->>
With this update type, updating is made separately from the document update. The
difference between this update type and the V2 Update lies, however, with the t
ime schedule. If the V3 update is active, then the update can be executed at a l
ater time.
If you create/change a purchase order (me21n/me22n), when you press 'SAVE' and s
ee a success message (PO.... changed..), the update to underlying tables EKKO/EK
PO has happened (before you saw the message). This update was executed in the V1
work process.
There are some statistics collecting tables in the system which can capture data
for reporting. For example, LIS table S012 stores purchasing data (it is the sa
me data as EKKO/EKPO stored redundantly, but in a different structure to optimiz
e reporting). Now, these tables are updated with the txn you just posted, in a V
2 process. Depending on system load, this may happen a few seconds later (after you saw the success message). You can monitor the V1/V2/V3 update requests in SM13.
V3 is specifically for BW extraction. The update LUW for these is sent to V3 but
is not executed immediately. You have to schedule a job (eg in LBWE definitions
) to process these. This is again to optimize performance.
V2 and V3 are separated from V1 as these are not as realtime critical (updating
statistical data). If all these updates were put together in one LUW, system per
formance (concurrency, locking etc) would be impacted.
Serialized V3 update is called after V2 has happened (this is how the code runni
ng these updates is written) so if you have both V2 and V3 updates from a txn, i
f V2 fails or is waiting, V3 will not happen yet.
BTW, 'serialized' V3 is discontinued now, in later releases of PI you will have
only unserialized V3.
In contrast to V1 and V2 updates, no individual documents are updated. The V3 update is therefore also described as a collective update.
In this context, the data from a transaction can end up in four sets of tables:
1. Application tables (R/3 tables)
2. Statistical tables (for reporting purposes)
3. Update tables
4. BW queue
Statistical tables are for reporting on R/3 while update tables are for BW extraction. Is the data stored redundantly in these two (three if you include application tables) sets of tables?
Yes, it is.
The difference is that update tables are temporary; V3 jobs continually process and clear them (as I understand it). This is different from the statistics tables, which keep accumulating all the data. Update tables can be thought of as a staging place on R/3 from where data is consolidated into packages and sent to the delta queue (by the V3 job).
Update tables can be bypassed (if you use 'direct' or 'queued' delta instead of V3) to send the updates directly to the BW delta queue. V3 is, however, better for performance, so it is an option along with the others, and it uses the update tables.
Statistical tables have existed since the pre-BW era (for analytical reporting) and continue to be used when customers want their reporting on R/3.
The structure of a statistical table might be different from that of the update table/BW queue, so even though they are based on the same data, they might be different subsets of the same superset.
V3 collective update means that the updates are going to be processed only when
the V3 job has run. I am not sure about 'synchronous V3'. Do you mean serialized
V3?
At the time of oltp transaction, the update entry is made to the update table. O
nce you have posted the txn, it is available in the update table and is waiting
for the V3 job to run. When V3 job runs, it picks up these entries from update t
able and pushes into delta queue from where BW extraction job extracts it.
Q) As a rule of thumb we can say that aggregates improve query performance.
Question: OK, then what is the rule of thumb?
Rules for Efficient Aggregates:
"Valuation" column evaluates each aggregate as either good or bad. The valuation
starts at "+++++" for very useful, to "-----" for delete. This valuation is onl
y meant as a rough guide. For a more detailed valuation, refer to the following
rules:
1. An aggregate must be considerably smaller than its source, meaning the InfoCube or the aggregate from which it was built. Aggregates that are not often affected by a change run should be at least 10 times smaller than their source; other aggregates have to be even smaller. The number of records contained in a filled aggregate is found in the "Records" column in the aggregate maintenance. The "Summarized Records (Mean Value)" column tells you how many records on average have to be read from the source to create one record in the aggregate. Since the aggregate should be ten times smaller than its source, this number should be greater than ten.
2. Delete aggregates that are no longer used, or that have not been used for a long time. The last time the aggregate was used is in the "Last Call" column, and the frequency of the calls is in the "Number of Calls" column. Do not delete the basic aggregates that you created to speed up the change run. Do not forget that particular aggregates might only not be used at particular times (holidays, for example).
3. Determine the level of detail you need for the data in the aggregate. Insert
all the characteristics that can be derived from these characteristics. For exam
ple, if you define an aggregate on a month level, you must also include the quar
ter and the year in the aggregate. This enhancement does not increase the quanti
ty of data for the aggregate. It is also only at this point, for example, that y
ou can actually build a year aggregate from this aggregate, or that queries that
need year values are able to use this aggregate.
4. Do not use a characteristic and one of its attributes at the same time in an
aggregate. Since many characteristic values have the same attribute value, the a
ggregate with the attribute is considerably smaller than the aggregate with the
characteristic. The aggregate with the characteristic and the attribute has the
same level of detail and therefore the same size as the aggregate with the chara
cteristic. It is however affected by the change run. The attribute information i
n the aggregate is contained in the aggregate only with the characteristic using
the join with the master table. The aggregate with the characteristic and the a
ttribute saves only the database
join. For this reason, you cannot create this k
ind of aggregate. If they are ever going to be useful, since otherwise the datab
ase optimizer creates bad execution plans, you can create an aggregate of this k
ind in the expert mode (in 2.0B: In the aggregate maintenance select an aggregat
e: Extras > Expert Mode, otherwise enter "EXPT" in the OK code field).
The factor of 10 used in the rules above is only meant as a rule of thumb. The exact value depends on the user, the system and the database. If, for example, the database optimizer has problems creating a useful plan for SQL statements with a lot of joins, aggregates with less summarization are also useful if they mean that joins are saved.
Q) Explain the way to use the return table option.
With the return table option, an update routine can return a whole table of records for a key figure instead of a single value, and since you also have the possibility to decide dynamically whether a key figure is updated or not (via the return code), the quantity can be updated but not the value (or vice versa).
If you want to split data for several key figures, it is better to do it in the start routine.
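As an illustration of the return table itself, here is a rough sketch of a key-figure routine with a return table (BW 3.x update rules). The interface (RESULT_TABLE, COMM_STRUCTURE, RETURNCODE) is generated by the system when the return table option is ticked; the field names PLANT and AMOUNT below are placeholders for real InfoObjects. One incoming record is split into two target records.

* Body of a generated key-figure routine with return table (sketch).
* PLANT and AMOUNT are placeholder field names.
DATA: ls_result LIKE LINE OF result_table.

REFRESH result_table.
MOVE-CORRESPONDING comm_structure TO ls_result.

* First target record: half of the amount goes to plant P001
ls_result-plant  = 'P001'.
ls_result-amount = comm_structure-amount / 2.
APPEND ls_result TO result_table.

* Second target record: the other half goes to plant P002
ls_result-plant  = 'P002'.
ls_result-amount = comm_structure-amount / 2.
APPEND ls_result TO result_table.

* RETURNCODE 0 means: update this key figure with RESULT_TABLE
returncode = 0.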
The main difference between a start routine and a field-wise routine is this: with a field-wise routine you do not need to maintain new or additional fields in the InfoSource. But if you want to write a start routine, all the fields must be available in the InfoSource as well, and you have to map them one-to-one in the update rules.
If you are loading into a cube then time distribution is available (e.g. you are getting month-level data and want to distribute it to week level; you can use time distribution, no coding needed).
For sample code, please search this forum with the search term "start routine"; you will find a lot.
For a simple example: you are getting price and quantity from the data source, you want to calculate the value, and you have a separate key figure for the value in your target.
You can achieve this in two ways:
1. Field-wise routine
2. Start routine
If you write it at field level, you do not need to maintain the new key figure (value) in the InfoSource; you can calculate it either with a formula or with a routine. But if you want to calculate it in the start routine, this new key figure (value) must be available in the InfoSource, because only then is it available in DATA_PACKAGE in the start routine (see the sketch below); you then assign it there and also have to map it one-to-one in the update rules.
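A minimal start routine sketch (BW 3.x update rule syntax) for this example; the field names /BIC/ZPRICE, /BIC/ZQUANTITY and /BIC/ZVALUE are hypothetical InfoObjects of the communication structure.

* Start routine body (BW 3.x update rules) - sketch only.
* /BIC/ZPRICE, /BIC/ZQUANTITY and /BIC/ZVALUE are hypothetical fields
* of the communication structure; DATA_PACKAGE has a header line.
LOOP AT data_package.
  data_package-/bic/zvalue = data_package-/bic/zprice *
                             data_package-/bic/zquantity.
  MODIFY data_package.
ENDLOOP.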
Actually, if you are creating a new InfoSource you can modify it accordingly by adding the required fields, but when you are loading from one ODS/cube to another ODS/cube (data mart scenario) the system generates the data source and InfoSources, and then it is a bit more difficult.
Q) RDA - Real-time data acquisition - brings real-time data into BW from R/3 or web services.
It uses a program called a daemon that controls the data flow into BW and takes care of the extraction from the source system.
With a RemoteCube we can't handle a large volume of data or a large number of users. With RDA we can report on a large volume of data for a large number of users, because with RDA the data is stored physically.
DAEMON: Data Extraction and Monitoring.
With RDA we store the data only in a DSO.
The DataSource must support real-time data acquisition.
The daemon supports two levels only, i.e. delta queue to PSA and then to the DSO.
Sources that support RDA are web services and SAP source systems.
Daemon monitor: transaction RSRDA.
We can't schedule RDA through a process chain, only through the daemon.
We can create only one real-time InfoPackage per DataSource.
Q) What are the steps to create RFC connection on Business Warehouse?
Step 1: On the BW side:
1. Create a logical system: SPRO -> ALE -> Sending & Receiving Systems -> Logical System -> New Entries (e.g. 800 BWCLNT800).
2. Assign the client to the logical system.
Step 2: Follow the same procedure on the R/3 side to create a logical system.
Step 3: On the BW side, create the RFC connection in SM59.
RFC destination name: should be the logical system name of R/3.
Connection type: 3
1st tab, Technical Settings:
Target host: IP address of the R/3 server.
System number: 03
2nd tab, Logon/Security:
Language: EN
Client: R/3 client number
User: R/3 user
Password: R/3 password
Step 4: On the R/3 side, the same procedure in SM59.
RFC destination name: should be the logical system name of BW.
Connection type: 3
1st tab, Technical Settings:
Target host: IP address of the BW server.
System number: 03
2nd tab, Logon/Security:
Language: EN
Client: BW client number
User: BW user
Password: BW password
Step 5: SPRO -> SAP Reference IMG -> BW -> Links to Other Systems -> Links between R/3 and BW:
create the ALE user in the source system -> select BWALEREMOTE -> back.
Step 6: In BW:
SU01
Username: BWREMOTE
Profiles: S_BI_WHM_RFC, S_BI_WX_RFC
Save.
Step 7: In R/3:
SU01
Username: ALEREMOTE
Profiles: S_BI_WHM_RFC, S_BI_WX_RFC
Save.
Step 8: In R/3, create an RFC user:
SU01
User: RFCUser (create)
User type: System
Password: 1234
Profiles: SAP_ALL, SAP_NEW, S_BI_WX_RFC
Step 9: RSA1
In SE16, open table RSADMINA and enter the default client in the field BWMANDT.
Step 10: In BW:
SU01
User: RFCUser (create)
User type: System
Password: 1234
Profiles: SAP_ALL, SAP_NEW, S_BI_WHM_RFC
Step 11: In BW:
RSA1 -> Source Systems -> Create
RFC destination
Target system: host name of R/3
SID:
System number:
Source system user: ALEREMOTE
Password:
Background user: BWREMOTE
Password:
Q) Explain about "BW statistics" and how it is useful in improving the performan
ce in detail?
BW statistics is nothing but the SAP deliverd 1multiprovider and 5 cubes which c
an get the statistics of the objects developed. We have to enable and activate
the BW statistics for particular objects which you want to see the statistics an
d to gather required data. But this no way will improve the performance. But w
e can analyze the statistics data and based on the data can decide on the ways t
o improve performance i.e. setting the read mode, compression, partitioning, cre
ation of aggregates etc.....
BW Statistics is a tool
- for the analysis and optimization of Business Information Warehouse processes
- to get an overview of the BW load and analysis processes
The following objects can be analyzed here:
Roles
SAP BW users
Aggregates
Queries
InfoCubes
InfoSources
ODS
DataSources
InfoObjects
Of the two sub-areas, BW Statistics is the more important one:
1. BW Statistics
2. BW Data Slice
BW Statistics data is stored in the Business Information Warehouse.
This information is provided by a MultiProvider (0BWTC_C10), which is based on s
everal BW BasisCubes.

OLAP (0BWTC_C02)
OLAP Detail Navigation (0BWTC_C03)
Aggregates (0BWTC_C04)
WHM (0BWTC_C05)
Metadata ( 0BWTC_C08 )
Condensing InfoCubes (0BWTC_C09)
Deleting Data from an InfoCube (0BWTC_C11)
BW Data Slice to get an overview of the requested characteristic combinations fo
r particular InfoCubes and of the number of records that were loaded. This infor
mation is based on the following BasisCubes:
-BW Data Slice
-Requests in the InfoCube
BW Data Slice
BW Data Slice contains information about which characteristic combinations of an
InfoCube are to be loaded and with which request, that is, with which data requ
est.
Requests in the InfoCube
The InfoCube Requests in the InfoCube does not contain any characteristic combinations. You can create queries for this InfoCube that return the number of data records for the corresponding InfoCube and for the individual requests. The data flow falls into the areas below:
- data load / data management
- data analysis
Q) What is the use of Match or Copy in Business Content?
Match (X) or Copy
If the SAP delivery version and the active version can be matched, a checkbox is
displayed in this column.
With the most important object types, the active version and the SAP delivery ve
rsion can be matched.
From a technical point of view, the SAP delivery version (D version) is matched
with the M version. As in most cases the M version is identical to the active ve
rsion (A version) in a customer system, this is referred to as a match between t
he D and A versions for reasons of simplification.
When a match is performed, particular properties of the object are compared in t
he A version and the D version. First it has to be decided whether these propert
ies can be matched automatically or whether this has to be done manually. A matc
h can be performed automatically for properties if you can be sure that the obje
ct is to be used in the same way as before it was transferred from Business Cont
ent. When performing matches manually you have to decide whether the characteris
tics of a property from the active version are to be retained, or whether the ch
aracteristics are to be transferred from the delivery version.
Example of an automatic match
Additional customer-specific attributes have been added to an InfoObject in the
A version. In the D version, two additional attributes have been delivered by SA
P that do not contain the customer-specific attributes. In order to be able to u
se the additional attributes, the delivery version has to be installed from Busi
ness Content again. At the same time, the customer-specific attributes are to be
retained. In this case, you have to set the indicator (X) in the checkbox. Afte
r installing the Business Content, the additional attributes are available and t
he customer-specific enhancements have been retained automatically. However, if
you have not checked the match field, the customer-specific enhancements in the
A version are lost.
Example of a manual match
An InfoObject has a different text in the A version than in the D version. In th
is case the two versions have to be matched manually. When Business Content is i
nstalled, a details screen appears which asks you to specify whether the text sh
ould be transferred from the active version or from the D version.
The Match indicator is set as default in order to prevent the customer version b
eing unintentionally overwritten. If the Content of the SAP delivery version is
to be matched to the active version, you have to set the Install indicator separately.
The active version is overwritten with the delivery version if
- the match indicator is not set and
- the install indicator is set.
In other words, the delivery version is copied to the active version.
If the Install indicator is not set, the object is not copied or matched. In thi
s case, the Match indicator has no effect.
In the context menu, two options are available:
a. Merge All Below
The object in the selected hierarchy level and all objects in the lower levels o
f the hierarchy are selected as to Match.
b. Copy All Below
The Match indicators are removed for the object in the selected hierarchy level
and all objects in the lower levels of the hierarchy. If the Install indicator i
s also set, these objects are copied from the delivery version to the active ver
sion.
