
BAS - Business Analytical Systems
BIW Real-time Class - Real-time / interview questions
A 30 Days Session - November 15, 2012
BW/BI consultant roles and responsibilities in support projects:
I. Monitoring data loads
II. Monitoring ticketing tools and resolving issues
III. Working on enhancements, break-fixes, and new projects

Monitoring:
Live monitoring [6-8 days]
The client production server should be monitored every day for 2-3 hours.
 How to monitor process chains - RSPC
 How to monitor background jobs - SM37
 How to monitor work processes - SM50
 Load monitor - RSMO
 Short dumps - ST22
 System logs - SM21
 TRFCs - SM58
 Service level agreements


Collection of statistics of daily loads
Errors in data loads:
1. No SID found, no unit found
2. Lowercase, hexadecimal, and special characters: examination, break-fix, permanent solutions
3. Deadlock errors: how to rectify, permanent solutions
4. Unable to process data packets - delay
5. TRFC stuck up and manual approaches to resolve (SM58) - delay
6. Error in source system (RSDBTIME) - time zone for kernel and application servers
7. Time stamp or PSA errors
8. Table space problems
9. How to resume a process chain if Repeat is not available

Ticketing: [3]

Monitoring tickets, ticketing tools
Levels of support
 o Level 0
 o Level 1
 o Level 2
 o Level 3
Ticketing procedure
Status of tickets

SAP Notes / OSS Notes

SAP made some mistakes while developing the product; to resolve those mistakes, we need to implement SAP notes (OSS notes).
Types of SAP notes and how to search for them [service.sap.com]
How to implement notes using the SNOTE transaction
SAP note / OSS note examples

[4]

Support pack implementation and the BW/BI consultant role
 Coordination with Basis
 Activities during system downtime
 Preparing a strategic plan for the outage

[5]

Transports
 Packages, transports, project landscape, transport connections
 Process flow for transporting objects
 Dependencies in transports

[6]

Life cycle [SDLC, accelerated or waterfall]
 Releases
 Quality gates
 Sign-offs
 SDLC / phases of the life cycle

[7]

1) RD activities
2) Design
3) Construction
4) Testing
5) UAT
6) Go-live
7) Support

Documentation
 Requirements documents
 High-level estimation documents
 Design of technical and functional documents
 Test cases and results
 Implementation plan
 Production support documents

[8]

Objects
 Aggregates in the project
 Indexes
 SAP InfoCube modeling best practices
 Data source enhancements
 Customer exits for variables
 Flat file extraction and automation

[9] [10]

Interview questions [50-75]
 Real-time preparation and mock interviews

Miscellaneous topics:
Job status, killing a job, stopping jobs

DAY 01 - November 1, 2012

BW project scope (verticals)
 Manufacturing
 Aerospace
 Foot
 Utilities

ECC - BW - Mainframe system
 ECIP (special finance transactions) -- FF --> BW
Application areas in BW:
 SD - Misc billing (daily): SD master (IO, DSO), master PC (text, attr, hier) + transactional data (cube, DSO), SD transactional PC
 FI - AP, AR, GL (daily): master (IO, DSO) + transactional data (cube, DSO)
 MM - Purchasing (daily)
 PM (daily)
 CO (daily)
 EC-PCA (daily)
 FIS - Facilities Information System (custom application area) - daily
 FERC - Federal Energy Regulatory Commission (on-demand basis)
 ROWVC - full load, weekly
 HR - daily and bi-weekly (1st Monday and 3rd Monday of the month)
 TEXT chains (less important master data) - weekly
System landscape
Landscape - how systems are arranged in the client network.
Training system (sandbox) - only for training purposes and feasibility checks.
Development system - develop objects, create transports, more authorizations; development client and Gold client (SPRO). ECC has multiple clients: development client, testing client, Gold client (SPRO).
QA system - no authorization to create objects; display and testing only. Developers do system and integration testing; users do UAT (user acceptance testing).
Pre-prod system - replica of the production system, used to avoid transport failures and sudden surprises in the production box; we can simulate production activities.
Production system - live environment; data flows happen on a daily basis.

DA system - backup system for the production system in case the system collapses.


Live monitoring: in BW production system
Process chain logs - RSPC
Process chain monitoring RSPCM
Load monitoring - RSMO IP, DTP
Job monitoring - SM37
Work process SM50
Short dumps analysis - ST22
System logs - ora errors SM21
TRFC stuck up - SM58
In our project systems located in client place, Process chains starts at 12:05 AM (EST) = 9:35 AM (IST) runs for
4 hrs.
SLA on Daily loads
Service level Agreement in between client and service provider.
Daily loads starts at 12:05 AM, it should complete by 6:30 AM. In a month success rate should be 90%.
If Daily chain end up errors , then BW consultant should place system wide messages using transaction SM02.
Daily loads happen via a big meta chain called the main chain.
The main chain consists of all master data local chains and all transactional data local chains.
The main chain consists of the chains below.

Main chain
 Start process type: defines when and how the process chain should start.
 Transfer settings from source system: used to update the technical T tables from the ECC system into the BW system.
 Master data main chain: used to extract all master data from the ECC system (SD MD, FI MD, CO MD, HR MD, ROWVC MD).
 Transactional data chains: CO chain (transfers all transactional data - SD TD, HR TD, FIS TD, PUR TD, treasury TD, PCA TD) + FI chain (AR, AP, GL transactional data).
 Export financial data + BCS data to the mainframe system: using InfoSpokes and hierarchy download programs, this chain transfers data from BW cubes and InfoObjects to downstream applications.
Start process type: a mandatory process type in a process chain; we cannot include multiple start process types in a single process chain.
 Immediate: used to trigger on-demand loads.
 Date/time: used to run process chains periodically based on date and time. Most process chains are triggered this way.
 After job: if you want to trigger the process chain once an extraction program job completes (once the required data has been extracted by the program), enter the job name in the After Job tab.
  ECC: 3 tables -> extraction program (job) -> table (DS) -> PC (start variant: After job).
 After event: events are created in SM62 and can be triggered manually with SM64. Use this if you want to trigger process chains based on an event (running a transaction or placing a flat file on the application server is an event).
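The event in SM62 can also be raised from a custom program, for example right after a flat file has been written to the application server. The sketch below is a hedged illustration only: it assumes the standard function module BP_EVENT_RAISE (verify its exact interface in SE37), and the event name ZBW_FILE_ARRIVED is purely hypothetical.

REPORT zraise_bw_event.

* Hedged sketch: raise the background event that a process chain's
* start variant (After event) is waiting for. ZBW_FILE_ARRIVED is an
* assumed name that would have been created beforehand in SM62.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid                = 'ZBW_FILE_ARRIVED'
  EXCEPTIONS
    bad_eventid            = 1
    eventid_does_not_exist = 2
    eventid_missing        = 3
    raise_failed           = 4
    OTHERS                 = 5.
IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
ENDIF.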
 Factory calendar: a special calendar (SCAL), for example for the 1st-Monday and 3rd-Monday loads.
Job status
 Planned: job without any start condition.
 Released: job with a future start condition; waiting for something (future date/time, after job, after event).
 Ready: job stays in this status only for milliseconds (Released -> Ready -> Active).
 Active (yellow): the job is actively running.
 Finished (green): the job completed successfully.
 Cancelled (red): the job failed for some reason.
How to convert released jobs to scheduled or planned status?
 A: Select the process chain, go to the start process type, right-click Display All Jobs, select the job in Released status, then Job menu -> Released -> Scheduled.
How to stop a periodically running chain?
 A: Select the process chain, menu Execution -> Remove from Scheduling.
How to convert scheduled or planned jobs to released status?
 Scheduled = no start conditions.
 Select the job in Scheduled status, choose Release, give start conditions, and save.
Case study:
 Choose a program. Create Job 1 (SM36), insert the program in this job, and schedule the job to run every day at 12:05 AM (batch job).
 Choose a program. Create Job 2 (SM36), insert the program in this job, and schedule it to run after the first job.
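For reference, the same scheduling that SM36 does in the GUI can also be scripted. The sketch below is an assumption-laden illustration only: the report name ZEXTRACT_PREP and job name ZBW_PREP_JOB are hypothetical, and the JOB_OPEN / JOB_CLOSE parameter names should be verified in SE37 for your release.

* Hedged sketch: create and release a daily 12:05 AM background job
DATA: l_jobname  TYPE tbtcjob-jobname VALUE 'ZBW_PREP_JOB',
      l_jobcount TYPE tbtcjob-jobcount.

* Open the job definition
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = l_jobname
  IMPORTING
    jobcount = l_jobcount.

* Insert the extraction program (hypothetical name) as a job step
SUBMIT zextract_prep VIA JOB l_jobname NUMBER l_jobcount AND RETURN.

* Release the job: start today at 00:05 (12:05 AM) and repeat daily
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = l_jobcount
    jobname   = l_jobname
    sdlstrtdt = sy-datum
    sdlstrttm = '000500'
    prddays   = 1.

The second job of the case study would be created the same way but released with an "after job" condition instead of a date/time (JOB_CLOSE also accepts predecessor-job parameters; check its interface).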

Selection criteria options

Definition
In this field you specify variables for periodically scheduled load processes.
If you want to load data from the source system into BI periodically and therefore periodically want to change the entries in the selection fields (for example, the date field), you can select one of the following options for all fields that can be selected:

ABAP Routine (Type 6)
 You can define an ABAP routine which the system processes at runtime. This routine has access to all selection fields and is the last to be processed at runtime.
OLAP Variable (Type 7)
 You can use variables here.

You can enter the following variable types directly in the field. For reasons of compatibility with future releases, however, we recommend that you no longer use these variable types; instead you should perform such selections with suitable routines or (OLAP) variables.

For date fields:
 Get Yesterday (Type 0) - yesterday's date is automatically entered in the date field.
 Get Last Week (Type 1) - the last week is selected.
 Get Last Month (Type 2) - the last month is selected.
 Get Last Quarter (Type 3) - the last quarter is selected.
 Get Last Year (Type 4) - the last year is selected.

For fields that are not date fields:
 Dynamic Selection (Type 5) - you can change fields periodically. To define the dynamic selection, after selecting the variable type select Details for Type and enter the required data.

Transfer global settings from source system

If a new unit or a new currency is created in the source system, the target BW system needs to know about these newly created units and currencies.
Manual option to transfer global settings:
 RSA1 -> select the source system -> right-click -> Transfer Global Settings.
 The system executes a program called RSIMPCUST.
 Transferred global table contents: currencies, units of measure (UOM), fiscal year variants, factory calendars (a set of T tables).
 When we run Transfer Global Settings, data from the set of T tables (e.g. T006, T006A, TCURV) is transferred into the same set of tables in BW.
We can automate this process by using the RSIMPCUST program in the process type called ABAP Program.
We also have Transfer Exchange Rates to transfer exchange rate changes from the source system to the BW system.
The program RSIMPCURR is used to transfer exchange rates from the source system to BW.
We can automate this process by using the RSIMPCURR program in the ABAP Program process type.
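If you prefer a single wrapper report for the ABAP Program process type instead of two separate steps, a minimal sketch could look like the following; the variant name ZDAILY is an assumption, and both standard programs may need selection parameters in your system.

REPORT ztransfer_global_settings.

* Hedged sketch: refresh currencies, units, fiscal year variants and
* factory calendars, then exchange rates, before the actual loads start.
SUBMIT rsimpcust USING SELECTION-SET 'ZDAILY' AND RETURN.  "transfer global settings
SUBMIT rsimpcurr USING SELECTION-SET 'ZDAILY' AND RETURN.  "transfer exchange rates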

Attachment: d01af99d-1bad-2d10-25a3-a342fd36c12f.pdf

Master data process chains

MD (text, attributes, hierarchies) --------> IO, DSO

Master data process chains consist of text loads, attribute loads, and hierarchy loads into InfoObjects.
 Text load: IP (3.x), or IP and DTP (BI 7); process types Execute InfoPackage and Data Transfer Process (Start is mandatory).
 Attribute load: IP, DTP, ACR (attribute change run). To get the latest information about attributes after they have been loaded into the BW system, we need to run the ACR.
 Hierarchy load: IP, Save Hierarchy (to get the latest information into the hierarchy table), ACR (up to BI 7); IP, DTP, Save Hierarchy, ACR (BW 7.3).

Attachment: SAP_BW_-_Hierarchy_Attribute_Changerun_Management.pdf

Transactional data process chains

TD --> ODS and InfoCube
For an ODS in a process chain, the important process types are:
 Start -> Execute IP -> DTP -> Activation of DSO.
For a cube in a process chain:
 Start -> Delete Index -> IP -> DTP -> Create Index -> Roll-up.
Loading optimization techniques:
1. Delete indexes
2. Compression (optimization for the delete and create index steps)
3. Line item dimensions
4. Number range buffering
5. Parallel processing
6. Create small chunks of data load IPs and run them in parallel
Delete overlapping requests
When we run full IPs into InfoCubes, to avoid duplicate data we need to keep the Delete Overlapping Requests step as the subsequent step after the IP.
Prerequisites or settings in the IP: in the Data Targets tab of the IP, select the data target for which you want to delete overlapping requests and click "Automatic loading of similar requests".

Roll-up step: writes data from the cube to its aggregates, request by request.
 If a request is deleted from the InfoCube, the related data is deleted from the aggregate.
 An aggregate is the same as an InfoCube: it consists of F and E fact tables, we can repair its indexes, and we can compress aggregates.
 Initial fill of aggregates: writing data from the InfoCube to the aggregate for the first time. The aggregate maintenance window has this option (Activate and Fill). Reload chains use this process type.
 Roll-up of filled aggregates: writing data from the InfoCube to the aggregate based on the request. Daily chains use this process type.
 New requests in the cube are not available for reporting until we roll the data up into the aggregates on top of it.
Live monitoring
Live monitoring of daily/nightly loads is done using the process chain log view.
From the process chain log view, a particular process type can be monitored using the Display Messages context menu option.
The Display Messages context menu option leads to the job monitoring transaction (SM37) to display job status and job logs.
The same option can lead to the load monitor (RSMO); from there we can get more information about data load failures using the Details tab.
Transactions used in live monitoring:
 Job overview in the source system - SM37; to see more information about a job, we need to look at the job log.
 Job overview in the BW (data warehouse) system - SM37.
 Process overview in the source system - SM50; to see whether work processes are busy or idle. If all work processes are busy, jobs will be delayed.
 Process overview in the BW system - SM50; the same check on the BW side.
Work process types:
 DIA - dialog
 BGD - background
 UPD - update
 SPO - spool
 UP2 - update 2

The ALEREMOTE user ID is responsible for establishing the connection and extracting the data from the source (ECC) into BW.

Short dump analysis: ST22, in both the source system and the BW system.
 Using short dump analysis we can analyze where the code is going into a dump.
TRFC (transactional RFC): SM58. IDoc or TRFC stuck-ups delay the loads; we need to push them manually to resume the loading process.
 Select the delayed TRFC, go to the Edit menu, and click Execute LUW.
System log (Oracle errors): SM21, on the source system side and on the BW system side.
Exporting data from the BW system to downstream systems using InfoSpokes and hierarchy download programs:
 InfoSpokes (PC InfoSpokes) exist in 3.x and BI 7; Open Hub Destinations (DTP) exist only in BI 7.x.
 We use IO (text, attributes), DSO, and InfoCubes as data sources for InfoSpokes and OHDs. Hierarchies of an InfoObject can be downloaded using the SAP-delivered program SAP_HIERARCHY_DOWNLOAD.
 If we want to keep InfoSpokes in process chains, the generated flat files should be on the application server.
 Flat files must be on the application server for the IP to be usable as a process type in a process chain.
Statistics
 Collection of main process chain timings, used to calculate SLAs (> 90%).
Another SLA
 We have an on-demand chain triggered by business users running the ZBCS transaction. The SLA is that this chain must complete within 15 minutes, and the number of fetched records should stay below 100,000 (> 95%).
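As a hedged illustration of how such an on-demand chain could be triggered behind a custom transaction like ZBCS, the sketch below assumes the standard process chain API function module RSPC_API_CHAIN_START and a hypothetical chain name ZBCS_ONDEMAND; verify the function module interface in SE37 before relying on it.

REPORT ztrigger_ondemand_chain.

DATA: l_logid TYPE rspc_logid.

* Hedged sketch: start the on-demand chain (start process type = Immediate)
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZBCS_ONDEMAND'    "hypothetical chain technical name
  IMPORTING
    e_logid = l_logid.

WRITE: / 'Chain started, log ID:', l_logid.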
Decision process type (decision between multiple alternatives):
 A chain should trigger only planning information on the 1st day of every year; on other days it should trigger actuals.
Errors
1. No SID found: when we load transactional data without loading master data for a particular InfoObject, the transactional data load may end up with the error "No SID found" for that InfoObject.
 Break-fix: find the InfoObject causing the No SID Found error, load master data into that InfoObject, then repeat the transactional data load.
 Permanent solution / best practice: load all system-wide master data first, then load transactional data.

2. No currency found / no unit found
 If the source system has a new currency or a new UOM, the load of this information into the BW system may fail with "No currency found", "No unit found", or "No SID found".
 Break-fix: run Transfer Global Settings manually, or run the program RSIMPCUST.
 Permanent solution: include this program in a process chain and run that chain daily before the actual loads.
3. Lowercase, hexadecimal, and special characters: examination, break-fix, permanent solutions

Lowercase letters: a DSO/ODS will accept lowercase letters during loading but will throw an error at activation. An InfoCube will not allow lowercase letters at all; it gives an error while loading data into it.

Break-fix:
 Lowercase letters can be converted to uppercase in the PSA (3.x) or in the error stack (BI 7).
Permanent fix:
 Write a routine in the transfer rules (3.x) or transformation (BI 7) to convert lowercase into uppercase letters (translate the result to upper case), or convert it using the formula builder.
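A minimal sketch of such a routine as a BI 7 transformation field routine is shown below (the 3.x transfer rule version is analogous); the source field name SOURCE_FIELDS-comment is an assumption and stands for whatever field feeds the target characteristic.

* Hedged field routine sketch: pass the value through in upper case
RESULT = SOURCE_FIELDS-comment.
TRANSLATE RESULT TO UPPER CASE.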
Special characters
 A to Z and 0 to 9 are allowable characters in an InfoCube.
 !@#%^&* are not allowed in a BW InfoCube.
 '!SRIRAM' - the load will fail (invalid character in the first position).
 'SRIRAM!' - the load will not fail.
 'SRI!RAM' - the load will not fail.
 The same applies to '#': if '#' is the only character in the data, the load will fail (compare 'SRIRAM#').
Break-fix
 Invalid characters can be converted to valid characters in the PSA (3.x) or in the error stack (BI 7).
Permanent fix
 Write a routine to convert invalid characters into valid characters.
Two types of routines


Replace routine : Replace routine will replace invalid char with valid char
DATA: l_zznote TYPE /bic/oizznote.
l_zznote = TRAN_STRUCTURE-zcomment.
DATA: l_offset TYPE i.
FIND FIRST OCCURRENCE OF '!' IN l_zznote MATCH OFFSET l_offset.
IF l_offset EQ 0.
REPLACE FIRST OCCURRENCE OF '!' IN l_zznote WITH '1'.
ENDIF.
RESULT = l_zznote.
TRANSLATE RESULT TO UPPER CASE.
Cleaning routine: removes invalid information from the data.

DATA: l_zzend_pl TYPE /bic/oizzend_pl.

l_zzend_pl = TRAN_STRUCTURE-end_loc_no.
* Delegate the cleanup to a reusable custom class method
CALL METHOD zcl_bw_custom_methods=>cleanse_string
  CHANGING
    ch_string_to_cleanse = l_zzend_pl.
RESULT = l_zzend_pl.
TRANSLATE RESULT TO UPPER CASE.
CLEANSE_STRING method

METHOD cleanse_string.
  DATA: l_strlen        TYPE i,
        l_offset        TYPE i VALUE 0,
        l_length        TYPE i VALUE 0,
        l_need_to_clean TYPE c.

  l_strlen = strlen( ch_string_to_cleanse ).

* Found invalid character # in position 1
  IF ch_string_to_cleanse+l_offset(1) = '#'.
    IF l_strlen > 1.
*     String is longer than 1
      ADD 1 TO l_offset.
      IF ch_string_to_cleanse+l_offset(1) = ' '.
*       Position 2 is a space: set flag to need to clean
        l_need_to_clean = 'X'.
        IF l_strlen > 2.
*         String is longer than 2
          ADD 1 TO l_offset.
          l_length = l_strlen - l_offset.
          IF ch_string_to_cleanse+l_offset(l_length) CO ' '.
*           Positions 3 through end are all spaces
            l_need_to_clean = 'X'.
          ELSE.
*           Something other than spaces is in positions 3 through end
            l_need_to_clean = ' '.
          ENDIF.
        ENDIF.
      ENDIF.
    ELSE.
*     String length is 1
      l_need_to_clean = 'X'.
    ENDIF.
    IF l_need_to_clean = 'X'.
*     Either there is a single # or a # followed by space(s) and nothing else
      CLEAR ch_string_to_cleanse.
    ENDIF.
  ELSEIF ch_string_to_cleanse+l_offset(1) = '!'.
*   Found invalid character ! in position 1: correct position 1
    ch_string_to_cleanse+l_offset(1) = ' '.
  ENDIF.

  l_offset = 0.
* Process the string until the end
  DO l_strlen TIMES.
*   Determine the length of the portion we haven't verified yet
    l_length = l_strlen - l_offset.
*   Check the portion we haven't verified yet against the class constant
*   c_valid_cleanse_chars (the list of allowed characters)
    IF ch_string_to_cleanse+l_offset(l_length) CN c_valid_cleanse_chars.
*     We found bad data: correct it and set up to read the rest of the string
      l_offset = l_offset + sy-fdpos.
      ch_string_to_cleanse+l_offset(1) = ' '.
    ELSE.
*     No more bad data found
      EXIT.
    ENDIF.
  ENDDO.
ENDMETHOD.

4. Deadlock / locking errors: how to rectify, permanent solutions

Locking errors
 One user has opened an InfoPackage that is part of a process chain; if another user runs the same process chain, the load will fail because of a locking issue.
 Solution: go to SM12 and delete the lock entries, then repeat the process chain. Alternatively, ask the first user to come out of the InfoPackage so that the second user can repeat the process chain from the InfoPackage.

PSA deletion - if we do not want the data of older requests in the PSA. Deleting it frees DB space within the system, smoothens system performance (including loading), and helps avoid table space errors during loading.
 Manual approach:
  BI 7: DataSource -> Manage -> select requests and delete.
  3.x: RSA1 -> PSA -> select the InfoSource -> expand the PSA -> right-click the request -> Delete.
 To automate the PSA deletion process, include the Delete PSA Requests process type in a process chain.
 If a load into the PSA (via an IP in a process chain) and a PSA deletion (Delete PSA Request step) run at the same time, one of the jobs will be terminated - usually the load job - because of a locking issue on the PSA table.
 We need to wait until one job completes (check SM37), then repeat the chain.

Loading locking issue with an InfoCube
 If we are manually deleting requests from an InfoCube and someone triggers a process chain to load data into the same InfoCube, the process chain will terminate at the Execute IP step because of a locking issue.
 Wait until the Delete Request step completes, then load the data using the process chain.
 Delete Index job - Load job - Create Index job: while Delete Index, Create Index, or Repair Index jobs are running for an InfoCube, we should not load data into that InfoCube; the loading process may end up with a locking issue.
 Permanent solution: avoid job conflicts by running process chains and jobs in the right time frames.
5. Unable to process data packets (for huge data loads / reloads)
 Data packets from the PSA are not updated into the data target.
 We can see these data packets in yellow status in the Details tab of the monitor.
 The source system and data warehouse system jobs will be in Finished status, but the load monitoring status stays yellow, and we cannot see the loaded record count increasing in the monitoring screen.
 Unprocessed data packets can be processed in a dialog process or in a background process.
 Dialog process: in one session we can process only one data packet - right-click the data packet and choose Manual Update.
 Background process: we need to set the load status to red and select the radio button "Packet should be processed in the background".
 Whenever we process a data packet with this setting, we can see a background job called BI_BOOK* for each data packet.
6. TRFC stuck up and manual approaches to resolve (IDoc delay)
 The load will be in yellow status, with no active data loading visible. A blue arrow button can be observed in the Details tab of the monitor. If we look at the TRFCs in the source system (by clicking the blue icon or via the menu option), we can see TRFCs that have been stuck or unprocessed for a long time. Select that TRFC, go to the Edit menu, and click Execute LUW (F6).
 Permanent solution: SAP note 1573359 should be implemented.
7. Error in source system (RSDBTIME) - time zone for kernel and application servers
 Note 1576331: "Delta loads fail in the load monitor (RSMO) for timestamp-based DataSources with 'Errors in source system', message no. RSM340."
 We need to run the program RSDBTIME in both systems; we then get a message indicating a time inconsistency.
 We need to contact Basis to correct the server times (kernel and application servers).
 Example from our project: the four servers supporting EP1 and BP1 - ZAPERP01 (SAP instance EP1), ZAPERP02 (database EP1), ZAPBIW01 (SAP instance BP1), and ZAPBIW02 (database BP1) - were checked. They had the same time zone defined, but there was about a one-minute time difference.
 ZAPERP01 had the correct time; the other three servers had lost NTP synchronization and their time was behind the rest of the AYE servers.
 The NTP service was restarted on ZAPERP02, ZAPBIW01, and ZAPBIW02; the time on these servers is now within one second of the time on ZAPERP01.
8. Time stamp and PSA errors
 Whenever we change a DataSource and do not replicate it and reactivate the transfer rules afterwards, loads may end up with time stamp or PSA errors.
 Display Messages shows time zone differences for the ERP and BW systems.
 Whenever there is a support pack upgrade in the source system, DataSources might change; if we then run loads from BW without replication and activation of the transfer rules, loads may end up with PSA or time stamp errors.
 Solution: whenever we get time stamp errors, replicate the DataSources, activate them, and activate the transfer rules (RS_TRANSTRU_ACTIVATE_ALL) or transformations.
9. Table space problems: ST22, SM58 (source) - with table space problems, TRFCs will end up in errors.
 We need more DB space - contact Basis to increase the table space; Basis will add table space if it is really required.
 By deleting data from the PSA and change log tables, compressing InfoCubes, and deleting unwanted aggregates, we can reclaim DB space.

Alerts
 System - process chain errors - messages to pagers, email IDs, or phones.
 To forward messages to the emails of a user group, Basis needs to create a periodic program via the SCOT transaction and take care of the exchange server configuration.
 We need to create a distribution list for the recipients.
 A distribution list is a collection of employee email IDs and pager numbers; we can use this distribution list in the process chain message maintenance recipient list. It is reusable.
 Distribution lists can be created and edited from the SAP Business Workplace.
 Watchdog program: if there are any delays, this program notifies people through emails and pagers.

Monitor program
 RSPCPROCESSLOG is the table that gives information about the logs of a process chain.
 For each process type in a process chain there is an entry in this table. If any process type within a process chain fails, a message is circulated using the distribution list.
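A hedged sketch of such a watchdog-style check on RSPCPROCESSLOG is shown below; the field names (LOG_ID, TYPE, VARIANTE, INSTANCE, STATE) are assumed from the table definition and should be verified in SE11, and the real program would push the output to the distribution list instead of writing a list.

REPORT zbw_chain_watchdog.

* Hedged sketch: list process chain steps that ended in error (R) or
* were cancelled (X) so they can be reported to the distribution list.
* A real check would also restrict the selection to today's log IDs.
TYPES: BEGIN OF ty_log,
         log_id   TYPE rspcprocesslog-log_id,
         type     TYPE rspcprocesslog-type,
         variante TYPE rspcprocesslog-variante,
         state    TYPE rspcprocesslog-state,
       END OF ty_log.

DATA: lt_failed TYPE STANDARD TABLE OF ty_log,
      ls_failed TYPE ty_log.

SELECT log_id type variante state
  FROM rspcprocesslog
  INTO TABLE lt_failed
  WHERE state IN ('R', 'X').

LOOP AT lt_failed INTO ls_failed.
  WRITE: / ls_failed-log_id, ls_failed-type,
           ls_failed-variante, ls_failed-state.
ENDLOOP.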

Attachment: Monitor and Watchdog programs_Draft.docx

RSPC_PROCESS_FINISH - a program used to change entries in the RSPCPROCESSLOG table.


Q: We have a meta chain containing two local chains. The first local chain ended up in errors; after rectification it is unable to trigger the next local chain. How do we start the second (next) process chain?
A: Using the RSPC_PROCESS_FINISH program we can change the R status to G (or F) status; the chain will then resume.

Overview:
The following procedure can be used if the process chain does not offer a Repeat on a failed process, or if you do not want or need to do a repeat but you want the process chain to continue.

Steps:
1) Go to SE16 and bring up table RSPCPROCESSLOG. Get the log ID from the log run of the chain (left side of the screen in the chain log), paste it into the Log ID field on the table selection screen, and execute.
2) Find the failed line in RSPCPROCESSLOG, i.e. the line that matches the failed process in the chain and has a status of R, X, or blank.
3) Open another session. Run RSPC_PROCESS_FINISH using the z-tcode ZRSPC_PROCESS_FINISH. Enter the log ID, type, variant, and instance from the RSPCPROCESSLOG record of the process that failed. Put a G in the State field. Press Execute; you will get no messages if it was successful. If it was not successful, recheck the values you entered in the program selection options.
This should set the process in the chain to green and allow it to continue.
4) Refresh the chain to see that it is proceeding as expected.
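For reference, a hedged sketch of what the report behind the z-tcode ZRSPC_PROCESS_FINISH might look like is shown below. The importing parameter names of the function module RSPC_PROCESS_FINISH are assumed to mirror the RSPCPROCESSLOG fields; check the actual signature in SE37 before use.

REPORT zrspc_process_finish.

* Selection screen mirrors the values read from RSPCPROCESSLOG in step 3
PARAMETERS: p_logid TYPE rspcprocesslog-log_id,
            p_type  TYPE rspcprocesslog-type,
            p_var   TYPE rspcprocesslog-variante,
            p_inst  TYPE rspcprocesslog-instance,
            p_state TYPE rspcprocesslog-state DEFAULT 'G'.

* Hedged call: set the failed process to the given state (G = green)
CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    i_logid    = p_logid
    i_type     = p_type
    i_variant  = p_var
    i_instance = p_inst
    i_state    = p_state.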

Tickets
If the client faces any issue in the production environment, they call the help desk and explain the problem to the help desk executive. The help desk executive generates a ticket (incident) with the customer complaint using the ticketing tool and assigns the ticket to the respective team.
Service center or help desk support is level-one support.
The ticket will be available in the respective queue (BW, FI). This ticket status is known as Open.
The BW support consultant who is on call needs to pick up the ticket, contact the user who raised it, and record the first customer contact time; the ticket then turns into WIP (work in progress).
In most projects the above step should be completed within 30 minutes (there is an SLA for this as well): the ticket response time for all ticket types should be 30 minutes.
WIP status: work in progress - the SLA clock starts ticking from this point onwards.

You need to work actively on the ticket, contact the user for more information, and come up with the right resolution; it might be a break-fix or a new CR.
If we can break-fix the error directly in the production environment, we ask the user to check the results; upon user acceptance we can close the ticket, providing a root cause analysis.
Once we close the ticket, its status becomes Closed.
Status of tickets
 OPEN
 WIP
 CLOSED
 SUSPENDED - whenever we are waiting for an answer from the user, we can keep the ticket in Suspend status for the time being; the SLA clock is stopped.
 PENDING TO CLOSE - if the solution is not a break-fix and the fix has to come from development, then until the development reaches the production system via a transport request we keep the ticket in Pending-to-Close status.
Ticketing tool: DW ticketing, Peregrine.
SLAs by severity
 Sev 1 should be completed in 2 hours - in the production environment, if any batch job fails we receive a Sev 1 ticket.
 Sev 2 should be completed in 4 hours - if file generation for other systems did not happen in time, the other system teams raise a Sev 2 ticket; if any user group complains that a report is not executable or the latest information is not appearing in a report, they raise a Sev 2 ticket.
 Sev 3 should be completed in 16 hours - if a user requires an on-demand load, they raise a Sev 3 ticket; access-related or data discrepancy problems are also raised as this type of ticket.
 Sev 4 - no timeline - for example, a new user looking for a BW connection.

Service level 0 - help desk
Service level 1 - responsible for tickets and data loading
Service level 2 - responsible for enhancements
Service level 3 - responsible for new projects
Ticket example 1
 The user is unable to see the latest information in the queries.
 Reason: a developer created new aggregates on an InfoCube and missed adding the roll-up step to the corresponding process chain. After the initial fill of the aggregates in the production system, the delta requests were not written into the aggregates, so the requests ended up without the reporting icon.
 The user then complained that the latest information is not reportable.
 Break-fix: select the latest request in the cube's Manage screen and perform the roll-up step manually; the reporting icon will then be available for this request.
 Permanent solution: in the development box, include the roll-up step after load processing, then transport it to the production environment.
Ticket example 2
 The user complained that a few WBS hierarchies are missing in the reports.
 Analysis: we have 175,000 records in the RSMO monitor and the same record count in RSA3. We observed that after the WBS element hierarchy load there is no Save Hierarchy step, so the 175,000 data records were not updated in the hierarchy table.
 Break-fix: run the IP manually for the WBS hierarchy; the data will be updated in the hierarchy table automatically.
 Permanent solution: insert a Save Hierarchy step in the process chain after the load step.
SAP Notes:
 If SAP made some mistakes while developing the product, SAP releases notes to rectify its own errors when a customer reports a product error.
How to search for a note:
 Search in service.sap.com with a search term based on your error:
 service.sap.com -> Support Portal -> provide S-user ID information -> Help and Support -> search for SAP note.

SAP note types:
1. Corrective notes: SAP provides corrections via this type of note; the customer needs to implement the note in their systems to resolve the problem. Using the SNOTE transaction we can download the SAP note and implement it. When we implement an SAP note, the system asks for a transport request to collect the changes from the note implementation, so an SAP note is transportable: we apply it in the development system and transport it to QA and production.

Attachment: OSS Note 1573359.pdf

How to implement corrective notes:
 BD1 -> transaction SNOTE -> Goto -> SAP Note Browser -> enter note 1573359 -> Execute -> select the note and choose Implement SAP Note.

Example:
 When TRFCs are stuck up, we clear them manually using transaction SM58 (F6, Execute LUW).
 That is the manual approach to clear stuck TRFCs; SAP provided a permanent solution for TRFC stuck-ups with note 1573359.
 This note delivers a new program called RSTRFCCK to the system. The program clears stuck TRFCs, so we run it as a batch job every five minutes in ECC and BW.

2. Procedural notes: SAP provides procedures or steps to be followed to resolve the error.
 Note 1576331 is an example of a procedural note; it describes how to run the RSDBTIME program to check inconsistencies between the application server and kernel time zones.

3. Informative notes: we do not apply these notes; they give information about best practices to be followed.
 Example: note 1073268 gives information about how best to implement 0IC_C03.

[11]

Support pack implementation and the BW/BI consultant role

SPS (support package stack) - patches - a collection of SAP notes (patch 1 ... 25). Support packs also contain new functionality (new DataSources, DataSource changes, changes to extractors, new cubes).
Customers try to implement the latest patches in their systems at least once a year.
Patches (SPs) are applied by Basis consultants. SPs are not transportable; Basis needs to apply them system by system.
Basis first applies them to the development systems, where functional and technical testing is carried out. Once testing passes, Basis applies the same SPs to the QA systems and technical/functional testing is carried out again. Once everything is fine, Basis applies the SPs in the production environment.
Role of the BW consultant while Basis applies support packs (a downtime activity):
Outage plan or downtime plan (BW role - clearing queues)
Queues in the ECC system
 The BW consultant's role is to make sure all entries in the queues below are 0.
 If users are posting (creating) data, entries come and sit in the areas stated below; so we need to lock the users so they cannot log into the system to post data. Security consultants lock the system-wide business users.

 SMQ1 - whenever we run a delta IP from the BW side; V3 job
 RSA7 - whenever we run a delta IP from the BW side (PC)
 LBWQ - V3 job (job control): LBWQ -> RSA7 via the RMBWV3** program (** = application area)
 SM13 - V3 job (job control): SM13 -> RSA7 via the RMBWV3** program (** = application area)
 Setup tables - deletion via LBWG
Plan for this system downtime activity.

 Draft communication to all users
 Send out EP1 walkthrough plan
 Schedule cutover walkthrough
 BSI 9 technical upgrade
 Notify AutoSys teams (Mike Swejk/Joe Cronin) of the upcoming outage
 Finalize EP1 unlock list
 Provide list of ED1K transports
 Cutover walkthrough meeting
 Make sure we have a clean backup
 Send communication to all outage participants
 Send communication to the help desk to be sent to all employees
Pre-shutdown
 Create group chat
 QA approve ED1K transports
 Initial clearing of BI queues in ERP
 Isolate systems
 Place AutoSys jobs on hold/ice - everything on ice
 Verify all jobs are complete
 Lock all users except the unlock list on the attached tab
 Remove any active users from EP1
 Final BI queues in ERP
 Email confirmation of final BI queue clearing
 Final clearing of queues (SMQ1, SMQ2, SM58) and updates - final check
Shutdown
 Notify SNOC to stop monitoring systems
 System shutdown
Implementation
 Apply HRSP 60/61 + apply DMIS + technical verification
 Apply ED1K 930348 SPAU adjustments
 Apply TaxFactory cyclic S
 Apply TUBS 209-214
 Apply L6DK* transports
 Import remaining ED1K transports using TMS
 Implementation complete
System restart
 Unlock all users locked in step PRE-003 above
 System restart
 Release AutoSys jobs on hold/ice - everything on ice
 End group chat

Development process
Waterfall, SDLC, and ASAP are methods used to develop objects or to implement SAP projects.

Development phases:
1. Requirements phase: collect requirements from the user by conducting interviews and meetings. Prepare the requirements document and get sign-off from the user.
2. Design phase: based on the naming conventions, prepare functional and technical specs (design documents). Get approval for these documents from the senior consultant.
3. Construction phase: construct objects in the development system according to the design documentation. Capture the objects in transport requests. Prepare unit test cases, conduct unit testing, and capture the results. Get approvals (WPR) on the unit test cases, results, and transports from senior consultants. Release the objects to the QA system. Prepare the implementation plan and get WPR approval from the senior consultant.
4. Testing phase: after successful import of the objects into the testing system, prepare system test cases and conduct system testing. Ask for a WPR on these. Once we get approval, ask the user to do UAT (user acceptance testing). Once UAT has passed, the CR is ready to move to production.
5. Implementation or go-live phase: ask Basis or the migration team to import the objects (transport requests) into the production system according to the implementation plan. Perform pre- and post-implementation steps if necessary.
6. Support: support the delivered objects (loading, reporting).
12.0 Development Phases
 12.1 Requirements Phase - Analyst
  12.1.1 Create the Requirements
  12.1.2 Revisit the HLE and Complete the Detailed Estimate
  12.1.3 WPR for Requirements and Estimate
  12.1.4 Requestor Approval of Requirements
  12.1.5 Q-Gate 1 Approvals
 12.2 Design Phase - Designer
  12.2.1 Create the Design
  12.2.2 WPR for Design
  12.2.3 Q-Gate 2 Approvals
 12.3 Construction Phase - Developer and Tester
  12.3.1 Development - Developer
  12.3.2 Unit Test Plans, Cases, and Results - Developer
  12.3.3 WPR for Construction and Unit Test - Developer
  12.3.4 System Test Strategy, Plan and Cases - Tester
  12.3.5 WPR for System Test Strategy, Plan and Cases - Tester
  12.3.6 Q-Gate 3 Approvals - Developer
 12.4 Testing Phase - Developer and Tester
  12.4.1 Complete Implementation Plan, Release of Transports and System Test Preparations - Developer
  12.4.2 WPR for Implementation Plan - Developer
  12.4.3 Conduct System Testing - Tester
  12.4.4 WPR for System Test Results - Tester
  12.4.5 Q-Gate 4.1 Approvals - Tester
  12.4.6 Request UAT be Performed by the Requestor - Tester
  12.4.7 Q-Gate 4.2 Approvals - Tester
 12.5 Implementation Phase - Responsible Person and Implementer
  12.5.1 Preparation for Go/No-Go Decision Meeting - Responsible Person
  12.5.2 Preparation for Transport to Production - Implementer
  12.5.3 Implement - Implementer
  12.5.4 Post Production - Implementer

[12]

Life cycle [SDLC, accelerated or waterfall]

Releases: M - monthly, Q - quarterly, special releases for projects, immediate releases to break-fix production issues, year-end releases. Release management takes care of these releases.
WPR - work product review, a quality process
 Whenever we create a work product, another person needs to review and approve (endorse) it.
Quality gates: Q1, Q2, Q3, Q4, Q5
 Time frames or deadlines to complete each phase; the Q1 gate is the time frame to complete requirements gathering.
Sign-offs - user approvals and WPR approvals; we receive these sign-offs via email.
HLE - high-level estimation
 Based on the initial requirements we give an estimate to complete the objects and documents. Hours are calculated here for the different phases.
Documents we create in the development process:
 Phase 0 (prior to requirements gathering): HLE document
 Phase 1 (requirements phase): RD (requirements document)
 Phase 2 (design phase): design documents (functional and technical specs) and detailed estimation documents
 Phase 3 (construction phase): unit test cases and results documents
 Phase 4 (testing phase): system test cases and results documents, UAT
 Phase 5 (implementation): implementation document
 Phase 6 (support): production synopsis document - this document provides the information needed to support the delivered objects in the production system.

Object 1 - query enhancement
Object 2 - flat file automation
Object 3 - aggregates

Reporting performance techniques:
 Indexes
 Compression
 OLAP cache
 BIA
 Aggregates
 Partitioning
 Free characteristic reporting
 Pre-calculated value sets
Aggregates
 Mini-cubes that contain data physically; flat aggregates. The model is just like an InfoCube's.
 An aggregate can be created in the development system; after import into the QA and production systems we need to activate and fill it in each individual system.
 So the aggregate's technical name will not be the same in all systems.
 Aggregates are transportable, but after transport the aggregate needs to be activated in the target system (just like process chains, InfoSpokes, and DTPs with direct access).

How to find out whether an aggregate is being used by a query or not?
 RSRT -> Execute + Debug -> Display Found Aggregates.

Threshold value (0-99): Delta -> Reconstruct Aggregates (SPRO)
 Improves ACR step performance.
 Determines, during the hierarchy/attribute realignment run, the percentage of change at which the delta process is switched to a reconstruction.

Use
Aggregates are adjusted to the new attributes and hierarchies during hierarchy/attribute realignment runs. There are various adjustment strategies: the aggregate can be completely reconstructed, or the old records can be updated negatively and the new records positively (delta process). The procedure used depends, among other things, on how much has actually changed.
Enter a number between 0 and 99. 0 means that the aggregate is always reconstructed. Change the parameter until the system is running at its fastest.

Designing the star schema
 - Small dimensions.
 - Few dimensions (less important than small dimensions).
 - Only as many details as necessary.
 - Hierarchies only if necessary.
 - Time-dependent structures only if necessary.
 - Avoid MIN/MAX aggregation for key figures in huge InfoCubes.
Small dimensions means the entries in a dimension table should be less than 15% of the fact table entries.
Rule of thumb:
 dimension table entries / fact table entries * 100 <= 15%
 e.g. 200 / 2000 * 100 = 10%
Whenever we develop our own InfoCubes we need to define dimensions. While defining dimensions we need to check how similar InfoObjects are placed in the dimensions of SAP-delivered InfoCubes (use SAP-delivered InfoCubes as templates while designing custom InfoCubes).
For a custom application (no SAP standard cubes), we need to sit with the SME (subject matter expert) to identify the strong entities in the application area, finalize the weak entities, and then build an entity relationship (ER) model for the entities (bubble model).
For each strong entity we create a dimension.
Once the new cube is created and data is loaded into it, we run the program SAP_INFOCUBE_DESIGNS to see the percentage of entries in each dimension table compared to the fact table.
We can also run the RSRV tests "Database Information about InfoProvider Tables" and "Entries Not Used in the Dimension of an InfoCube".
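As a hedged illustration of the rule of thumb above, the sketch below counts the rows of one dimension table against the fact table for a hypothetical custom cube ZSALES (tables /BIC/DZSALES1 and /BIC/FZSALES follow the usual naming pattern for custom cubes); SAP_INFOCUBE_DESIGNS performs the same calculation for every cube at once.

* Hedged sketch of the 15% dimension-size check for one dimension table
DATA: l_dimtab  TYPE tabname VALUE '/BIC/DZSALES1',  "hypothetical dimension table
      l_facttab TYPE tabname VALUE '/BIC/FZSALES',   "hypothetical F fact table
      l_dim     TYPE i,
      l_fact    TYPE i,
      l_pct     TYPE p DECIMALS 2.

SELECT COUNT(*) FROM (l_dimtab)  INTO l_dim.
SELECT COUNT(*) FROM (l_facttab) INTO l_fact.

IF l_fact > 0.
  l_pct = l_dim * 100 / l_fact.
  WRITE: / 'Dimension/fact ratio:', l_pct, '% (rule of thumb: <= 15%)'.
ENDIF.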
Object 5 - DataSource enhancement - CMOD
DataSource types and their customer exits:
 1. Transactional DataSource - EXIT_SAPLRSAP_001
 2. Attributes (ATTR) - EXIT_SAPLRSAP_002
 3. Texts (TEXT) - EXIT_SAPLRSAP_003
 4. Hierarchies (HIER) - EXIT_SAPLRSAP_004
Steps:
 1. Enhance the extract structure with an append structure containing the required data fields.
 2. Maintain the DataSource - select "Field only known in customer exit".
 3. CMOD: create/display the project, select the components, select the right customer exit, then select the include program and write the code.
Requirement:
 The user would like to see withholding tax and tax type information in the report.
 First check whether the required InfoObjects are already in the InfoCube; if so, modify only the query (query enhancement).
 Then check whether the required InfoObjects exist on the BW side, for the possibility of a lookup or of turning on attributes.
WHEN '0VENDOR_ATTR'.                      "DataSource
* Work area typed like the extract structure of 0VENDOR_ATTR
  DATA: wa_biw_lfa1_s LIKE biw_lfa1_s.
* Structure for the looked-up withholding tax fields from table LFBW
  DATA: BEGIN OF it_vendor,
          zzwitht     LIKE lfbw-witht,      "custom field: withholding tax type
          zzwt_withcd LIKE lfbw-wt_withcd,  "custom field: withholding tax code
        END OF it_vendor.

  LOOP AT i_t_data INTO wa_biw_lfa1_s.
    CLEAR it_vendor.
    SELECT SINGLE witht                     "withholding tax type
                  wt_withcd                 "withholding tax code
      FROM lfbw
      INTO it_vendor
      WHERE lifnr = wa_biw_lfa1_s-lifnr.
    IF sy-subrc = 0.
      MOVE-CORRESPONDING it_vendor TO wa_biw_lfa1_s.
    ENDIF.
    MODIFY i_t_data FROM wa_biw_lfa1_s INDEX sy-tabix.
  ENDLOOP.

Logistics extraction
It is part of business content extraction and is used to extract data from the logistics application areas of ECC.

Data flow diagram for logistics (diagram not reproduced here)

Application areas in logistics:
 02 - Purchasing ***
 03 - Inventory *****
 04 - Shop floor control
 05 - Quality management ***
 06 - Invoice verification ***
 08 - Shipment
 11 - SD Sales BW (sales orders) ***
 12 - LE Shipping BW (sales deliveries) ***
 13 - SD Billing ***
 17 - Plant Maintenance ***
 18 - Customer Service ***
 40 - Retailing

Important transactions in logistics:
 1. SE11, SE16, SE16N - to see data in application tables and setup tables
 2. RSA5 - business content DataSource activation
 3. RSA6 - activated DataSources (post-processing of DataSources)
 4. RSA7 - delta queue
 5. LBWE - LO extraction customizing cockpit
 6. LBWG - to delete the data in the setup tables by application area
 7. LBWQ - extraction queue
 8. LBWF - logs for logistics extract structures
 9. SM13 - to view update tables
 10. SMQ1 - outbound queue
 11. SE38, SE80, SA38 - to run or execute programs
 12. SBIW - Display IMG (implementation guide); all logistics settings can be accessed from here
 13. OLI*BW - to fill the setup tables
 14. NPRT - to see logs for the setup of statistical data (filling the setup tables)

Logistics Extraction Structures Customizing Cockpit

The aim is to manage the extract structures used to transfer logistics movement data from the OLTP system into BW.
The extract structures are built from the communication structures of the Logistics Information System (LIS).
The cockpit contains the following functions, used in the following sequence:
1. Maintenance of extract structures
Each extract structure can be maintained by SAP or by you. The extract structures are provided from the assigned communication structures; you can only use selected fields from the communication structures. SAP already delivers extract structures, and you can enhance these. After the extract structure is created, it is automatically generated. During this process the missing fields (related units and features) are completed. The extract structure is created hierarchically in accordance with the communication structures; each communication structure leads to the generation of a sub-structure of the actual extract structure.
2. Maintaining DataSources
At this point you call up the general maintenance of DataSources. Here you can set which fields are selectable and which fields are hidden (or inverted).
3. Activating the update
By setting the update to active, data is written into the extract structures, both online and during the filling of the setup tables (see below).
4. Job control
Depending on the update mode you have set (see the next point), a job may have to be scheduled with which the updated data is transferred in the background into the central delta management.
5. Update mode
Here you can set how the incurred data is updated during delta posting:
a) Serialized V3 update (not in ECC)
This was the normal update method: document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence is not the same as the order in which the data was created in all scenarios.
b) Direct delta
In this method, extraction data is transferred directly from document postings into the BW delta queue. The transfer sequence is the same as the order in which the data was created.
c) Queued delta
In this method, extraction data from document postings is collected in an extraction queue, from which a periodic collective run transfers the data into the BW delta queue. The transfer sequence is the same as the order in which the data was created.
d) Unserialized V3 update
This method is almost identical to the serialized update method. The only difference is that the order of the document data in the BW delta queue does not have to be the same as the order in which it was posted. We only recommend this method when the order in which the data is transferred is not important, as a consequence of the data target design in BW.

Transfer Business Content DataSources


In this step, you transfer and activate the DataSources delivered by SAP as well as partner DataSources delivered in your own
namespace where applicable. After this step, for all connected BW systems, you can extract and transfer data from DataSources
that have been activated and replicated into the current BW.
Activities
1. In the Install DataSources from Business Content screen, the DataSources for the application components assigned
to you are displayed in an overview tree.
Under an application component, the RECONCILIATION node indicates that DataSources for data reconciliation with one
or more Content DataSources are assigned to the application component. You use these reconciliation DataSources, which
are delivered with Business Content, to check that data loaded from other DataSources is correct.
If DataSources that are flagged as DataSources for data reconciliation exist for the application component, this is
displayed by the RECONCILIATION subnode. This node is not displayed if reconciliation DataSources do not exist for an
application component.
2. In the application component hierarchy, select the node for which you want to transfer the DataSources to the active
version. Do this by placing the cursor over the node and choosing Select Subtree.
The DataSource and subtrees positioned beneath it are selected.
3. Choose Select Delta.
DataSources are highlighted in yellow when the check shows differences between their active and delivered versions (for
example, due to extractor changes).
4. To analyze the differences between the active and delivered versions of a particular DataSource, select the
DataSource, and choose Version Comparison. The application log that appears contains more detailed information
about the two versions.
5. To transfer DataSources from the delivered version into the active version, select the DataSources you want to transfer
in the overview tree using the Select Subtree pushbutton, and choose Activate DataSources.
The error log appears if an error occurs.
You can also call up the log regardless of whether your transfer into the active version was successful or not, under
Display Log.
With a metadata upload, when replicating DataSources in BW, the active version of the DataSource is recognized by the
BW.

Edit DataSources and Application Component Hierarchy


To adjust existing DataSources to your requirements, you edit them in this step and transport them from a test system into a
productive system.
You can also use this procedure to post-process the application component hierarchy.
Activities
DataSource

Transporting DataSources

Select the DataSources that you want to transport from the test system into the productive system, and choose
Transport. Specify a development class and a transport request, so that the DataSources can be transported.
Maintaining DataSources
To edit a DataSource, select it, and choose Maintain DataSource. The following editing options are available:
o Selection
When scheduling a data request in the BW Scheduler, you can enter selection conditions for the data transfer.
You can, for example, determine that data requests are applied only to data from the last month.

If you set the Selection indicator for a field in the extract structure, the data for this field is transferred according
to the selection conditions determined in the scheduler.
Hide Field

To exclude a field in the extract structure from the data transfer, you must set this indicator. The field is used in
the BW to determine the transfer rules, and can no longer be used to generate the transfer structure.
Cancellation Field

Reverse postings are possible for customer-defined key figures. Cancellations are therefore only active with
certain transaction DataSources: these are DataSources that have a field designated as a cancellation field, for
example, the Update Mode field in the DataSource 0FI_AP_3. If this field has a value, the data records are
interpreted as reversal records in BW.
If you want to carry out a cancellation posting for a customer-defined field (key figure), set the Cancel indicator.
The value of the key figure is transferred inverted (multiplied by -1) into the BW system.
Field Known Only in Exit

You can improve the quality of data by adding fields in append structures to the extract structure of a
DataSource.
For fields in an append structure, the indicator Field Known Only in Exit is set, meaning that, by default, these
fields are not passed to the field list and the selection table in the extractor.
Remove the Field Known Only in Exit indicator if you want the Service API to pass the field in the append
structure to the extractor, along with the fields from the delivered extract structures in the field list and in the
selection table.
Enhancing Extract Structures

If you want to transfer additional information for an existing DataSource from a source system into the BW, you have to
enhance the extract structure of the DataSource with additional fields.
To do this, you create an append structure for the extract structure.
a) Use the Enhance Extract Structure pushbutton to reach the field maintenance for the append structure. The
name of the append structure is generated from the extract structure name in the customer namespace.
b) Enter the fields you want to append and the data elements based on them into the field list. All functions
available for field maintenance for tables and structures are available here.
c) Save and activate the append.
For more information on the append structure, see the ABAP Dictionary documentation for maintaining tables.
Function Enhancements

To fill the fields of the append structure with data, create a customer-specific function module. Information on enhancing
the SAP standard with customer-specific function modules can be found in the R/3 library under Basis -> ABAP
Workbench -> Enhancements to the SAP Standard -> R/3 Enhancement Concept or under Enhancing
DataSources.
Testing Extractions
If you want to test the extraction in the source system independent of a BW system, choose DataSource -> Test
Extraction.

Application Component Hierarchy

To create a node on the same level or under it, place the cursor on this node and choose Object -> Create Node. You can
also create subordinate nodes by choosing Object -> Create Children.

To rename a node, to expand or compress it, place the cursor on the node and click the corresponding pushbutton.

To reassign a node or a subtree, select the node to be reassigned ( by positioning the cursor over it and clicking the
Select Subtree pushbutton), position the cursor over the node to which the selected node is to be assigned, and click on
the Reassign pushbutton.

If you select a node with the cursor and choose Set Section, the system displays this node with its subnodes. You can
use the respective links in the line above the subtree to jump to subordinate nodes for this subtree.

When you select a node with the cursor and choose Position, the node is displayed in the first line of the view.

All DataSources for which no valid (assigned) application component could be found appear under the node
NODESNOTCOLLECTED. The node and its sub-nodes are only constructed during the transaction runtime, and updated
when saving in the display.

NODESNOTCONNECTED is not stored persistently in the database. For this reason, it is not transferred into other
systems when the application component hierarchy is transferred.
Note that hierarchy nodes created under the NODESNOTCONNECTED node are lost when you save. After saving, only
those nodes under NODESNOTCONNECTED are displayed that were moved with DataSources under these nodes.
Example
A DataSource lies under an application component X. You transfer a new application component hierarchy from Business
Content, which does not contain the application component X. The system then automatically assigns this DataSource
under the component NODESNOTCONNECTED in this application component.
Special DataSources can be delivered with Business Content that are not used to extract data but to reconcile data with
one or more Content DataSources. With these reconciliation DataSources you can check that the data loaded from other
DataSources is correct.
The RECONCILIATION node of the application component indicates that reconciliation DataSources of this type are
assigned to the application component. If DataSources exist for an application component that can be flagged as
DataSources for reconciliation, this is displayed in the corresponding RECONCILIATION lower-level node. If no
DataSources exist for an application component that can be used for reconciliation, this node is not displayed.

Note that changes made to the application component hierarchy are only valid until the next transfer from Business Content takes
place.

Delta Queue RSA7

Activities
The status symbol indicates whether the update in a delta queue is activated for a DataSource. If the status symbol is green, the
delta queue is activated, meaning it is filled with data records when an update process or a data request from BW is running. A
prerequisite for the delta update is the successful completion of the delta process initialization in the BW scheduler.

Displaying data records


1. To check whether a delta queue contains data, and how many records, select the delta queue and choose Display data
records.
2. You get to a dialog box in which you specify how you want the data records to be displayed.
a) You can choose the data packets containing the data records that you want to see.
b) You can choose to display particular data records in a data packet.
c) You can use a simulation of the extraction parameters to choose how you want the data records displayed.
3. Choose Execute to display the data records.

Displaying current status of the delta-relevant field


For DataSources that support generic delta, you can display the current value of the delta-relevant field in the delta queue. In the
Status column, choose Detail to do this. The value displayed contains the largest value of the delta-relevant field for the last
extraction. It acts as the lower limit for the next extraction.

Refresh
If you choose Refresh,
1. recently activated delta queues are displayed,
2. data records recently written to the delta queue are taken into account, and
3. data records that were deleted when reading the data records are no longer displayed.

Deleting queue data


If you want to delete the data in a delta queue for a DataSource, select the delta queue, and choose Delete Data from the context
menu (right mouse-click).
If you delete the data in a delta queue, you do not have to reinitialize the delta process before you are able to write the data records
of the DataSource to the delta queue.
Note:
Please note that data that has not yet been read from the delta queue is also deleted. This invalidates an existing delta update. Use
this function only if you are aware of the consequences this will have.

Deleting queues - the DataSource entry is deleted from RSA7 (used, for example, before a new initialization or when
the DataSource itself is to be deleted)
You delete the entire queue by choosing Queue -> Delete Queue. To write data records from the corresponding DataSource into a
delta queue, you need to reinitialize the delta process.

Why we need to delete setup tables

1. To avoid transferring invalid data into BW and to avoid duplicates in the BW system, we need to delete the setup
table data before reconstructing (filling) the setup tables.
2. If Basis is applying a support pack and data still exists in the setup tables of any application area, Basis cannot
apply the SPs, so we need to delete the setup table data first.
3. In the development system, when we change or enhance an extract structure, the system will not allow the
changes unless we delete the setup table data in all clients of the development environment.
4. When we import DataSource / extract structure changes into the target systems (QA, PRD), we need to delete the
setup tables in the target systems to avoid transport failures.

LO Cockpit Procedure

1. Activate the LO DataSource (RSA5).

2. Maintain the extract structure (LBWE).
If you maintain the extract structure while the update is active, the update is automatically deactivated when you
change the structure. Before you do this, note the following points:
o The changes should be carried out in a posting-free time. Otherwise an initialization is necessary, so that the
documents posted during the change do not get lost.
o Is there still data in the V3 update? You can see this in the update overview. If you are not sure, start the V3
update directly and then deactivate the update.
o Is there still data in central delta management? You can check this in the BW delta queue maintenance (RSA7).
Before a change, this data should be fetched by BW.
o To ensure that a change does not affect the V3 update, you should first deactivate the update of all extractors of
the relevant application in all clients.
If you have already reconstructed (set up) data, it will be worthless after the change; you need to delete it and
reconstruct again.
If the update log is active, the log data can no longer be read after a change. This is only possible again after an
update has overwritten the last log entry.
Important points for logistics when importing the transports into the target system:
Run the transport when no documents are being posted in the target system. Otherwise you will need to initialize
again, because documents posted during this time are lost.

None of the clients in the target system should contain data in the V3 update for application 11. If you are unsure,
start the V3 update of application 11 in all clients.
If there is still data in the central delta management of the target
system, it must be retrieved by BW before the transport takes place.
If you have already reconstructed within this target system, it may
be that the data still exists in the reconstruction tables. After
the transport, you can no longer transfer these entries into BW. You
must delete the contents of the reconstruction tables of the
application 11 in the target system and run another reconstruction
if need be.
o If there is an update log in the target system, you cannot read the log data of application 11 after the transport has
been run. You can only read it again once data has been posted and the last log entry has been overwritten.
Use the report RMCSBWCC to display a log for the changed extract
structure in the target system, to see if any of the above problems
exist. An additional switch deletes all update logs for the application
of the selected extractor.
3. DataSource maintenance.
4. Fill setup tables.

No marker update in Inventory.

Non-cumulative key figures in inventory (stock quantity, employee headcount).

Reference: How to inventory.pdf (embedded document).

Sequence of inventory loads

First we load the BX DataSource (initial/opening stock). This is a one-time load (always a full load, no deltas);
for this DataSource the setup tables are filled with transaction MCNB.
BF and UM loads - we can have initialization and deltas for these two loads; these two are part of the daily loads.

Transports
1. SE01, SE03, SE09, SE10.
In real time we release transports only from the development environment to QA; from the import queue, Basis or the
migration team then imports the transports into the production environment based on the sequence we propose in the
implementation documentation.
Precautions:
1. We need to take care of the transport sequence based on object dependencies:
   A query should not move before its InfoProviders.
   A workbook should not move before its query.
   An InfoCube should not move without its required InfoObjects.
2. Objects in all three systems should be in sync; otherwise we may end up with transport failures.

Return codes
RC = 0  Successful
RC = 4  Warning
RC = 8  Error (e.g. object missing, imported objects could not be activated)
RC = 16 Error (e.g. target system not defined in STMS, package not available in the target system)

Post-processing transport activities

When we work with process chains, aggregates and InfoSpokes, we need to activate these objects in the target
systems after a successful import.
The transport connection in RSA1 is also a checkpoint before we release transports into the target systems.
We can merge multiple transports into a single transport.
We can delete unwanted transports from the system (SE01, SE09, SE10).
We can unlock objects from a transport (SE03).

Q&A

Answers
Q1. Logistics extraction step by step approach in implementation phase.
LO Cockpit Step By Step
ECC - Go to Transaction LBWE (LO Customizing Cockpit)
1) Select the Logistics Application, e.g. SD Sales BW -> Extract Structures.
2) Select the desired extract structure and deactivate it first.
3) Give the transport request number and continue.
4) Click on 'Maintenance' to maintain the extract structure: select the fields of your choice and continue.
   Maintain the DataSource if needed.
5) Activate the extract structure.
6) Give the transport request number and continue.
7) Delete the setup tables: go to T-Code SBIW -> Settings for Application-Specific DataSources -> Logistics ->
   Managing Extract Structures -> Initialization -> Delete the Contents of the Setup Tables (T-Code LBWG),
   select the application (01 Sales & Distribution) and execute.
8) Fill the setup tables: SBIW -> Settings for Application-Specific DataSources -> Logistics -> Managing Extract
   Structures -> Initialization -> Filling in the Setup Tables -> Application-Specific Setup of Statistical Data ->
   SD Sales Orders - Perform Setup (T-Code OLI7BW).
   a. Specify a run name and a termination time and date (put a future date).
   b. Execute.
Check the data in the setup tables with the extractor checker RSA3.

Plan the collective run (if necessary, after you run the full load).
BW
1. Replicate the DataSource.
2. Install the required Business Content (if necessary).
3. Create Transformations.
4. Create DTPs and update rules.
5. Create an InfoPackage for the delta init. (Next, plan the collective run if required.)
6. Create an InfoPackage for delta.

Q2. Explain the Dataflow in logistics in daily delta. (V3 Un-serialized)

Q3. Significance of setup table.


Setup tables are cluster tables and are used to extract data from the ECC application tables.
You fill the setup tables in the R/3 system (via SBIW) and extract the data to BW; after that you can do delta
extractions by initializing the extractor.
Full loads are always taken from the setup tables.
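As a quick sanity check after a statistical setup run, you can count the records in the corresponding setup table. This is only a sketch: it assumes the usual naming pattern <extract structure>SETUP, e.g. MC11VA0ITMSETUP for SD sales items; verify the table name in your system (SE11) before relying on it.

* Sketch: count the records in the SD item setup table after the setup run.
* Table name MC11VA0ITMSETUP assumed from the <extract structure>SETUP pattern.
REPORT zcheck_setup_table.
DATA: lv_count TYPE i.
SELECT COUNT( * ) FROM mc11va0itmsetup INTO lv_count.
WRITE: / 'Records in MC11VA0ITMSETUP:', lv_count.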

Q4. When we need to delete the setup tables.


Setup tables are a kind of interface between the extractor and the application tables. The LO extractor takes data from
the setup tables during initialization and full uploads, so hitting the application tables for selection is avoided. As these
tables are required only for full and init loads, you can delete the data after loading in order to avoid duplicate data.

Q5. How to clear the data from Delta Queue (RSA 7)? Is it possible to delete directly?
To delete the data in a delta queue, select the delta queue and, from the context menu, choose Delete Data.

Q6. What are all the implementation steps we need to perform when support packs are being
applied to the production environment?
Q7. Is it possible to re - initialize logistics without outage? Then how?
Yes, Early Delta Initialization
With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the
application during the initialization request in the source system. This means that you are able to execute the
initialization of the delta process (the init request), without having to stop the updating of data in the source system.
You can only execute an early delta initialization if the DataSource extractor called in the source system with this data
request supports this.
Early init allows you to do Delta settings first and then pull the records. Once the delta settings are done your update
queue will then start collecting the changed and new records.

Q8. How to transfer entries from the extraction queue to the delta queue manually?

Use the R/3 collective run program RMBWV3<application number>, as shown in the sketch after this list:


02 Purchasing RMBWV302
03 Inventory Controlling RMBWV303
04 Shop Floor Control RMBWV304
08 Shipment RMBWV308
11 SD Sales BW RMBWV311
12 LE Shipping BW RMBWV312
13 SD Billing BW RMBWV313
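A hedged sketch of how such a collective run might be pushed manually from ABAP by scheduling the report as a background job (the usual way V3 runs execute). The job name is illustrative, error handling is omitted, and the sketch assumes the report can run without mandatory selection-screen parameters; in practice these runs are normally scheduled from LBWE job control or SM36 rather than from custom code.

* Sketch only: schedule the collective run for application 11 (SD Sales)
* as a background job so extraction-queue entries move to the delta queue.
REPORT zpush_v3_application_11.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_LIS_BW_V3_PUSH_11',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

SUBMIT rmbwv311 VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.   "start the job immediately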

Q9. What are all the precautions to be taken when a logistics implementation or
enhancement is going into the production environment?
1. Run the transports when no documents are being posted in the target system (outage). Otherwise you will need to
initialize again, because documents posted during this time are lost.
2. None of the clients in the target system should contain data in the V3 update for application <no>. If you are unsure,
start the V3 update of that application in all clients.
3. Delete the contents of the reconstruction (setup) tables of the application in the target system and run another
reconstruction if needed.
4. Use report RMCSBWCC to display a log for the changed extract structure in the target system.

Q10. What is marker update?


Marker Update is used to reduce the time of fetching the non-cumulative key figures while reporting. It helps to
easily get the values of previous stock quantities while reporting. The marker is a point in time which marks an
opening stock balance. Data up to the marker is compressed.
The No Marker Update concept arises if the target InfoCube contains a non-cumulative key figure. For example, take
the Material Movements InfoCube 0IC_C03, where stock quantity is a non-cumulative key figure. The process of
loading the data into the cube involves two steps:
1) In the first step, one should load the records pertaining to the opening stock balance/or the stock present at the time
of implementation. At this time we will set marker to update (uncheck 'no marker update') so that the value of current
stock quantity is stored in the marker. After that, when loading the historical movements (stock movements made
previous to the time of implementing the cube) we must check marker update so that the marker should not be
updated (because of these historical movements, only the stock balance / opening stock quantity has been updated;
i.e. we have already loaded the present stock and the aggregation of previous/historical data accumulates to the
present data).
2) After every successful delta load, we should not check marker update (we should allow to update marker) so that
the changes in the stock quantity should be updated in the marker value. The marker will be updated to those records
which are compressed. The marker will not be updated to those uncompressed requests. Hence for every delta load
request, the request should be compressed.
Check or uncheck the marker option:
Compress the request with the stock marker => uncheck the 'No Marker Update' option.
Compress the loads without the stock marker => check the 'No Marker Update' option.

Q11. DataSource involved in 0ic_c03 cube, Explain each DataSource.


This InfoCube is based on the following InfoSources:

2LIS_03_BX - This structure is used to extract the stock data from MM Inventory Management for initialization
to a BW system.

2LIS_03_BF - This structure is used to extract the material movement data from MM Inventory Management
(MM-IM) consistently to a BW system.

2LIS_03_UM - This structure is used to extract the revaluation data from MM Inventory Management (MM-IM)
consistently to a BW system.

Q12. Explain a single report starting from requirements to your approach to accomplish the
task.
Requirement: Report to analyze Avg. delivery time by vendor.
This report can answer questions like:
a. Who can deliver a material the fastest?
b. What procurement lead time must you reckon with for a certain material?
Approach: Info Provider 0PUR_C03 Purchasing Data.
Required Char and KF:
0VERSION (Filter).

INFO OBJECT    DESCRIPTION
0CALMONTH      Calendar Month (Free Characteristic)
0PLANT         Plant (Free Characteristic)
0VENDOR        Vendor (Free Characteristic)
0MATERIAL      Material (Row)
0AVGDTIME      Avg. Delivery Time (Column)
0AVGWDTIME     Avg. Weighted Delivery Time (Column)

With the information from the key figures 0AVGDTIME and 0AVGWDTIME we know the average delivery time for a
shipment to arrive from the vendor, so this gives an idea of how many days in advance an order for the required
material has to be placed.

Q13. Difference between filters and free characteristics.


Filters and free characteristics are two similar methods to restrict data in BW, and both restrict the values of
characteristics in a query. The main difference is that a filter restriction is fixed - you cannot navigate on the restricted
data in the query - whereas a free characteristic allows you to navigate or drill down on the restricted data.

Q14. Explain difference between selection and restricted key figures.


Q15. Difference between formula and calculated key figure
A calculated key figure is a global element (created at InfoProvider level); we can use it in all the queries that are
designed for that particular cube.
A new formula is a local element (created at query level).

Q16. What is the option to make global structure to local structure? When is this option
used?
Scenario: add structure elements that are unique to a specific query.
Changing a global structure changes the structure for all the queries that use it. That is the reason you go for a local
structure.
Navigation: in the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon. On the SAP
BEx Open dialog box, choose Queries, select the desired InfoCube and choose New. On the Define the Query screen,
expand the Structure node in the left frame and drag and drop the desired structure into either the Rows or Columns
frame. Select the global structure, right-click and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to a global structure in this way; you would have to delete the
local structure and then drag and drop the global structure into the query definition again.
When you try to save a global structure, a dialog box prompts you to confirm the changes to all queries.

Q17. Explain the significance of I_step 0, I_step 1, I_step 2 and I_step 3.


If you execute a query that contains variables with the customer exit replacement path (and these variables are filled
depending on input-ready variables), sometimes, the variable exit is not run or incorrect data is selected. To avoid this,
you can control the dependencies using the I_STEP parameter.

The enhancement RSR00001 (BI: Enhancements for Global Variables in Reporting; transaction SMOD; component or
function module EXIT_SAPLRRS0_001) is called several times during the execution of the report. The I_STEP
parameter specifies when the enhancement is called.
The following values are valid for I_STEP:
I_STEP = 1
Call is made directly before variable entry (the variable screen).
I_STEP = 2
Call is made directly after variable entry. This step is only executed if the same variable is not input-ready and
could not be filled for I_STEP = 1.
I_STEP = 3
In this call, you can check the values of the variables. When an exception (RAISE) is triggered, the variable
screen appears again, and I_STEP = 2 is then also called again.
I_STEP = 0
The enhancement is not called from the variable screen; the call comes from elsewhere, for example from the
authorization check.
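A minimal sketch of how such an exit can look, assuming the customer include ZXRSRU01 of EXIT_SAPLRRS0_001 is used and that ZVAR_PREVYEAR is a hypothetical, non-input-ready customer-exit variable for the previous calendar year; the real variable names and logic depend on the project.

* Include ZXRSRU01 (enhancement RSR00001) - sketch only.
* ZVAR_PREVYEAR is a hypothetical customer-exit variable.
DATA: ls_range TYPE rrrangesid,
      lv_year  TYPE n LENGTH 4.

CASE i_step.
  WHEN 1.                              "before the variable screen
    CASE i_vnam.
      WHEN 'ZVAR_PREVYEAR'.
        lv_year = sy-datum(4).
        lv_year = lv_year - 1.         "previous calendar year
        ls_range-sign = 'I'.
        ls_range-opt  = 'EQ'.
        ls_range-low  = lv_year.
        APPEND ls_range TO e_t_range.
    ENDCASE.
  WHEN 2.                              "after the variable screen
    "fill variables that depend on user entries (read I_T_VAR_RANGE here)
  WHEN 3.                              "validate the entered values
    "RAISE an exception here to force the variable screen to appear again
ENDCASE.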

Q18. Explain business scenario where Cmod is used?


Q19. How many structures can we create at most in a query definition? Is it possible to create
two structures in the columns?
You can use a maximum of only two structures in a query.

Yes, it is possible to have two structures in the same column.

Q20. When are text variables used and what are the places where they can be created?
Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural
components.
You can use text variables when you create calculated key figures, restricted key figures, selections and formulas in
the description of these objects

Q21. Explain the business scenario for formula variable with replacement path and Cmod.
Business scenario: To find out the document count for analyzing Number of Orders in a given period. This can be
done using Formula variable with processing type as Replacement path.
1. Create a Calculated KF to determine No of Documents/orders.
2. Create a formula Variable for getting Document Count. ( processing type -> Replacement path).
3. In select Char Field choose Document Number.
4. In the Replace Variable with drop down box, choose Attribute Value.
5. In the Attribute drop down, select Characteristic Reference (Constant 1)

6. Query Properties.

Q22. Explain the business scenario for formula collision and Cell editor.
Q23. When should we use offset values?
To analyze key figures that have a fixed time relationship with one another, you can use a variable
offset. For example, compare current sales figures with the figures for the same time period in the
previous year.
Q24. Ways to improve the performance of set up table filling.
Q25. What is the difference between repair full request and full request?
50) E_T_DATA, I_T_DATA, C_T_DATA.
50) I_T_DATA is used for a master DataSource with attributes, E_T_DATA for a master DataSource with texts,
and C_T_DATA for a transactional DataSource.
49) What are the options 'Get Data by Request' and 'Only Get Delta Once'?

49) 1. Only Get Delta Once - useful for a snapshot scenario: it gets all the requests in the PSA without
having a delta pointer (data mart status) set.

Only Get Delta Once:


Source requests of a DTP for which this indicator is set are only transferred once, even if the DTP request is deleted in
the target.
Use
If this indicator is set for a delta DTP, a snapshot scenario is built.
A scenario of this type may be required if you always want an Info Provider to contain the most up-to-date dataset for
a query but the Data Source on which it is based cannot deliver a delta (new, changed or deleted data records) for
technical reasons. For this type of Data Source, the current dataset for the required selection can only be transferred
using a 'full update'.
In this case, a Data Store object cannot usually be used to determine the missing delta information (overwrite and
creation of delta). If this is not logically possible because, for example, data is deleted in the source without delivering
reverse records, you can set this indicator and perform a snapshot scenario. Only the most up-to-date request for the
Data Source is retained in the Info Provider. Earlier requests for the Data Source are deleted from the (target) Info
Provider before a new one is requested (this is done by a process in a process chain, for example). They are not
transferred again during the DTP delta process. When the system determines the delta when a new DTP is generated,
these earlier (source) requests are seen as 'already fetched'.
Setting this indicator ensures that the content of the Info Provider is an exact representation of the source data.
Dependencies
Requests that need to be fetched appear with this indicator in the where-used list of the PSA request, even if they
have been deleted. Instead of a traffic light you see a delete indicator.
2. Get Data by Request - gets data from the requests that were loaded to the PSA/DSO one at a time, in sequence
(more like first in - first out). 'Get Data by Request' applies only to delta loads, not to full loads.

Get Data by Request

This indicator belongs to a DTP that gets data from a DataSource or InfoPackage in delta mode.
Use
If you set this indicator, a DTP request only gets data from an individual request in the source.
This is necessary, for example, if the dataset in the source is too large to be transferred to the target in a single
request.
Dependencies
If you set this indicator, note that there is a risk of a backlog: If the source receives new requests faster than the DTP
can fetch them, the amount of source requests to be transferred will increase steadily.
48) How does the DTP delta mechanism work?
48) Delta from the PSA to the data target works request by request; there is no mechanism beyond picking up the
requests that have not yet been transferred (the last loaded requests).
If there is no new request from the source into the PSA and you run the delta from the PSA to the data target a
second time, you will get 0 records in that second attempt.
47) Deleting ODS data request-wise. How to delete a data mart request from an ODS.
47) Deleting ODS data request-wise: go to table RSICCONT, enter your DSO name and click Display; you can see the
requests loaded into the DSO. Select the request you want to delete, go to the table contents, click Delete Entries,
and refresh the target - the request will no longer be there.
Deleting a data mart request from an ODS: go to RSMO and select the request loaded there (via your InfoPackage),
change the technical QM status to red, then go to the HEADER tab and click on the data targets tab. This takes you
to the manage screen for all the targets. Select each one in turn and delete the red requests. This way you can delete
all your bad requests from your targets via the same screen.
46) How to delete data in case the data has already been compressed.
Use selective deletion: go to Manage for the InfoCube, go to the Contents tab and click on Delete Selections.
You will get a popup in which you need to click on Delete Selections again; now give the selections based on which
the data will be deleted. Click on Start and this will carry out the selective deletion.
You can also use T-Code DELETE_FACTS and delete from there.
If you want to schedule this deletion in a process chain, generate the program using DELETE_FACTS in DEV, copy it
to a Z program in DEV and add that to a process chain. Then migrate the program and the process chain and
schedule it however you want.
45) Compression - use of compression.

45) Each data load into an InfoCube is uniquely identified by a request ID. The concept of deleting these load
identifiers in an InfoCube is called compression.
Compression improves performance, as it removes redundant data.
- Compression reduces memory consumption due to the following:
1. It deletes the request IDs associated with the data.
2. It reduces redundancy by grouping by the dimensions and aggregating the cumulative key figures.
SAP compression reduces the number of rows in the F fact table (sometimes to zero), because when requests
are compressed the data moves from the F fact table to the E fact table.
- A smaller F fact table results in
1. Accelerated loading into the F fact table.
2. Faster updating of the F fact table indexes.
3. Accelerates aggregate rollups, since the F fact table is the source of data for rollup.
4. Shortens RUNSTATS processing time on the F fact table.
5. Reduces index REBUILD processing time if indexes are dropped as part of load.
44) Repartitioning and remodeling.
44) Repartitioning is a tool provided by SAP that gives the business another chance to rethink its performance needs
without disturbing the data that already exists in the InfoCube. It is a feature that lets us redefine partitions without
deleting the data in the InfoCube.

Process Overview
1. Creates shadow tables for the E and F fact tables, starting with /BIC/4*
2. Copies data from the E and F fact tables to the shadow tables.
3. Creates partitions on the E and F fact tables.
4. Copies data back to the E and F fact tables.
5. Recreates indexes.

Pre-requisites
1. Make sure you have enough tablespace in the PSAP*FACT.

2. If you are using 0FISCPER for repartitioning, make sure that the Fiscal Year Variant is constant in the
InfoProvider settings. If not, use the program RSDU_SET_FV_TO_FIX_VALUE.
3. Take pre-snapshots of data for later validation after repartitioning.
Repartitioning Activity
1. Right click on the cube and select Additional Functions->Repartitioning.
2. There are three options to add, merge or complete partitioning. Select the desired option and click on
Initialize. Continue through the popups.
3. Check the monitor.
Post Repartitioning.
1. Check and rebuild the indexes and/or aggregates if needed.
2. Validate the data against the pre-snapshots.
3. Once the data is validated the shadow tables can be cleaned up.

Remodeling is a concept introduced in BI 7.0 (2004s) to manage changes to the structure of an InfoProvider
effectively while data is already loaded and in use.
Characteristics can be remodeled in the following ways:
- Inserting, or replacing characteristics with:
o Constants
o An attribute of an InfoObject within the same dimension
o The value of another InfoObject within the same dimension
o A customer exit (for user-specific coding)
- Deleting
Key figures can be remodeled in the following ways:
- Inserting:
o Constants
o A customer exit (for user-specific coding)
- Replacing key figures with:
o A customer exit (for user-specific coding)
- Deleting
43) Index types:

Physical index
Logical index
BIA index
Compound index
Memory index
Temporary index

42) Line item dimension - scenario

42) 1. When the number of master data records is very high - for example, a customer master running into millions.
2. When you add this to a dimension, the DIM table size increases dramatically and cube performance is affected,
since a DIM ID is created for every possible combination of characteristics. If you put the customer master (1 million
records) and the material master (50,000 records) into one dimension, you can end up with 1 million * 50,000 records
in the DIM table.
3. To avoid this you would separate the customer master into its own dimension, but even then, if the size is high, you
can declare it as a line item dimension, which makes the SID part of the fact table and hence improves cube
performance. However, you can have only one characteristic in a line item dimension.
41) Business scenario for an end routine; its parameters.
40) Special InfoObjects in an InfoSet and a MultiProvider.
39) How to extract the data selectively from source system. Give info package options to do this.

In the Data Selection tab of the InfoPackage we have the option to load data selectively.
You select a variable type for a field and confirm the selection; if you then choose Detail for Type, an additional
dialog box appears in which you can freely restrict the values of the field.
InfoPackage:

In the InfoPackage, select the type; this results in the variables described below.

In this scenario, let us load the data (transactions) for the months 05.2009 to 06.2009 using the option of Free
Temporal Selection (type 5). Type 5 gives you a free selection of all fields.
For the desired field 0CALMONTH we have confirmed variable type 5.
On confirmation, a screen pops up for entering the From value of 0CALMONTH, the To value of 0CALMONTH, the
Next Period from Value and the Period Indicator. After providing these values you are asked to enter the value for
Number of Periods until Repetition.

In this scenario, the From value is 05.2009 and the To value is 06.2009.

With the above selection, the data requested covers the time span 05.2009 (200905) to 06.2009 (200906), including
both limit values.
The Next Period from Value indicates the period for the second data request, which spans 07.2009 (200907) to
08.2009 (200908).
For the Period Indicator Type we have Year-Like Period (ongoing) and Year/Period-Like.
We select the Period Indicator 1 (Year/Period-Like), which is sensible here; with period type 0 the request is carried
out with ongoing values.
After the 8th run the system starts counting from 01.2010 (201001), because the Period Indicator is 1
(Year/Period-Like).
If Period Indicator Type 0 (Year-Like Period (ongoing)) were selected for the above scenario, then after the 8th run,
instead of starting the load from 01.2010 (201001), the scheduler would start over again with the time span
05.2009 (200905) to 06.2009 (200906), which is not sensible.

On scheduling the Info Package, in the monitor screen you can observe the Selection field with Temporal
Free Selection Variable Value (200905-200906) in the header tab.

Below is the required data in Target ZSELEC, which are the list of products in the months 05.2009 to
06.2009.

38) How to find out whether query is using aggregates or cube?

38) Go to RSRT. Give the name of the query you have created on the cube and click "Execute + Debug". Check the
box for "Select Aggregate" and press Enter; this automatically selects "Display Aggregate Found". Once the execution
has started you can see whether the query uses aggregates or not, and also which aggregate is being used.
37) How to debug CMOD code in a DataSource enhancement
37) There are several ways to debug the code when you enhance a DataSource. In RSA3 you can do it by choosing
the Debug Mode option.
Another way:
If you want to enhance transaction data, you use the EXIT_SAPLRSAP_001 component of the RSAP0001
enhancement (it also has components for master data: texts, attributes, hierarchies). This is a function module
enhancement. If you go to the source code of the function module, you will find an include program, ZXRSAU01;
double-click on this include to get to the body of the program. This is the place where you write your code, for
example:
case i_datasource.
  when '2LIS_XX_XXXXX'.
Here you can set a breakpoint. This can be a soft or hard-coded breakpoint if you need to debug; it is better to
hard-code it by writing BREAK-POINT or BREAK username.
Now when you execute the DataSource through RSA3, the program stops here. The data extracted from the
DataSource is available in the internal table C_T_DATA, and this is what you enhance in your code. You can step
through with F5 or F6, just as you would when debugging any other ABAP program.
36) How to debug start and end routines?/ How to debug transformations

36) Step 1: Put a hard-coded breakpoint (BREAK-POINT) in the start routine / end routine you want to debug.
Step 2: Open the DTP of the desired transformation, go to the Execute tab and choose the processing mode
'Serially in the Dialog Process (for Debugging)'.
Step 3: Click on the Change Breakpoints button in front of the transformation. Select the 'Before Transformation'
checkbox to debug the start routine; select the 'Before End Routine' checkbox to debug the end routine.
Step 4: Click on the Simulate button to start debugging. Debugging does not actually load data into the data target;
instead it simulates the code behaviour of the data load.
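Inside the start routine editor (between the generated begin/end markers of the routine) the relevant lines might look like the sketch below. The surrounding method and types are generated by BW, and the field name PLANT in the filter is only an assumption for illustration.

* Hard-coded breakpoint: the DTP stops here when it is executed
* 'Serially in the Dialog Process (for Debugging)' with the
* 'Before Transformation' breakpoint flag set.
BREAK-POINT.                       "or: BREAK <your user name>.

* Illustrative filter logic only - the field name PLANT is assumed.
DELETE source_package WHERE plant IS INITIAL.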
35) How to debug variable CMOD code?

34) How to implement Sap note?


34) Go to T-Code SNOTE, download the SAP Note using the Download SAP Note function and then implement it.
First read the note description for any prerequisites; carry those out and only then implement the note.
33) What are all proposals we need to consider before creating Aggregates?
33) Before creating an aggregate on an InfoCube we need to consider a few things. Check for the query that is the
highest contributor to the total runtime using T-Code ST03 (workload analysis).
Check the ratio of database time to total runtime: as a rule of thumb, if the DB share is > 30% then the query is
suitable for an aggregate, and the aggregation ratio should be > 10 as well.
This data can be seen only if you maintain the technical content; the statistics are not mandatory, but if you want
information about the performance and utilization of your cubes and reports, you should use them.
32) What is exceptional aggregation? What are all places to define this? Business example.
32) Exception aggregation is mostly used to get a detailed value, to drill down to a single-instance value, or to get a
value grouped by some characteristic.
EXAMPLE FOR EXCEPTIONAL AGGREGATION
The scenario below shows you how to count the results of a calculated key figure in the BEx Query Designer.
You have loaded the following data into your InfoCube:
Region    Customer    Sales Volume (USD)
NY        A              400,000
NY        B              200,000
NY        C               50,000
CA        A              800,000
CA        C              300,000

You want to use the query to determine the number of customers for which the sales volume is less than 1,000,000
USD. To do so, you create the calculated key figure 'Customer sales volume <= 1,000,000' (F1) with the following
properties:
General tab page: Formula definition: Sales Volume <= 1,000,000
Aggregation tab page: Exception Aggregation: Total; Reference Characteristic: Customer
This query would deliver the following result:
Region           Customer    Sales Volume (USD)    F1
NY               A              400,000
NY               B              200,000
NY               C               50,000
NY Result                       650,000
CA               A              800,000
CA               C              300,000
CA Result                     1,100,000
Overall result                1,750,000             2

The overall result of the calculated key figure F1 is calculated as follows: sales volume of customer A (400,000 +
800,000) does not fulfil the condition (sales volume <= 1,000,000) -> 0; sales volume of customer B (200,000) fulfils
the condition -> 1; sales volume of customer C (50,000 + 300,000) fulfils the condition -> 1. When totaled, this gives
2 as the overall result for F1.
A query with a drilldown by region would give the following result:
Region           Sales Volume (USD)    F1
NY                  650,000
CA                1,100,000
Overall result    1,750,000

Due to the assignment of the reference characteristic Customer to the calculated key figure F1 for the exception
aggregation, the query also delivers the required data without a drilldown by reference characteristic.
31) What are Safety upper and lower intervals? How they work?
31) Safety interval should be set so that no document is missed even if it was not stored in the DB table when the
extraction took place.

Safety Interval Upper Limit of Delta Selection


This field is used by DataSources that determine their delta generically
using a repetitively-increasing field in the extract structure.
The field contains the discrepancy between the current maximum when the
delta or delta init extraction took place and the data that has actually
been read.
Leaving the value blank increases the risk that the system could not
extract records arising during extraction.
Safety Interval Lower Limit
This field contains the value taken from the highest value of the
previous delta extraction to determine the lowest value of the time
stamp for the next delta extraction.
30) What are the delta-specific fields in generic extraction? How do these fields work?
30) Time stamp
Numeric pointer
Calendar day

29) What is the difference between a repair full request and a full request?
29) Repair full request: if you have done a full repair you don't have to re-init - that is one advantage of a full repair,
as it won't affect your delta loads. But if it was a normal full load, you may need to re-init.
A full repair can be seen as a full load with selections, but its main advantage is that it does not affect the delta loads
in the system. If you load a normal full request into a target with deltas running, you will have to initialize again for the
deltas to continue; a full repair does not affect the deltas. This is normally done when we lose some data or there is a
data mismatch between the source system and BW.

Repair request: editing a request in the PSA and loading that into the subsequent target, or doing a reconstruction.
28) When should we use offset values
28) To analyze key figures that have a fixed time relationship with one another we use a variable offset. For example,
compare the current sales figures with the figures for the same period in the previous year.
27) Explain the business scenario for formula variable with replacement path and Cmod

27) A Step-by-Step guide


Have you ever wanted to perform calculations using dates defined as characteristics, but never worked out how it can be
done? Replacement path variables are the key.
Using a replacement path in your text and formula variables, you can replace values held in a characteristic: the
characteristic value variables are replaced by the results of the query at run-time.
The steps detailed below show a technique to enable a BEx query user to determine the number of days between two dates.

Scenario:
The group HR administrator wants a detailed line item report that lists all employee absences in a given period. The report is to
show the employee number, the absence start date, together with the end date of the absence and show the number of calendar
days the employee was absent.
The good thing about using this technique is that no redesign work is needed from your technical BW team, no ABAP is
involved and, best of all, it's quick and easy.

Solution:
For this example I created an ODS object called Absence that holds basic employee information along with individual absence
record data.
Follow the steps below:
1. Open the BEx Query Designer and create a new report on your chosen InfoProvider.
2. Drag the Employee, Valid From and Valid To characteristics into the Rows section of the screen. If needed, apply
   data selection restrictions to the characteristics as shown in Figure 1.
3. Right-click on the Key Figures structure and select New Formula (Figure 1).

Figure 1

4. In the new formula window, right-click on Formula Variable and choose New Variable (Figure 2).

Figure 2

5. The Variables Wizard will launch and will require you to specify the variable details. (Click the NEXT button if the
   Introduction screen appears.)
6. Enter the variable details on the General Information screen as shown in Figure 3: enter the Variable Name and
   Description and select Replacement Path in the Processing By field. Click the Next button.

Figure 3

7. In the Characteristic screen (Figure 4) select the date characteristic that represents the first date to use in the
   calculation (From Date). Click the Next button.

Figure 4

8. In the Replacement Path screen (Figure 5) select Key in the Replace Variable With field. Leave all the other options
   as they are (the offset values will be set automatically). Click the Next button.

Figure 5

9. In the Currencies and Units screen (Figure 6) select Date as the Dimension ID. Click the Next button.

Figure 6

10. The Save Variable screen (Figure 7) displays a summary of the new variable.
Click on the Finish button to save the variable.

Figure 7

11. Repeat steps 4 to 10 to create a second variable for the second date to be used in the calculation. In the example
    shown, the characteristic 0DATETO is used to create the variable ABSEND (Absence End Date).

Define the Calculation


We can now use our two new replacement variables to define our new calculated key figure that generates the number of absence
days for each record.
1. You will now be back at the New Formula screen (Figure 8). Drag and drop the two new variables into the formula
   section of the screen and insert the subtraction sign (-) between the two.

2. Give the new formula a description and click the formula syntax check button to ensure the formula is valid.

Figure 8

3. The new calculated key figure will now show in the Columns section of the BEx Query Designer (Figure 9).

Figure 9

4. Save the query and execute it.

In the example shown the Number of Calendar Days Absent is calculated correctly. See the table of results below.
Employee      Valid From     Valid To       Number of Calendar Days Absent
50000001      17/04/2004     21/04/2004
50000002      16/07/2004     29/09/2004     13
50000003      07/01/2004     09/02/2004     33
50000004      04/08/2004     05/08/2004

Das könnte Ihnen auch gefallen