BW/BI Real-time Class (A 30 Days Session)
November 15, 2012
Business / analytical documents, real-time and interview questions
BW/BI consultant roles and responsibilities in support projects:
I. Monitoring
II. Ticketing
III. Transports
Monitoring:
Live monitoring [6-8 days]
The client production server should be monitored every day for 2-3 hours:
1. Job overview - SM37
2. Process overview - SM50
3. Load monitor - RSMO
4. Short dumps - ST22
5. System logs - SM21
(also: stuck tRFCs - SM58; process chains - RSPC)
6. Error in source system (RSDBTIME) - time-zone mismatch between kernel and application servers
7. Time stamp or PSA errors
8. Table space problems
9. How to resume a process chain when Repeat is not available
Ticketing:
Levels of support: Level 0, Level 1, Level 2, Level 3
Ticketing procedure
Status of tickets
Transports:
Packages, transports, project landscape, transport connections
Process flow for transporting objects
Dependencies in transports
1) RD activities
2) Design
3) Construction
4) Testing
5) UAT
6) Go-live
7) Support
Documentation:
Requirement documents
High-level estimation documents
Technical and functional design documents
Test cases and results
Implementation plan
Objects:
Aggregates in the project
Indexes
InfoCube modeling best practices
DataSource enhancements
Customer exits for variables
Flat-file extraction and automation
Miscellaneous topics:
Job status, killing a job, stopping jobs
Main chain
Start process type: defines when and how the process chain should start.
Transfer settings from source system: used to update the technical T* tables from the ECC system into the BW system.
Master data main chain: extracts all master data from the ECC system (SD MD, FI MD, CO MD, HR MD, ROWVC MD).
Transactional data chains: CO chain (transfers all transactional data: SD TD, HR TD, FIS TD, PUR TD, treasury TD, PCA TD) plus the FI chain (AR, AP and GL transactional data).
Export financial data and BCS data to the mainframe system: using InfoSpokes and hierarchy download programs, this chain transfers data from BW cubes and InfoObjects to downstream applications.
Start process type: a mandatory process type in a process chain; we cannot include multiple start processes in a single chain.
Immediate: used to trigger on-demand loads.
Date/time: used when the process chain should run periodically at a given date and time; most process chains are triggered this way.
After job: used when the chain should trigger once an extraction program's job completes (once the required data has been extracted by the program); enter the job name on the After Job tab.
Flow in ECC: an extraction program (job) fills the tables (3 tables in this example) read by the DataSource, then the process chain starts (start variant: After Job).
After event: events are created in SM62 and can be triggered manually with SM64. Used when the chain should start on an event (a running transaction, or a flat file arriving on the application server, can raise an event).
Factory calendar: a special calendar (SCAL), e.g. for 1st-Monday and 3rd-Monday loads.
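The 1st-Monday/3rd-Monday pattern from the special calendar is ordinary date arithmetic; as an illustration (plain Python, not SAP/SCAL code), the n-th Monday of a month can be computed like this:

```python
import calendar
from datetime import date

def nth_monday(year: int, month: int, n: int) -> date:
    """Return the n-th Monday of a month (n=1 -> first Monday)."""
    # monthcalendar() yields week lists; a 0 means the day belongs
    # to an adjacent month.
    mondays = [week[calendar.MONDAY]
               for week in calendar.monthcalendar(year, month)
               if week[calendar.MONDAY] != 0]
    return date(year, month, mondays[n - 1])
```

For November 2012 (the month of these notes) this yields the 5th and the 19th as the first and third Mondays.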
Job status
Planned: job without any start condition.
Released: job with a future start condition, waiting for something (a future date/time, After Job, After Event).
Ready: job stays in this status only for milliseconds (Released -> Ready -> Active).
Active (yellow): job is currently running.
Finished (green): job completed successfully.
Cancelled (red): job failed for some reason.
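The status flow above can be modelled as a tiny state machine; this is an illustrative Python sketch of the transitions described in the notes, not an SAP API:

```python
# Illustrative model of the SM37 job statuses from the notes.
# The transition table is an assumption drawn from the descriptions above.
VALID_TRANSITIONS = {
    "Planned":   {"Released"},
    "Released":  {"Ready", "Planned"},   # a released job can be descheduled
    "Ready":     {"Active"},
    "Active":    {"Finished", "Cancelled"},
    "Finished":  set(),
    "Cancelled": set(),
}

def can_transition(current: str, new: str) -> bool:
    """True if a job may move from `current` status to `new`."""
    return new in VALID_TRANSITIONS.get(current, set())
```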
Q: How do we convert released jobs back to scheduled (planned) status?
A: Select the process chain, go to the start process, right-click and display all jobs, select the job in Released status, then choose Job menu -> Released -> Scheduled.
Q: How do we stop a periodically running chain?
A: Select the process chain, then menu Execution -> Remove from Schedule.
Definition
In this field you specify variables for periodically scheduled load processes.
If you want to load data from the source system into BI periodically, and therefore periodically need to change the entries in the selection fields (for example, the date field), you can select one of the following options for each selectable field:
You can define an ABAP routine which the system processes at runtime. This routine has access to all selection fields and is the last to be processed at runtime.
OLAP variable (type 7): you can use variables here.
You can also enter the following variable types directly in the field. For compatibility with future releases, however, it is recommended that you no longer use these variable types and instead perform such selections with suitable routines or (OLAP) variables.
DSO
The master process chain consists of text, attribute and hierarchy loads into InfoObjects:
Text load: InfoPackage (3.x), or InfoPackage plus DTP (BI 7); process types Execute InfoPackage and Data Transfer Process (Start is mandatory).
Attribute load: InfoPackage, DTP, ACR (attribute change run). To get the latest and greatest attribute information after attributes have been loaded into the BW system, we need to run the ACR.
Hierarchy load: InfoPackage, Save Hierarchy (to get the latest and greatest information into the hierarchy table), ACR (up to BI 7); InfoPackage, DTP, Save Hierarchy, ACR in BW 7.3.
The Display Messages context-menu option leads to the job monitoring transaction (SM37) to display job status and job logs. The same option can lead to the load monitor (RSMO), where the Details tab gives more information about data load failures.
Transactions used in live monitoring
Job overview in the source system: SM37. For more information about a job, check the job log.
Process overview in the source system: SM50, to see whether work processes are busy or idle. If all work processes are busy, jobs will sit in Delayed status.
Process overview in the BW system: SM50, likewise.
Work process types:
DIA - dialog
BGD - background
UPD - update
SPO - spool
UP2 - update 2
The ALEREMOTE user ID is responsible for establishing the connection and extracting the data from the source (ECC) into the BW system.
System log (Oracle errors): SM21, on the source system side and on the BW side.
Exporting data from the BW system to downstream systems uses InfoSpokes and hierarchy download programs. InfoSpokes (in process chains) exist in 3.x and BI 7; the open hub destination (with DTP) exists only in BI 7.x. We use InfoObjects (texts, attributes), DSOs and InfoCubes as data sources for InfoSpokes and OHDs; hierarchies from an InfoObject can be downloaded using the SAP-delivered program SAP_HIERARCHY_DOWNLOAD.
If we want to keep InfoSpokes in process chains, the generated flat files should be on the application server; only then can the InfoPackage be used as a process type in the chain.
Statistics
Collection of main process chain timings, used to calculate SLAs (target: >90% of runs within the agreed window).
Another SLA: we have an on-demand chain triggered by business users running the ZBCS transaction. The SLA is that this chain completes within 15 minutes and fetches fewer than 100,000 records (target: >95%).
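Measuring an SLA like ">90% of runs within the window" is just a percentage over the collected timings; an illustrative sketch (Python, with made-up numbers):

```python
def sla_compliance(durations_min, limit_min=15.0):
    """Percentage of chain runs that finished within the SLA limit (minutes)."""
    within = sum(1 for d in durations_min if d <= limit_min)
    return 100.0 * within / len(durations_min)
```

For example, with run durations of 10, 12, 20 and 14 minutes against a 15-minute limit, compliance is 75%, below the >95% target.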
Decision process type for multiple alternatives: a chain should trigger the planning branch only on the 1st day of the year; on all other days it should trigger the actuals branch.
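The decision process type's rule can be sketched as a simple date test (illustrative Python, not the RSPC decision editor):

```python
from datetime import date

def branch_for(run_date: date) -> str:
    """First day of the year -> planning branch, otherwise actuals."""
    if run_date.month == 1 and run_date.day == 1:
        return "planning"
    return "actual"
```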
Errors
1. No SID found: when we load transactional data without first loading master data for a particular InfoObject, the transactional load may end in a "No SID found for InfoObject" error.
Break fix: find the InfoObject causing the error, load master data into it, then repeat the transactional data load.
Permanent solution (best practice): load all system-wide master data first, then load transactional data.
2. Lowercase letters: a DSO/ODS will accept lowercase letters, but activation will throw an error. An InfoCube will not accept lowercase letters at all; it raises an error while data is being loaded into it.
Break fix: lowercase letters can be converted to uppercase in the PSA (3.x) or in the error stack (BI 7).
Permanent fix: write a routine in the transfer rules (3.x) or transformation (BI 7) to convert lowercase into uppercase letters: RESULT = translate to upper case. Alternatively we can convert using the formula builder.
Special characters
A to Z and 0 to 9 are the characters allowed in an InfoCube; !@#%^&* are not allowed.
!SRIRAM: the load will fail (invalid character in the first position).
SRIRAM!: the load will not fail.
SRI!RAM: the load will not fail.
The same applies to #, except that a load also fails when # is the only character in the value; SRIRAM# will not fail.
Break fix: the invalid character can be converted into a valid character in the PSA (3.x) or in the error stack (BI 7).
Permanent fix: write a routine to convert invalid characters into valid ones, for example:
* Cleansing routine: blank out characters that are not valid in BW.
* c_valid_cleanse_chars holds the allowed set (capitals, digits, space).
  IF ch_string_to_cleanse IS INITIAL.
    CLEAR ch_string_to_cleanse.
  ELSEIF ch_string_to_cleanse+0(1) = '!'.
*   found invalid character ! in position 1 - correct position 1
    ch_string_to_cleanse+0(1) = ' '.
  ENDIF.
  l_offset = 0.
  l_strlen = strlen( ch_string_to_cleanse ).
* process the string until the end
  DO l_strlen TIMES.
*   determine the length of the portion we haven't verified yet
    l_length = l_strlen - l_offset.
*   check the portion we haven't verified yet
    IF ch_string_to_cleanse+l_offset(l_length) CN c_valid_cleanse_chars.
*     found bad data: correct it and set up to read the rest of the string
      l_offset = l_offset + sy-fdpos.
      ch_string_to_cleanse+l_offset(1) = ' '.
    ELSE.
*     no more bad data found
      EXIT.
    ENDIF.
  ENDDO.
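For comparison, the same blank-out-invalid-characters idea in Python (illustrative only; the allowed set of capitals, digits and space is assumed from the notes above):

```python
# Allowed characters, per the notes: capitals, digits, and space.
VALID_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ")

def cleanse(value: str) -> str:
    """Replace every character outside the allowed set with a space,
    mirroring the scan-and-replace loop of the ABAP routine above."""
    return "".join(ch if ch in VALID_CHARS else " " for ch in value)
```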
Locking errors
If one user has an InfoPackage open that is part of a process chain and another user runs the same chain, the load will fail because of a locking issue.
Solution: go to SM12 and delete the lock entries, then repeat the process chain. Alternatively, ask the first user to leave the InfoPackage, after which the second user can repeat the chain from the InfoPackage.
PSA deletion: if we do not want to keep the data of older requests in the PSA, deleting them frees DB space within the system, smooths system performance (including loading) and helps avoid table space errors during loads.
Manual approach:
BI 7: DataSource -> Manage -> select requests and delete.
3.x: RSA1 -> PSA -> select the InfoSource -> expand the PSA -> right-click the request -> Delete.
To automate PSA deletion, include the Delete PSA Request process type in a process chain.
If a load into the PSA (via InfoPackage in a process chain) and a Delete PSA Request step run at the same time, one of the jobs will be terminated (usually the load job) because of a lock on the PSA table. We then need to wait in SM37 until the surviving job completes, and repeat the chain.
The NTP service was restarted on ZAPERP02, ZAPBIW01 and ZAPBIW02, the time on these servers
is now within one second of the time on ZAPERP01.
7. Time stamp and PSA errors
Whenever we change a DataSource and do not replicate it and reactivate the transfer rules afterwards, loads may end in time stamp or PSA errors. Display Messages shows time-zone differences between the ERP and BW systems.
Whenever there is a support-pack upgrade in the source system, DataSources may change; if we then run loads from BW without replication and activation of transfer rules, the loads may end in PSA or time stamp errors.
Solution: whenever we get time stamp errors, replicate the DataSources, activate them, and activate the transfer rules (program RS_TRANSTRU_ACTIVATE_ALL) or transformations.
8. Table space problems: check ST22 and SM58 on the source side; with table space problems, tRFCs will end in errors.
We need more DB space: contact Basis to increase the table space; they will add space if it is really required. We can also regain DB space by deleting data from the PSA and change-log tables, compressing InfoCubes, and deleting unwanted aggregates.
Alerts
System -> process chain errors -> messages -> pagers, e-mail IDs or phones.
To forward messages to a user group's e-mail, Basis needs to create a periodic program via the SCOT transaction and take care of the exchange-server configuration.
We need to create a distribution list for the recipients. A distribution list is a collection of employee e-mail IDs and pager numbers; we can use it in the recipient list of the process chain's message maintenance, and it is reusable. Distribution lists can be created and edited from the Business Workplace.
Watchdog program: if there are any delays, this program sends notifications through e-mails and pagers.
Monitor program: for each process type there is an entry in the table RSPCPROCESSLOG; if any process within the chain fails, a message is circulated using the distribution list.
Overview:
The following procedure can be used if the process chain does not offer a repeat on a failed process or if you do not
want or need to do a repeat, but you want to continue the process chain.
Steps:
1) Go to SE16 and bring up table RSPCPROCESSLOG. Get the log ID from the log run of the chain (left side of the screen in the chain log), paste it into the Log ID field on the table selection screen, and execute.
2) Find the failed line in RSPCPROCESSLOG, that is the line that matches the failed process in the chain and has a
status of R, X or blank.
3) Open another session. Run RSPC_PROCESS_FINISH, using z-tcode ZRSPC_PROCESS_FINISH. Put in logid,
type, variant, and instance from the RSPCPROCESSLOG table from the record of the one that failed. Put a G in the
state field. Press execute and you will get no messages if it was successful. If it was not successful, recheck the
values you entered in the program selection options.
This should put the process in the chain to green and allow it to continue.
4) Refresh the chain to see that it is proceeding as expected.
Tickets
If the client faces an issue in the production environment, they call the help desk and explain the problem to the help desk executive. The executive generates a ticket (incident) with the customer's complaint in the ticketing tool and assigns it to the respective team. Service center or help desk support is level-one support.
The ticket then sits in the respective queue (BW, FI); this is the Open status. The BW support consultant who is on call needs to pick up the ticket, contact the user who raised it, and record the first customer contact time; the ticket then turns to WIP (work in progress). In most projects this step must be completed within 30 minutes (there will also be an SLA); the ticket response time for all ticket types is 30 minutes.
WIP status: the SLA clock starts ticking from this point onwards. We need to work actively on the ticket, contact the user for more information, and come up with the right resolution, which may be a break fix or a new CR.
If we can break-fix the error directly in the production environment, we ask the user to check the results; upon user acceptance we close the ticket, providing a root-cause analysis. Once we close the ticket, its status becomes Closed.
Status of the tickets
OPEN
WIP
CLOSED
SUSPENDED: whenever we are waiting for an answer from the user, we can keep the ticket suspended for the time being; the SLA clock is stopped.
PENDING TO CLOSE: if the solution is not a break fix and the fix must come from development, then until the development change reaches the production system via a transport request, we keep the ticket in Pending to Close.
Ticketing tool: Peregrine (used for DW ticketing).
SLAs by severity:
Sev 1: must be resolved within 2 hours. If any batch job fails in the production environment, we receive a Sev 1 ticket.
Sev 2: must be resolved within 4 hours. If file generation for other systems does not happen on time, those systems' teams raise a Sev 2 ticket; likewise if a user group complains that a report is not executable or that the latest information is not appearing in it.
Sev 3: must be resolved within 16 hours. Users raise Sev 3 tickets for on-demand loads, access-related issues, or data discrepancy problems.
Sev 4: no timeline, e.g. a new user looking for a BW connection.
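The severity windows above can be captured in a small lookup; an illustrative Python sketch (the datetimes are made up):

```python
from datetime import datetime, timedelta

# Resolution windows in hours, from the notes; Sev 4 has no timeline.
SEVERITY_HOURS = {1: 2, 2: 4, 3: 16, 4: None}

def due_time(opened: datetime, severity: int):
    """Deadline for a ticket opened at `opened`, or None for Sev 4."""
    hours = SEVERITY_HOURS[severity]
    return None if hours is None else opened + timedelta(hours=hours)
```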
SAP notes:
1. Corrective notes: SAP provides corrections via these notes; the customer implements the note in their systems to resolve the problem. Using the SNOTE transaction we can download and implement a SAP note. On implementation, the system asks for a transport request to collect the changes, so a SAP note is transportable: we apply it in the development system and transport it to QA and production.
How to implement a corrective note:
BD1 -> transaction SNOTE -> Goto -> SAP Note Browser -> enter note 1573359 -> Execute -> select the note and choose Implement SAP Note.
Example: we have stuck tRFCs, which we clear manually in transaction SM58 with F6. That is the manual approach; SAP provides a permanent solution for stuck tRFCs with note 1573359. This note delivers a new program, RSTRFCCK, which clears the stuck tRFCs; schedule it as a batch job every five minutes in both ECC and BW.
2. Procedural notes: SAP provides procedures or steps to follow to resolve an error. Note 1576331 is an example: it describes how to run the RSDBTIME program to check for inconsistencies between application-server and kernel time zones.
3. Informative notes: we do not apply these; they document best practices to follow. Example: note 1073268 describes the best way to implement 0IC_C03.
SPS: support package stacks, i.e. collections of SAP notes (patch 1 through 25, say). Support packs also contain new functionality (new DataSources, DataSource changes, changes to extractors, new cubes). Customers try to implement the latest patches at least once a year.
Patches (SPs) are applied by Basis consultants; SPs are not transportable, so Basis must apply them system by system. Basis first applies them to the development system, where functional and technical testing is carried out; once testing passes, the same SPs are applied to the QA systems and the technical/functional testing is repeated. Only when everything is fine does Basis apply the SPs in the production environment.
Role of the BW consultant while Basis applies support packs (a downtime activity), per the outage or downtime plan: clearing queues.
Queues in the ECC system: the BW consultant's role is to make sure all entries in the relevant queues are down to 0. If users are posting (creating) data, entries keep arriving in these queues, so the business users must be locked; security consultants lock the system-wide business users so they cannot log in and post data.
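The "all queues at 0" check before a support pack can be sketched as follows (illustrative Python; the queue names are examples, not a fixed SAP list):

```python
def queues_clear(queue_counts: dict) -> list:
    """Return the names of queues that still contain entries; the
    support-pack outage can proceed only when this list is empty."""
    return [name for name, count in queue_counts.items() if count != 0]
```

For example, `queues_clear({"LBWQ": 0, "RSA7": 3, "SM13": 0})` reports that RSA7 must still be drained before Basis can proceed.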
Development process
Waterfall, SDLC and ASAP are methodologies used to develop objects and implement SAP projects.
Development phases:
1. Requirements phase: collect requirements from the user by conducting interviews and meetings; prepare the requirements document and get sign-off from the user.
2. Design phase: based on the naming conventions, prepare functional and technical specs (design documents) and get them approved by the senior consultant.
3. Construction phase: build the objects in the development system according to the design documentation and capture them in transport requests. Prepare unit test cases, conduct unit testing and capture the results. Get approvals (WPR) from senior consultants on the unit test cases, results and transports, then release the objects to the QA system. Prepare the implementation plan and get WPR approval for it.
4. Testing phase: after the objects import successfully into the test system, prepare system test cases and conduct system testing; request WPR for these. Once approved, ask the user to perform UAT (user acceptance testing). Once UAT passes, the CR is ready to move to production.
5. Implementation (go-live) phase: ask the Basis or migration team to import the transport requests into the production system according to the implementation plan; perform pre- and post-implementation steps if necessary.
6. Support: support the delivered objects (loading, reporting).
12.0 Development Phases
12.1 Requirements Phase - Analyst
12.1.1 Create the Requirements
12.1.2 Revisit the HLE and Complete the Detailed Estimate
12.1.3 WPR for Requirements and Estimate
12.1.4 Requestor Approval of Requirements
12.1.5 Q-Gate 1 Approvals
12.2 Design Phase - Designer
12.2.1 Create the Design
12.2.2 WPR for Design
12.2.3 Q-Gate 2 Approvals
12.3 Construction Phase - Developer and Tester
12.3.1 Development - Developer
12.3.2 Unit Test Plans, Cases, and Results - Developer
12.3.3 WPR for Construction and Unit Test - Developer
12.3.4 System Test Strategy, Plan and Cases - Tester
12.3.5 WPR for System Test Strategy, Plan and Cases - Tester
12.3.6 Q-Gate 3 Approvals - Developer
12.4 Testing Phase - Developer and Tester
12.4.1 Complete Implementation Plan, Release of Transports and System Test Preparations - Developer
12.4.2 WPR for Implementation Plan - Developer
12.4.3 Conduct System Testing - Tester
12.4.4 WPR for System Test Results - Tester
12.4.5 Q-Gate 4.1 Approvals - Tester
12.4.6 Request UAT be Performed by the Requestor - Tester
12.4.7 Q-Gate 4.2 Approvals - Tester
12.5 Implementation Phase - Responsible Person and Implementer
12.5.1 Preparation for Go/No-Go Decision Meeting - Responsible Person
12.5.2 Preparation for Transport to Production - Implementer
12.5.3 Implement - Implementer
12.5.4 Post Production - Implementer
Releases: M (monthly), Q (quarterly), special releases for projects, immediate releases to break-fix production issues, and year-end releases. Release management takes care of these.
WPR (work product review): a quality process; whenever we create a work product, another person must review and approve (endorse) it.
Quality gates: Q1 through Q5, the time frames (deadlines) for completing each phase; the Q1 gate, for example, is the deadline for completing requirements gathering.
Sign-offs: user approvals and WPR approvals, received by e-mail.
HLE (high-level estimation): based on the initial requirements, we estimate the effort to complete the objects and documents; hours are calculated per phase.
Documents we create in the development process:
Phase 0 (prior to requirements gathering): HLE document
Phase 1 (requirements phase): RD (requirements document)
Phase 2 (design phase): design documents (functional and technical specs) and detailed estimation documents
Phase 3 (construction phase): unit test cases and results documents
Phase 4 (testing phase): system test cases and results documents, and UAT
Phase 5 (implementation): implementation document
Phase 6 (support): production synopsis document, which provides the information needed to support the delivered objects in the production system
Determines, during the hierarchy/attribute realignment run, the percentage of change at which the delta process is switched to reconstruction.
Use
Aggregates are adjusted to the new attributes and hierarchies during hierarchy/attribute realignment runs. There are various adjustment strategies: the aggregate can be completely reconstructed, or the old records can be updated negatively and the new records positively (delta process). Which procedure is used depends, among other things, on how much has actually changed.
Enter a number between 0 and 99; 0 means the aggregate is always reconstructed. Tune the parameter until the system runs at its fastest.
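The threshold behaviour can be paraphrased in code (an illustrative sketch of the rule as described, not SAP's internal logic):

```python
def adjustment_strategy(changed_pct: float, threshold: int) -> str:
    """Threshold 0 always rebuilds the aggregate; otherwise rebuild once
    the share of changed records exceeds the threshold, else apply the
    delta (negative old records, positive new records)."""
    if threshold == 0 or changed_pct > threshold:
        return "reconstruction"
    return "delta"
```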
Custom applications (no SAP standard cubes): we need to sit with an SME (subject matter expert) to identify the strong entities in the application area, finalize the weak entities, and then build an entity relationship (ER) model for the entities, i.e. a bubble model. For each strong entity we create a dimension.
Once the new cube is created and loaded, we run the program SAP_INFOCUBE_DESIGNS to see the number of entries in each dimension table as a percentage of the fact table. Also run the RSRV tests "Database Information about InfoProvider Tables" and "Entries Not Used in the Dimension of an InfoCube".
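What SAP_INFOCUBE_DESIGNS reports per dimension is essentially this ratio; a sketch (the ~20% warning limit is a common rule of thumb, assumed here, not an SAP-documented constant):

```python
def dimension_ratio(dim_rows: int, fact_rows: int) -> float:
    """Dimension-table size as a percentage of the fact table."""
    return 100.0 * dim_rows / fact_rows

def flag_degenerate(dim_rows: int, fact_rows: int, limit: float = 20.0) -> bool:
    """Flag dimensions above the assumed ~20% limit as candidates for
    remodeling (e.g. as line-item dimensions)."""
    return dimension_ratio(dim_rows, fact_rows) > limit
```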
Object 5 - DataSource enhancement - CMOD
DataSource types
3. CMOD: create/display the project, select components, select the right customer exit, then select the include program and write the code.
Requirement: the user would like to see withholding tax and tax-type information in the report.
First check whether the required InfoObjects are already in the InfoCube; if so, only the query needs modifying (query enhancement). Then check whether the required InfoObjects exist on the BW side, for the possibility of a lookup or of turning on attributes (e.g. withholding tax).
Logistics extraction
Part of business content extraction; useful for extracting data from the logistics application areas of ECC.
In ECC, the logistics application areas are:
Transporting DataSources
Select the DataSources that you want to transport from the test system into the productive system, and choose
Transport. Specify a development class and a transport request, so that the DataSources can be transported.
Maintaining DataSources
To edit a DataSource, select it, and choose Maintain DataSource. The following editing options are available:
o Selection
When scheduling a data request in the BW Scheduler, you can enter selection conditions for the data transfer.
You can, for example, determine that data requests are applied only to data from the last month.
If you set the Selection indicator for a field in the extract structure, the data for this field is transferred according
to the selection conditions determined in the scheduler.
Hide Field
To exclude a field in the extract structure from the data transfer, you must set this indicator. The field is used in
the BW to determine the transfer rules, and can no longer be used to generate the transfer structure.
Cancelation Field
Reverse postings are possible for customer-defined key figures. Cancelations are therefore only active with
certain transaction DataSources. These are DataSources that have a field designated as a cancelation field, for
example, the Update Mode field in the DataSource 0FI_AP_3. If this field has a value, the data records are
interpreted as reversal records in the BW.
If you want to carry out a cancelation posting for a customer-defined field (key figure), set the Cancel indicator.
The value of the key figure is transferred inverted (multiplied by -1)into the BW system.
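The inversion described for cancelation records can be sketched like this (illustrative Python; the field names `amount` and `reversal` are made up):

```python
def apply_cancelation(records):
    """Invert customer-defined key figures (multiply by -1) for records
    flagged as reversals, as the cancelation-field logic describes."""
    return [dict(r, amount=-r["amount"]) if r.get("reversal") else r
            for r in records]
```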
Field Known Only in Exit
You can improve the quality of data by adding fields in append structures to the extract structure of a
DataSource.
For fields in an append structure, the indicator Field Known Only in Exit is set, meaning that, by default, these
fields are not passed to the field list and the selection table in the extractor.
Remove the Field Known Only in Exit indicator if you want the Service API to pass the field in the append
structure to the extractor, along with the fields from the delivered extract structures in the field list and in the
selection table.
Enhancing Extract Structures
If you want to transfer additional information for an existing DataSource from a source system into the BW, you have to
enhance the extract structure of the DataSource with additional fields.
To do this, you create an append structure for the extract structure.
a) Use the Enhance Extract Structure pushbutton to reach the field maintenance for the append structure. The
name of the append structure is generated from the extract structure name in the customer namespace.
b) Enter the fields you want to append and the data elements based on them into the field list. All functions
available for field maintenance for tables and structures are available here.
c) Save and activate the append.
For more information on the append structure, see the ABAP Dictionary documentation for maintaining tables.
Function Enhancements
To fill the fields of the append structure with data, create a customer-specific function module. Information on enhancing
the SAP standard with customer-specific function modules can be found in the R/3 library under Basis -> ABAP
Workbench -> Enhancements to the SAP Standard -> R/3 Enhancement Concept or under Enhancing
DataSources.
Testing Extractions
If you want to test the extraction in the source system independent of a BW system, choose DataSource -> Test
Extraction.
To create a node on the same level or under it, place the cursor on this node and choose Object -> Create Node. You can
also create subordinate nodes by choosing Object -> Create Children.
To rename a node, to expand or compress it, place the cursor on the node and click the corresponding pushbutton.
To reassign a node or a subtree, select the node to be reassigned ( by positioning the cursor over it and clicking the
Select Subtree pushbutton), position the cursor over the node to which the selected node is to be assigned, and click on
the Reassign pushbutton.
If you select a node with the cursor and choose Set Section, the system displays this node with its subnodes. You can
use the respective links in the line above the subtree to jump to subordinate nodes for this subtree.
When you select a node with the cursor and choose Position, the node in the first line in the view is displayed.
All DataSources for which no valid (assigned) application component could be found appear under the node
NODESNOTCONNECTED. The node and its sub-nodes are only constructed during the transaction runtime, and updated
when saving in the display.
NODESNOTCONNECTED is not stored persistently in the database. For this reason, it is not transferred into other
systems when the application component hierarchy is transferred.
Note that hierarchy nodes created under the NODESNOTCONNECTED node are lost when you save. After saving, only
those nodes under NODESNOTCONNECTED are displayed that were moved with DataSources under these nodes.
Example
A DataSource lies under an application component X. You transfer a new application component hierarchy from Business
Content, which does not contain the application component X. The system then automatically assigns this DataSource
under the component NODESNOTCONNECTED in this application component.
Special DataSources can be delivered with Business Content that are not used to extract data but to reconcile data with
one or more Content DataSources. With these reconciliation DataSources you can check that the data loaded from other
DataSources is correct.
The RECONCILIATION node of the application component indicates that reconciliation DataSources of this type are
assigned to the application component. If DataSources exist for an application component that can be flagged as
DataSources for reconciliation, this is displayed in the corresponding RECONCILIATION lower-level node. If no
DataSources exist for an application component that can be used for reconciliation, this node is not displayed.
Note that changes made to the application component hierarchy are only valid until the next transfer from Business Content takes
place.
Activities
The status symbol indicates whether the update in a delta queue is activated for a DataSource. If the status symbol is green, the
delta queue is activated, meaning it is filled with data records when an update process or a data request from BW is running. A
prerequisite for the delta update is the successful completion of the delta process initialization in the BW scheduler.
Refresh
If you choose Refresh,
1. recently activated delta queues are displayed,
2. data records recently written to the delta queue are taken into account, and
3. data records that were deleted when reading the data records are no longer displayed.
Deleting queues: the DataSource is deleted from RSA7 (the initialization option for the DataSource is removed).
You delete the entire queue by choosing Queue -> Delete Queue. To write data records from the corresponding DataSource into a
delta queue, you need to reinitialize the delta process.
Why and when to delete setup tables:
1. To avoid invalid data transfers and data duplicates in the BW system, we need to delete the setup table data before
reconstructing (filling the setup tables).
2. When Basis is applying support packs: if any application area's setup tables contain data, Basis cannot apply the
SPs, so we need to delete the setup table data first.
3. In the development system, when we change or enhance an extract structure, the system will not allow the
changes until the setup table data is deleted from all clients of the development environment.
4. When we import DataSource or extract structure changes into the target systems (QA, PRD), we need to delete
the setup tables in the target systems to avoid transport failures.
None of the clients in the target system in the V3 update for the
application 11 should contain data. If you are unsure, start the V3
update of the application 11 in all clients.
If there is still data in the central delta management of the target
system, it must be retrieved by BW before the transport takes place.
If you have already reconstructed within this target system, it may
be that the data still exists in the reconstruction tables. After
the transport, you can no longer transfer these entries into BW. You
must delete the contents of the reconstruction tables of the
application 11 in the target system and run another reconstruction
if need be.
If there is an update log in the target system, you cannot read the
log data of the application 11 after the transport has been run. You
can only read it again once data is posted and the last log entry
is overwritten.
Use the report RMCSBWCC to display a log for the changed extract
structure in the target system, to see if any of the above problems
exist. An additional switch deletes all update logs for the application
of the selected extractor.
3. DataSource maintenance
4. Fill setup tables
Transports
1. Transactions: SE01, SE03, SE09, SE10.
In real time we release transports only from the development environment to QA; the Basis or migration team then
imports them from the queue into the production environment, in the sequence we propose in the
implementation documentation.
Precautions:
1. We need to take care of the transport sequence based on object dependencies:
A query should not move before its InfoProviders.
A workbook should not move before its query.
An InfoCube should not move without its required InfoObjects.
2. Objects in all three systems should be in sync; otherwise we may end up with transport failures.
Return codes
RC = 0 Successful
RC = 4 Warning
RC = 8 Error (e.g. object missing, updates could not be activated)
RC = 16 Error (e.g. target system not defined in STMS, package not available in the target system)
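The return codes above can be wrapped in a small helper, for example in a log-parsing script. This is an illustrative sketch; the function and dictionary names are our own, not part of any SAP API:

```python
# Illustrative mapping of transport return codes to their meaning.
RC_MEANING = {
    0: "success",
    4: "warning",
    8: "error (e.g. object missing, updates could not be activated)",
    16: "error (e.g. target system not defined in STMS, missing package)",
}

def classify_rc(rc: int) -> str:
    """Treat RC <= 4 as an acceptable import, anything higher as failed."""
    return "ok" if rc <= 4 else "failed"
```

In practice a warning (RC = 4) is usually reviewed but accepted, while RC = 8 and RC = 16 require re-import after the cause is fixed.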
Q&A
Answers
Q1. Logistics extraction step by step approach in implementation phase.
LO Cockpit Step By Step
ECC - Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
SD Sales BW -> Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on `Maintenance' to maintain such Extract Structure
Select the fields of your choice and continue
Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
- Next step is to Delete the setup tables
7). Go to T-Code SBIW. Select Business Information Warehouse
Setting for Application-Specific Data sources -> Logistics -> Managing Extract Structures -> Initialization -> Delete the
content of Setup tables (T-Code LBWG) -> Select the application (01 Sales & Distribution) and Execute
- Now, Fill the Setup tables
8). Select Business Information Warehouse
Setting for Application-Specific DataSource -> Logistics -> Managing Extract Structures -> Initialization -> Filling the
Setup tables -> Application-Specific Setup of statistical data -> SD Sales Orders Perform Setup (T-Code OLI7BW)
a. Specify a Run Name and time and Date (put future date)
b. Execute
- Check the data in Setup tables at RSA3
Plan Collective Run (if necessary after you run Full Load).
BW
Q5. How to clear the data from Delta Queue (RSA 7)? Is it possible to delete directly?
To delete the data in a delta queue, select the delta queue and, from the context menu, choose Delete Data.
Q6. What are all the implementation steps we need to perform when support packs are
applying on production environment?
Q7. Is it possible to re - initialize logistics without outage? Then how?
Yes, Early Delta Initialization
With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the
application during the initialization request in the source system. This means that you are able to execute the
initialization of the delta process (the init request), without having to stop the updating of data in the source system.
You can only execute an early delta initialization if the DataSource extractor called in the source system with this data
request supports this.
Early init allows you to do Delta settings first and then pull the records. Once the delta settings are done your update
queue will then start collecting the changed and new records.
Q8. How to transfer entries from extraction queue to delta queue manually?
Q9. What are all the precautions need to be taken when logistics implementation or
enhancement is going in to production environment?
1. Import transports when the target system is not being posted to (an outage). Otherwise you will need to
re-initialize, because documents posted during this time are lost.
2. None of the clients in the target system should contain data in the V3 update for application <no>. If you are unsure, start
the V3 update of the application in all clients.
3. Delete the contents of the reconstruction tables of the application in the target system and run another reconstruction if needed.
4. Use report RMCSBWCC to display a log for the changed extract structure in the target system.
2LIS_03_BX - This structure is used to extract the stock data from MM Inventory Management for initialization
to a BW system.
2LIS_03_BF - This structure is used to extract the material movement data from MM Inventory Management
(MM-IM) consistently to a BW system.
2LIS_03_UM - This structure is used to extract the revaluation data from MM Inventory Management (MM-IM)
consistently to a BW system.
Q12. Explain a single report starting from requirements to your approach to accomplish the
task.
Requirement: Report to analyze Avg. delivery time by vendor.
This report can answer questions like:
a. Who can deliver a material the fastest?
b. What procurement lead time must you reckon with for a certain material?
Approach: Info Provider 0PUR_C03 Purchasing Data.
Required Char and KF:
0VERSION (Filter).

INFO OBJECT              DESCRIPTION
0CALMONTH  (Free Char)   Calendar Month
0PLANT     (Free Char)   Plant
0VENDOR    (Free Char)   Vendor
0MATERIAL  (Row)         Material
0AVGDTIME  (Column)      Avg. Delivery Time
0AVGWDTIME (Column)      Avg. Weighted Delivery Time

With the information from the KFs 0AVGDTIME and 0AVGWDTIME we can know the avg. delivery time for a shipment to arrive
from the vendor, so this gives an idea about how many days in advance an order for the required material has to be
placed.
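Outside BW, the idea behind the two key figures can be mimicked with plain and quantity-weighted averages. A rough sketch; the record layout and data are invented for illustration, and the actual Business Content derivations of 0AVGDTIME/0AVGWDTIME inside 0PUR_C03 are more involved:

```python
from datetime import date

# Hypothetical delivery records: (vendor, order_date, goods_receipt_date, quantity).
deliveries = [
    ("V1", date(2004, 1, 1), date(2004, 1, 11), 10),   # 10 days, qty 10
    ("V1", date(2004, 2, 1), date(2004, 2, 21), 30),   # 20 days, qty 30
]

def avg_delivery_days(rows):
    """Plain average of delivery times in days (the 0AVGDTIME idea)."""
    days = [(gr - od).days for _, od, gr, _ in rows]
    return sum(days) / len(days)

def weighted_avg_delivery_days(rows):
    """Average weighted by delivered quantity (the 0AVGWDTIME idea)."""
    total_qty = sum(qty for *_, qty in rows)
    return sum((gr - od).days * qty for _, od, gr, qty in rows) / total_qty
```

The weighted figure leans toward the larger delivery, which is why the two key figures can differ for the same vendor.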
Q16. What is the option to make global structure to local structure? When is this option
used?
Scenario: Add structure elements that are unique to the specific query.
Changing the global structure changes the structure for all the queries that use it. That is the reason
you go for a local structure.
Coming to the navigation part: in the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query
icon. In the SAP BEx Open dialog box, choose Queries. Select the desired InfoCube and choose New. On the Define the
Query screen, expand the Structures node in the left frame. Drag and drop the desired structure into either the Rows or
Columns frame. Select the global structure, right-click, and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this regard. You will have to delete the
local structure and then drag and drop the global structure into the query definition.
When you try to save a global structure, a dialogue box prompts you to confirm changes to all queries.
The enhancement RSR00001 (BI: Enhancements for Global Variables in Reporting; transaction SMOD; component or
function module EXIT_SAPLRRS0_001) is called several times during the execution of the report. The I_STEP
parameter specifies when the enhancement is called.
The following values are valid for I_STEP:
I_STEP = 1
Call is made directly before variable entry.
I_STEP = 2
Call is made directly after variable entry. This step is only executed if the same variable is not input-ready and
could not be filled for I_STEP = 1.
I_STEP = 3
In this call, you can check the values of the variables. When an exception (RAISE) is triggered, the variable
screen appears again. I_STEP = 2 is then also called again.
Q20. When are text variables used and in what places can they be created?
Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural
components.
You can use text variables when you create calculated key figures, restricted key figures, selections and formulas, in
the description of these objects.
Q21. Explain the business scenario for formula variable with replacement path and Cmod.
Business scenario: To find out the document count for analyzing Number of Orders in a given period. This can be
done using Formula variable with processing type as Replacement path.
1. Create a Calculated KF to determine No of Documents/orders.
2. Create a formula Variable for getting Document Count. ( processing type -> Replacement path).
3. In select Char Field choose Document Number.
4. In the Replace Variable with drop down box, choose Attribute Value.
5. In the Attribute drop down, select Characteristic Reference (Constant 1)
6. Query Properties.
Q22. Explain the business scenario for formula collision and Cell editor.
Q23. When should we use offset values?
To analyze key figures that have a fixed time relationship with one another, you can use a
variable offset. For example, compare current sales figures with the figures for the same period in the
previous year.
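An offset of -12 on a calendar-month variable boils down to simple month arithmetic. A loose sketch of what BEx does with the variable value (the function name is ours):

```python
def offset_month(calmonth: str, offset: int) -> str:
    """Shift a 0CALMONTH value in YYYYMM format by a number of months,
    handling year boundaries (e.g. 200901 - 1 month -> 200812)."""
    total = int(calmonth[:4]) * 12 + int(calmonth[4:]) - 1 + offset
    return f"{total // 12:04d}{total % 12 + 1:02d}"
```

In a query you would restrict one key figure column to the variable and a second column to the variable with offset -12, giving a current-vs-previous-year comparison from a single user entry.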
Q24. Ways to improve the performance of set up table filling.
Q25. What is the difference between repair full request and full request?
50) E_T_DATA, I_T_DATA.
50) I_T_DATA is used in a master DataSource with attributes,
E_T_DATA is used in a master DataSource with texts,
and C_T_DATA is used in a transactional DataSource.
49) What are the options Request by Request and Only Get Delta Once?
49) 1. Only Get Delta Once is useful for a snapshot scenario: it gets all the data
from the ODS without having a delta pointer (data mart status) set.
45) Each data load into an InfoCube is uniquely identified by a request ID. The
concept of deleting these load identifiers in InfoCubes is called compression.
Compression improves performance, as it removes redundant data.
- Compression reduces memory consumption due to the following:
1. Deletes request IDs associated with the Data.
2. It reduces redundancy by grouping by on dimension & aggregating on cumulative key-figures.
SAP compression reduces the number of rows in the F fact table (sometimes to zero). This is because when Requests
are compressed data moves from F - Fact Table to E- Fact Table.
- A smaller F fact table results in
1. Accelerated loading into the F fact table.
2. Faster updating of the F fact table indexes.
3. Accelerates aggregate rollups, since the F fact table is the source of data for rollup.
4. Shortens RUNSTATS processing time on the F fact table.
5. Reduces index REBUILD processing time if indexes are dropped as part of load.
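What compression does to the fact table can be imitated with a group-by that drops the request ID and sums the cumulative key figures. A toy model for illustration only, not the actual E/F table layout:

```python
from collections import defaultdict

# Toy F fact table rows: (request_id, dimension_keys, cumulative_amount).
f_table = [
    (1, ("P1", "M1"), 100),
    (2, ("P1", "M1"), 50),   # same dimension keys, different request
    (2, ("P1", "M2"), 70),
]

def compress(rows):
    """Drop the request ID and aggregate the cumulative key figure,
    collapsing rows that differ only in their request."""
    e_table = defaultdict(int)
    for _request, dims, amount in rows:
        e_table[dims] += amount
    return dict(e_table)
```

The three F-table rows collapse into two E-table rows: the two loads for ("P1", "M1") merge into one row. This is the redundancy removal described above.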
44) Repartitioning and remodeling.
44) Repartitioning is a great tool provided by SAP which gives the business another chance to rethink its
performance needs without disturbing the data that already exists in the InfoCubes. It is a feature that
lets us redefine partitions without deleting the data in the InfoCubes.
Process Overview
1. Creates shadow tables for the E and F fact tables, starting with /BIC/4*
2. Copies data from the E and F fact tables to the shadow tables.
3. Creates partitions on the E and F fact tables.
4. Copies data back to the E and F fact tables.
5. Recreates indexes.
Pre-requisites
1. Make sure you have enough tablespace in the PSAP*FACT.
2. If you are using 0FISCPER for repartitioning, make sure that the fiscal year variant is constant in
the InfoProvider settings. If not, use the program RSDU_SET_FV_TO_FIX_VALUE.
3. Take pre-snapshots of data for later validation after repartitioning.
Repartitioning Activity
1. Right click on the cube and select Additional Functions->Repartitioning.
2. There are three options to add, merge or complete partitioning. Select the desired option and click on
Initialize. Continue through the popups.
3. Check the monitor.
Post Repartitioning.
1. Check and rebuild the indexes and/or aggregates if needed.
2. Validate the data against the pre-snapshots.
3. Once the data is validated the shadow tables can be cleaned up.
Remodeling is a concept introduced in BI 2004s to manage changes to the structure of an
InfoProvider effectively while it is already loaded with data and in use.
Characteristics can be remodeled in the following ways:
- Inserting, or replacing characteristics with:
o Constants
o Attribute of an InfoObject within the same dimension
o Value of another InfoObject within the same dimension
o Customer exit (for user-specific coding).
- Delete
Key figures can be remodeled in the following ways:
Inserting:
o Constants
o Customer exit (for user-specific coding).
Replacing key figures with:
o Customer exit (for user-specific coding).
Delete
43)Index types:
Physical index:
Logical index
BIA index
Compound index
Memory index
Temporary index
42) 1. When the number of master data records is very high, for example a customer master running into
millions.
2. When you add this to a dimension, the DIM table size increases dramatically and cube
performance suffers, since a DIM ID is created for every possible combination of characteristics:
if you put the customer master (1 million records) and the material master (50,000 records) in one
dimension, you can get up to 1 million * 50,000 records in the DIM table.
3. To avoid this you would separate the customer master into its own dimension, but if even then
the size is high you can declare it a line item dimension, which makes the SID part of the fact
table and hence improves cube performance. However, you can have only one characteristic in a line
item dimension.
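The DIM table blow-up is just combinatorics. A quick sketch with the numbers from the point above (the worst case assumes every combination actually occurs):

```python
customers = 1_000_000
materials = 50_000

# Worst case when both characteristics share one dimension: one DIM ID
# per combination that occurs, up to the full cross product.
max_dim_rows = customers * materials

# Line item dimension: no DIM table at all. The SID of the single
# characteristic goes directly into the fact table, so the dimension
# "size" is just the master data cardinality.
line_item_rows = customers
```

Even if only a fraction of the 50 billion combinations occurs, the shared dimension dwarfs the 1 million rows a line item dimension implies, which is why the join cost differs so sharply.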
41) Business scenario for end routine. parameters
40) special info objects in INFOSET and MULTIPROVIDER
39) How to extract the data selectively from source system. Give info package options to do this.
The Data Selection tab of the InfoPackage lets us load data selectively.
You select a variable type for a field and confirm the selection; if you then choose Detail for Type, an
additional dialog box appears in which you can freely restrict the values of the fields.
InfoPackage:
In the InfoPackage, select the Type, which results in the variables below.
In this scenario, let us load the data or transactions for the months 05.2009 to 06.2009 using option
5, Free Temporal Selection. Type 5 gives you a free selection of all fields.
For the desired field 0CALMONTH we have confirmed variable type 5.
On confirmation, a screen pops up for entering the From value of 0CALMONTH, the To value of 0CALMONTH, the Next
Period from value and the Period Indicator. After providing these values you are prompted to enter the value
for Number of Periods until Repetition.
With the above selection, the data is requested for the time span 05.2009 (200905) to 06.2009 (200906), including
both limit values.
The Next Period from value indicates the period for the second data request, which spans
07.2009 (200907) to 08.2009 (200908).
For the Period Indicator Type we can choose Year-Like Period (ongoing) or Year/Period-Like.
We select Period Indicator 1, Year/Period-Like, which is sensible; with period type 0 the
request is carried out with ongoing values.
After the 8th run the system starts counting from 01.2010 (201001), as the Period Indicator is 1,
Year/Period-Like.
If Period Indicator Type 0 (Year-Like Period (ongoing)) is selected for the above scenario,
after the 8th run, instead of starting the load from 01.2010 (201001), the scheduler starts over again with
the time span 05.2009 (200905) to 06.2009 (200906), which is not sensible.
On scheduling the Info Package, in the monitor screen you can observe the Selection field with Temporal
Free Selection Variable Value (200905-200906) in the header tab.
Below is the required data in Target ZSELEC, which are the list of products in the months 05.2009 to
06.2009.
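The rolling two-month selection above is easy to sketch. This is a loose illustration of how successive request windows advance (Year/Period-Like behaviour); the scheduler's "periods until repetition" reset for the ongoing indicator is deliberately omitted, and the function names are ours:

```python
def month_add(calmonth: str, offset: int) -> str:
    """Shift a 0CALMONTH value (YYYYMM) by a number of months."""
    total = int(calmonth[:4]) * 12 + int(calmonth[4:]) - 1 + offset
    return f"{total // 12:04d}{total % 12 + 1:02d}"

def request_windows(start: str, length: int, runs: int):
    """Successive [from, to] selection windows, one per InfoPackage run,
    each advancing by the window length (Year/Period-Like style)."""
    windows, current = [], start
    for _ in range(runs):
        windows.append((current, month_add(current, length - 1)))
        current = month_add(current, length)
    return windows
```

Starting at 200905 with two-month windows, the second request covers 200907-200908 as in the scenario, and later windows roll over the year boundary into 2010 instead of wrapping back to 200905.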
38) Go to RSRT. Give the name of the query you have created on the cube. Click "Execute + Debug".
Check the box for "Select Aggregate" and press Enter. This automatically selects "Display Aggregate
Found". Once execution starts you can see whether the query uses aggregates or not, and also
which aggregate is being used.
37) How to Debug CMOD Code in Data source enhancement
37) There are many ways to debug the code if you wish to enhance the DataSource. In RSA3 you can do
that by clicking Debug mode option.
Another way to do it:
If you want to enhance transactional data, you choose the component EXIT_SAPLRSAP_001 of the
enhancement RSAP0001, which has three further components for master data (attributes, texts, hierarchies).
This is a function module enhancement. If you go to the source code of the function module, you will find an
include program, ZXRSAU01. Double-click on this program to reach the body of the
program. This is the place where you need to write code, as follows:

CASE i_datasource.
  WHEN '2LIS_XX_XXXXX'.
    BREAK-POINT.   " or: BREAK username. - a soft or hard-coded break point; better hard-code it
ENDCASE.

Now when you execute the DataSource through RSA3, the program stops here. The data extracted from the DataSource is
available in the internal table C_T_DATA, which you can now enhance by writing code. You can
step through with F5 or F6, as when debugging any other ABAP program.
36) How to debug start and end routines?/ How to debug transformations
36) Step.1 put a hard coded break point (BREAK-POINT) in Start Routine/End Routine which you desire to
debug.
Step 2. Open DTP of desired transformation; go to Execute tab. Choose processing mode as Serially in the
Dialog Process (for Debugging).
Step 3. Click on change Breakpoints button in front of Transformation. Select Before Transformation check
box to debug Start Routine. Select Before End Routine check box to debug End Routine.
Step 4. Click on Simulate button to start debugging. Debugging will not actually load data into data target;
instead it will simulate the code behavior for Data Loading.
35) How to debug variable CMOD code?
You have the following data:

Region           Customer   Sales Volume (USD)
NY               A          400,000
NY               B          200,000
NY               C          50,000
CA               A          800,000
CA               C          300,000

You want to use the query to determine the number of customers for which the sales volume is less than 1,000,000
USD. To do so, you create the calculated key figure Customer sales volume <= 1.000.000 (F1) with the following
properties:
General tab page: Formula definition: Sales Volume <= 1.000.000
Aggregation tab page: Exception Aggregation: Total, Ref. Characteristic: Customer
This query would deliver the following result:

Region           Customer   Sales Volume (USD)   F1
NY               A          400,000
NY               B          200,000
NY               C          50,000
NY Result                   650,000
CA               A          800,000
CA               C          300,000
CA Result                   1,100,000
Overall result              1,750,000            2

The overall result of the calculated key figure F1 is calculated as follows: sales volume of customer A (400,000 +
800,000) -> does not fulfill the condition (sales volume <= 1,000,000) -> 0; sales volume of customer B (200,000) -> fulfills the
condition -> 1; sales volume of customer C (50,000 + 300,000) -> fulfills the condition -> 1. When totaled, this gives 2 as
the overall result for F1.
A query with a drilldown by region would give the following result:

Region           Sales Volume (USD)   F1
NY               650,000
CA               1,100,000
Overall result   1,750,000            2
Due to the assignment of the reference characteristic Customer to the calculated key figure F1 for the exception
aggregation, the query also delivers the required data without a drilldown by reference characteristic.
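The exception aggregation can be replayed in a few lines: total per customer first, then count the customers that fulfil the condition. A sketch using the example data above (the function name is ours):

```python
from collections import defaultdict

# Rows from the example above: (region, customer, sales_volume_usd).
rows = [
    ("NY", "A", 400_000), ("NY", "B", 200_000), ("NY", "C", 50_000),
    ("CA", "A", 800_000), ("CA", "C", 300_000),
]

def f1_overall(rows, limit=1_000_000):
    """Exception aggregation Total over the reference characteristic
    Customer: aggregate each customer's sales first, then count the
    customers whose total fulfils the condition (total <= limit)."""
    totals = defaultdict(int)
    for _region, customer, sales in rows:
        totals[customer] += sales
    return sum(1 for total in totals.values() if total <= limit)
```

Customer A totals 1,200,000 and fails the condition; B and C pass, giving the overall result of 2 described in the text. Note the order of operations: aggregate over the reference characteristic first, apply the formula second, which is exactly what distinguishes exception aggregation from a plain formula.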
31) What are Safety upper and lower intervals? How they work?
31) The safety interval should be set so that no document is missed, even if it was not yet stored in the database
table when the extraction took place.
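For a timestamp-based generic delta, the two intervals shift the selection window like this schematic sketch (parameter names are ours, not SAP's):

```python
def delta_window(last_upper, now, safety_lower, safety_upper):
    """Selection interval for the next delta run.

    - safety_lower moves the lower limit back, so records committed late
      with a timestamp just before the previous upper limit are selected
      again (the overlap can produce duplicates, so the target should be
      able to overwrite, e.g. a DSO).
    - safety_upper moves the upper limit back, so records still being
      written at extraction time are left for the next run.
    """
    return (last_upper - safety_lower, now - safety_upper)
```

With a previous upper limit of 1000, current time 2000, lower interval 60 and upper interval 300, the next run selects timestamps 940 to 1700; everything after 1700 waits for the following extraction.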
29) What is the difference between repair full request and full request?
29) Repair full request: if you do a full repair you don't have to re-init. Basically that's one advantage of a full
repair: it won't affect your delta loads.
But if it was a normal full load, you may need to re-init.
A full repair can be described as a full load with selections. The main advantage of a full repair load is that it won't
affect the delta loads in the system. If you load a full request to a target with deltas running, you will have to initialize
again for the deltas to continue; a full repair won't affect the deltas. This is normally done when we lose some data or
there is a data mismatch between the source system and BW.
Repair request: editing a request in the PSA and loading it into the subsequent target, or doing reconstruction.
28) When should we use offset values
28) To analyze key figures that have a fixed time relationship with one another, we use a variable offset. For example,
compare current sales figures with the figures for the same period in the previous year.
27) Explain the business scenario for formula variable with replacement path and Cmod
Scenario:
The group HR administrator wants a detailed line item report that lists all employee absences in a given period. The report is to
show the employee number, the absence start date, together with the end date of the absence and show the number of calendar
days the employee was absent.
The good thing about using this technique is that no redesign work is needed from your technical BW team, no ABAP is involved
and, best of all, it is quick and easy.
Solution:
For this example I created an ODS object called Absence that holds basic employee information along with individual absence
record data.
Follow the steps below:
1. Open the BEx Query Designer and create a new report on your chosen InfoProvider.
2. Drag the Employee, Valid From and Valid To characteristics into the Rows section of the screen. If needed, apply data
selection restrictions to the characteristics as shown in Figure 1.
3. Right-click on the Key Figures structure and select New Formula (Figure 1).
Figure 1
4. In the new formula window, right-click on Formula Variable and choose New Variable (Figure 2).
Figure 2
5. The Variables Wizard will launch and will require you to specify the variable details.
(Click the Next button if the Introduction screen appears.)
6. Enter the variable details on the General Information screen as shown in Figure 3:
enter the Variable Name and Description, and select Replacement Path in the Processing by field.
Click the Next button.
Figure 3
7. In the Characteristic screen (Figure 4), select the date characteristic that represents the first date to use in the
calculation (From Date).
Click the Next button.
Figure 4
8. In the Replacement Path screen (Figure 5), select Key in the Replace Variable with field. Leave all the other options
as they are (the offset values will be set automatically).
Click the Next button.
Figure 5
9. In the Currencies and Units screen (Figure 6), select Date as the Dimension ID.
Click the Next button.
Figure 6
10. The Save Variable screen (Figure 7) displays a summary of the new variable.
Click on the Finish button to save the variable.
Figure 7
11. Repeat steps 4 to 10 to create a second variable for the second date to be used in the calculation. In the example shown,
the characteristic 0DATETO is used to create the variable ABSEND (Absence End Date).
12. You will now be back at the New Formula screen (Figure 8). Drag and drop the two new variables into the formula section
of the screen and insert the subtraction sign (-) between the two.
13. Give the new formula a description and click the formula syntax check button.
Figure 8
14. The new calculated key figure will now show in the Columns section of the BEx Query Designer (Figure 9).
Figure 9
15. In the example shown, the Number of Calendar Days Absent is calculated correctly. See the table of results below.
Employee   Valid From   Valid To     Number of Calendar Days Absent
50000001   17/04/2004   21/04/2004   4
50000002   16/07/2004   29/07/2004   13
50000003   07/01/2004   09/02/2004   33
50000004   04/08/2004   05/08/2004   1
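The formula built above reduces to a plain date subtraction, which is easy to verify outside BEx. A minimal sketch (the function name is ours):

```python
from datetime import date

def calendar_days_absent(valid_from: date, valid_to: date) -> int:
    """What the replacement-path formula computes: the two date variables
    are replaced by the key values of Valid From and Valid To and
    subtracted, yielding the number of calendar days between the dates."""
    return (valid_to - valid_from).days
```

For employee 50000003, 07/01/2004 to 09/02/2004 gives 33 days, matching the results table; picking Date as the Dimension ID in step 9 is what makes BEx treat the difference as days rather than a raw number.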