
Short dump when opening InfoPackage “Request running in OLTP”

• Run report RSSM_RSSDLINIT_RESET_CRASH
• Or delete the line for the DataSource from table RSSDLINIT

Allowed characters in characteristics values


• Note 173241
• Transaction RSKC
• Function modules RSKC_CHAVL_CHECK, RSKC_ALLOWED_CHAR_GET
• Values: ALL_CAPITAL, ALL_CAPITAL_PLUS_HEX

Requests
Mapping short ID ↔ long ID: table /BI0/SREQUID

RSSDLINIT, RSSDLINITDEL, RSSDLINITSEL – status of datasource initializations in BI


System

Communication channels
IDoc
An IDoc (intermediate document) is a format used for data exchange between systems. It can
be compared to an XML file, but has a different format. The structure of the data in a
particular IDoc is defined by the IDoc type, similarly to how a DTD defines the structure of
an XML document.
Besides the real data, IDocs contain a large set of mandatory metadata identifying the sender
and the receiver of the document, as well as status information on each of the processing steps
the document has gone through or will go through in the future.
IDocs are practically “text files” and can thus be transferred between systems using various
communication channels, even e-mail. Between SAP systems, usually some high-performance
protocol (like tRFC) is used to transfer IDocs.
A typical usage scenario for IDocs is EDI (Electronic Data Interchange), where business
documents are sent in electronic format between business partners: for example, you could
send a purchase order as an IDoc via e-mail to your supplier.
IDocs can be sent using function modules. When IDocs are received – according to
configuration – either a workflow is triggered, or a function module is called to which the
IDoc is handed over for processing. (Based on the IDoc, the function module will then create
the appropriate application data.)
You can view IDocs with transaction WE05.
In a BI scenario IDocs are used to send data load requests from the BI system to the source
system, and to send status information back from the source system to the BI system.

sRFC
RFC (Remote Function Call) is the de-facto standard (proprietary) protocol for network
communication between SAP systems.
The simplest form of RFC – called synchronous RFC – requires both systems to be online at
the same time, and the function call is executed immediately in the target system. The calling
system waits for the processing to finish in the target system and only continues its own work
afterwards.
aRFC
Asynchronous RFC calls make it possible for the calling program to continue execution
without waiting for the processing to finish in the target system. However, in this case too the
target system has to be online, because the request is transmitted immediately.
Asynchronous RFC calls are made with the addition STARTING NEW TASK and are
executed in a separate work process. Once the aRFC call has finished, the calling program can
optionally be notified and receive the results of the call.
For BI extraction, aRFC is not normally used.
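An asynchronous call with a completion callback could be sketched as follows (Z_MY_RFC is a hypothetical RFC-enabled function module with made-up parameters; only STARTING NEW TASK, PERFORMING … ON END OF TASK and RECEIVE RESULTS are the actual language constructs):

```abap
DATA gv_result TYPE string.

" Start the call in a separate work process; the program continues immediately
CALL FUNCTION 'Z_MY_RFC'                       " hypothetical RFC-enabled FM
  STARTING NEW TASK 'TASK1'
  PERFORMING on_end_of_task ON END OF TASK
  EXPORTING
    iv_input = 'X'.

" Callback subroutine, invoked once the asynchronous call has finished
FORM on_end_of_task USING p_taskname TYPE clike.
  " Fetch the results of the finished aRFC call
  RECEIVE RESULTS FROM FUNCTION 'Z_MY_RFC'
    IMPORTING
      ev_result = gv_result.
ENDFORM.
```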

tRFC
As it cannot always be guaranteed that there is an online connection between the systems,
tRFC (transactional RFC) was invented.
With tRFC it is not required that both systems be online at the same time. Instead, the
function call requests are collected and saved to the database for later execution. Requests of
one calling application are stored together under a so-called transaction ID (TID). The calling
program does not wait for processing in the target system to finish; it trusts the system to
make the requested calls once the target system is available.
Transactional RFC calls are made using the addition IN BACKGROUND TASK in ABAP
code. The requests are collected in tables ARFCSSTATE and ARFCSDATA upon a call and
can be viewed with transaction SM58.
Once the calling program executes a COMMIT WORK, the collected requests are handed
over to the target system if it is available. If not, a background job is scheduled which will
retry the transmission later by executing program RSARFCSE with the TID as a parameter. If
the transmission fails, this job is executed again periodically, until a maximum number of
retries is reached.
tRFC guarantees several properties: it executes the RFC call requests exactly once, in the
same order as requested within one TID (though several TIDs can run in parallel), and it
makes it possible to execute a set of requests in a transactional manner (meaning either all
requests are executed, or everything is rolled back). During execution in the target system, the
internal program state (e.g. global variables in function groups) is kept between the calls
belonging to the same TID.
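Recording a tRFC request could look like this (the destination and function module names are invented; IN BACKGROUND TASK and COMMIT WORK are the real mechanism):

```abap
" The call is only recorded (in ARFCSSTATE/ARFCSDATA), not executed yet
CALL FUNCTION 'Z_MY_RFC' IN BACKGROUND TASK    " hypothetical RFC-enabled FM
  DESTINATION 'TARGETSYS'                      " made-up RFC destination
  EXPORTING
    iv_input = 'X'.

" COMMIT WORK hands the collected calls over to the target system
" (or schedules RSARFCSE retries if the target is not reachable)
COMMIT WORK.
```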

qRFC
Queued RFC provides additional functionality on top of tRFC. It allows several tRFC calls
with different TIDs to be serialized and executed in the same order as they were created. To
achieve this, requests are stored in queues and processed in a FIFO manner.
Queued RFC always uses an outbound queue in the calling system to queue requests, making
sure that the requests' TIDs are executed in the appropriate order. When the queue is
processed, the requests are sent to the target system using the communication protocols of
tRFC (of course making sure that only one request is executed at a time, as otherwise
serialization could not be guaranteed).
There can be several queues with different queue names. Within a queue, requests are
processed in the same order as received, however queues with different names can be
processed in parallel. Normally there is a separate queue for each application.
Most of the time there is also an inbound queue in the target system. This is used for load-
handling: Instead of processing the incoming requests (from several clients) immediately and
thus overloading the system, the requests are put in a queue, and processed later on when
there are enough free resources.
Queued RFC can be used by calling the function module TRFC_SET_QUEUE_NAME
before executing tRFC calls; the calls will then be put into the appropriate queue.
Processing of the outbound queue is handled by the “QOUT Scheduler” which can be
monitored using transaction SMQS. The inbound queue is processed by the “QIN Scheduler”
(monitor with transaction SMQR) or can be read by applications directly.
You can view the content and control the status (lock/unlock) of the outbound queue with
transaction SMQ1, and the inbound queue with SMQ2.
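Putting the pieces together, a qRFC call could be sketched like this (the queue name, destination and Z_MY_RFC are illustrative only; TRFC_SET_QUEUE_NAME is the real function module):

```abap
" Step 1: assign the queue name for the following tRFC call(s)
CALL FUNCTION 'TRFC_SET_QUEUE_NAME'
  EXPORTING
    qname              = 'EXAMPLE_QUEUE'       " made-up queue name
  EXCEPTIONS
    invalid_queue_name = 1.

" Step 2: record the tRFC call; it lands in the named outbound queue (SMQ1)
CALL FUNCTION 'Z_MY_RFC' IN BACKGROUND TASK    " hypothetical RFC-enabled FM
  DESTINATION 'TARGETSYS'                      " made-up RFC destination
  EXPORTING
    iv_input = 'X'.

" Step 3: COMMIT WORK releases the request to the QOUT scheduler (SMQS)
COMMIT WORK.
```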

TID
The TID is the transaction identifier used with tRFC and qRFC. It uniquely identifies a set of
RFC calls which should be executed together.
The TID is an alphanumeric 24-character string, generated by encoding the IP address of the
host, the process ID of the work process executing the calling program, a timestamp and a
counter.
Format:
1. 8 digits: 4 hexadecimal bytes of IP-Address
2. 4 digits: hexadecimal representation of workprocess PID (visible in SM50)
3. 8 digits: UNIX timestamp in hexadecimal format
4. 4 digits: sequential number (counter) in hexadecimal format
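Splitting a TID into these four parts can be sketched with plain substring access (the sample TID is made up: IP 192.168.0.1, PID 0x1234, timestamp 0x05F5E100, counter 1):

```abap
DATA: lv_tid  TYPE c LENGTH 24 VALUE 'C0A80001123405F5E1000001',
      lv_ip   TYPE c LENGTH 8,
      lv_pid  TYPE c LENGTH 4,
      lv_time TYPE c LENGTH 8,
      lv_cnt  TYPE c LENGTH 4.

lv_ip   = lv_tid(8).      " 'C0A80001' -> IP address 192.168.0.1
lv_pid  = lv_tid+8(4).    " '1234'     -> work process PID (see SM50)
lv_time = lv_tid+12(8).   " '05F5E100' -> UNIX timestamp in hex
lv_cnt  = lv_tid+20(4).   " '0001'     -> sequential counter
```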

Delta Queue Mechanism


What is a delta queue?
The previously described communication channels can be used for different scenarios in the
OLTP system. For example, qRFC is also used in CRM systems for collecting data that has
changed, and sending it to mobile clients for synchronisation whenever they connect next.
The delta queue mechanism is specific to BI extraction and technically builds on qRFC. It
uses qRFC queues to collect data changes in the source system until they are extracted to BI
by the next delta load request.

When you execute an init request from the BI system, you can specify selection criteria
describing which documents you are interested in. These criteria are saved to table
ROOSPRMSF in the source system. During posting to the delta queue, the system normally
checks whether the data posted to the queue matches these selection criteria, and only puts the
data in the queue if it is required for at least one init request.

What gets into the queue?


Whenever you create or change a document in an application, the application takes care of
recording the change in the appropriate delta queues. This may be handled differently from
application to application. Usually, function modules are executed with the addition IN
UPDATE TASK in the background after a document has been changed; these transform the
data into the appropriate format and save it into the queue.
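The update-task pattern could be sketched as follows (the delta-writer function module name and its parameter are invented; real applications use their own modules):

```abap
DATA ls_document TYPE string.   " placeholder for the changed document data

" Registered during document posting; runs in the update task, not inline
CALL FUNCTION 'Z_WRITE_DELTA_QUEUE' IN UPDATE TASK   " hypothetical FM
  EXPORTING
    is_document = ls_document.

" The registered update FM is only executed when the transaction commits
COMMIT WORK.
```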
Unlike the change-pointer and timestamp techniques also used for delta extraction, the delta
queue mechanism does not just store the fact that a specific document has been changed; it
also physically stores the data of the document (with the status after the change) in the queue.
You extract the changes from the queue without having to read the document again during
extraction (except if you want to read some additional fields which have not been saved into
the queue physically).
If a document has been changed several times between two delta loads, an entry is put into the
queue for each change. As the order of the entries in the queues will be respected during the
load, you can be sure that the last change will reach the BI system last.
How to check the queues?
As the delta load mechanism is built on top of qRFC, you can see the queues used for delta
extraction in transaction SMQ1. This transaction, however, lists all qRFC queues, so you
might be better off using transaction RSA7, which shows BI queues only. For a more detailed
analysis you can still use SMQ1, as the functionality of RSA7 is quite limited.
Transaction RSA7 will show you a list of all the data sources, for which an init load has been
executed, and delta collection is currently enabled.
The names of the queues in SMQ1 are prefixed with “BW”, the current client, and continue
with the data source name. In RSA7 you will see the data source name directly.
If no data relevant for delta extraction (with the init-restrictions) has been changed yet, it is
possible that no qRFC queue has been created yet, but the entry for the datasource will still be
shown in RSA7.
Next to the queue / datasource name you will find a number.
Please note that this number is not the number of records which will be extracted, but the
number of transactions recorded in the queue. It is well possible that one document change
will write several records (for example one record per document item) into the queue within
the same transaction. In case of batch update it is even possible that several documents are
processed as part of one transaction.
Also, this number is the sum of the transactions collected for the next delta load and the ones
extracted during the last load. The latter need to be kept to allow a delta-repeat request in case
the last delta failed on the BI side after the data had fully been collected from the source
system.

How to find out when documents are put in the queue?


Usually documents are put into the queue using qRFC calls. This involves two steps: setting
the queue name, and making tRFC calls. It is these two events you might try to catch in the
system.
Catching the point where the queue name is set is probably easier, however, you will only get
to a point of source code which is executed before the queue is filled. Depending on the
coding itself this can be useful, but you will probably need additional debugging and source
code analysis to find out when the documents are physically put in the queue.
To find this point in time, set an external breakpoint in function modules
TRFC_SET_QUEUE_NAME, TRFC_SET_QUEUE_NAME_LIST and
TRFC_SET_QUEUE_RECEIVER_LIST. One of these function modules is called as the first
step of every qRFC call, so it will certainly be run through if anything is put into the queue.
This function group, however, is marked as a system program, so you will need to activate
system debugging for the debugger to stop here. This can only be done in the debugger itself,
so set a breakpoint anywhere in normal code that you know is run through at some early stage
of processing. In the settings menu of the debugger, make sure that system debugging is
turned on. Also turn on “Update debugging” among the debugger settings, as typically any BI
updates are executed in an update task, and otherwise the debugger would not stop during the
update process. Chances are that even then the debugger will not stop within these function
modules – there are many ways to have code running asynchronously in the background, and
especially for statistics updates these are frequently used.
Also keep in mind that nothing is saved if there is no change in a document, so be sure to
execute some real changes. And, make sure the queue is initialized and your document fulfils
Init-Selection criteria.
Once you have managed to stop at a breakpoint in any of these function modules, first check
the queue name in the input parameters and make sure that the call is really related to the BI
queue and not to something else. Then use the call stack to find out where exactly in the code
you are.

Source System:
ROOSPRMSC – stores status information about the init/delta load
	INITRNR – init request number, as visible in RSA1 menu “Init requests for source system”
	DELTARNR – request number of the last delta request
	TIMESTAMP – timestamp when the init request was created (watch out for the time zone!)

ROOSPRMSF – Stores the selections of Init-Requests

Report RSSM_OLTP_INIT_DELTA_UPDATE – to be run in BW to synchronize the R/3 and
BI delta information (unverified)