
SAP BW 7.30: Performance Improvements in Master Data Related Scenarios and DTP Processing


Posted by Girish V Kulkarni
With Data Warehouses around the world growing rapidly every day, the ability of a Data Warehousing solution to handle mass data within the ever-shrinking time windows available for data loads is fundamental to most systems.
BW 7.3 recognizes the need of the hour with several performance-related features, and in this blog I will discuss the performance features related to data loads in SAP BW 7.3, focusing mainly on Master Data loads and DTP processing.
Here is the list of features addressed in this blog:

Master Data
1. Mass Lookups during Master Data Loads
2. The Insert-Only flag for Master Data Loads
3. The new Master Data Deletion
4. SID Handling
5. Use of Navigational Attributes as source fields in Transformations

DTP Processing
1. Repackaging small packages into optimal sizes

MASTER DATA
1. Mass Lookups during Master Data Load
Data loads into a Master Data bearing Characteristic require database look-ups to find out if records exist on the database with the
same key as the ones being loaded. In releases prior to SAP BW 7.3, this operation was performed record-wise, i.e. for every record in
the data-package, a SELECT was executed on the database table(s). Obviously, this resulted in a lot of communication overhead
between the SAP Application Server and the Database Server, thereby slowing the Master Data loads down. The effect is pronounced
on data loads involving large data volumes.
The overhead between the SAP Application Server and the Database Server has now been addressed by performing a mass lookup on the database, so that all records in the data package are looked up in one attempt. Depending on the DB platform, this can bring up to a 50% gain in load runtimes.
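To illustrate the difference, here is a minimal ABAP sketch of the two lookup patterns, assuming the standard characteristic 0MATERIAL with its master data table /BI0/PMATERIAL. The coding is only an illustration of the idea, not the actual BW 7.3 implementation.

* Illustrative sketch only - not the actual BW 7.3 coding.
TYPES: BEGIN OF ty_key,
         material TYPE c LENGTH 18,
       END OF ty_key.

DATA: lt_package  TYPE STANDARD TABLE OF ty_key,   " keys of the current data package
      lt_existing TYPE HASHED TABLE OF ty_key WITH UNIQUE KEY material,
      ls_key      TYPE ty_key.

* Pre-7.3 pattern: one SELECT per record of the data package.
LOOP AT lt_package INTO ls_key.
  SELECT SINGLE material FROM /bi0/pmaterial
    INTO ls_key-material
    WHERE material = ls_key-material
      AND objvers  = 'A'.
  " one database round trip per record
ENDLOOP.

* BW 7.3 pattern: one mass lookup for the whole data package.
IF lt_package IS NOT INITIAL.
  SELECT material FROM /bi0/pmaterial
    INTO TABLE lt_existing
    FOR ALL ENTRIES IN lt_package
    WHERE material = lt_package-material
      AND objvers  = 'A'.
ENDIF.
* lt_existing now holds every key that already exists on the database, so each
* package record can be classified as an UPDATE (key found) or an INSERT
* (key not found) without any further round trips.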

2. The Insert-Only Flag for Master Data Loads

Starting with NW 7.30 SP03, this flag will be renamed to New Records Only. The renaming has been done to align with a similar feature supported by the activation of DSO data (see the blog Performance Improvements for DataStore Objects).
As mentioned above, the Master Data load performs a look-up on the database for every data package to ascertain which key values already exist on the database. Based on this information, the Master Data load executes UPDATEs (for records whose key already exists in the table) or INSERTs (for records that don't exist) on the database.
With the Insert-Only feature for Master Data loads using DTPs, users have the opportunity to completely skip the look-up step if it is already known that the data is being loaded for the first time. Obviously, this feature is most relevant when performing initial Master Data loads. Nevertheless, this flag can also be useful for some delta loads where it is known that the data being loaded is completely new.
Lab tests for initial Master Data loads indicate around 20% reduction in runtime with this feature.
The Insert-Only setting for DTPs loading Master Data can be found in the DTP Maintenance screen under the UPDATE tab as
shown below.

Note: If the Insert-Only flag is set and data is found to already exist on the database, the DTP request will abort. To recover from this error, the user simply needs to uncheck the flag and re-execute the DTP.
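A minimal sketch of the decision this flag controls (placeholder names and simplified logic, not SAP's actual code):

* Simplified sketch of what the Insert-Only / New Records Only flag controls.
DATA: lv_insert_only TYPE c LENGTH 1,                       " 'X' = flag set
      lt_new_records TYPE STANDARD TABLE OF /bi0/pmaterial. " placeholder target table

IF lv_insert_only = 'X'.
  " The lookup of existing keys is skipped entirely and every record is INSERTed.
  " If a record with the same key already exists, the insert terminates,
  " which surfaces as an aborted DTP request (see the note above).
  INSERT /bi0/pmaterial FROM TABLE lt_new_records.
ELSE.
  " Standard behaviour: mass lookup of existing keys (see the sketch above),
  " then UPDATE the existing records and INSERT the new ones.
ENDIF.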

3. The New Master Data Deletion


Deleting Master Data in BW has always been a performance-intensive operation, because before any Master Data can be physically deleted, the entire system (Transaction Data, Master Data, Hierarchies etc.) is scanned for usages. Therefore, if a lot of Master Data is to be deleted, it takes some time to establish which data is deletable (i.e. has no usages) and which is not (has usages). In addition, with the classical Master Data deletion involving large data volumes, users sometimes ran into memory overflow dumps.
To address these issues, the Master Data deletion was completely re-engineered. The result is the new Master Data deletion. In addition to being much faster than the classical version, it offers interesting new features like search modes for the usage check and a simulation mode. The screenshot below shows the user interface for the new Master Data deletion when accessed via the context menu of InfoObjects in the Data Warehousing Workbench.

Although the new Master Data deletion has been available for some time now (since BW 7.00 SP 23), it was never the default version in the system. This implied that BW system administrators needed to switch it on explicitly. With BW 7.30, however, the new Master Data deletion is the default version and no further customizing is necessary to use it.
All further information about this functionality is documented in SAP Note 1370848 under https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1370848

It can also be found in the standard SAP BW documentation under http://help.sap.com/saphelp_nw73/helpdata/en/4a/373cc45e291c67e10000000a42189c/frameset.htm

4. SID Handling
This feature relates to the handling of SIDs in the SAP BW system, and while it is certainly relevant for Master Data loads, it is not restricted to them. The performance improvements in SID handling are relevant for all areas of SAP BW where SIDs are determined, for example the activation of DSO requests, InfoCube loads, hierarchy loads and, in some cases, even query processing.
In BW 7.30, SIDs are determined en masse, meaning that the database SELECTs and INSERTs that were previously done record-wise have been changed to mass SELECTs (using the ABAP SELECT FOR ALL ENTRIES construct) and mass INSERTs. The system switches to this mass-data processing mode automatically when the number of SIDs to be determined is greater than a threshold value. The default value of this threshold is 500.
The threshold value is of course customizable; this can be done in the SAP IMG (transaction SPRO) by following the path SAP NetWeaver -> Business Warehouse -> Performance Settings -> Optimize SID Determination for MPP Databases.
Note: As the threshold value corresponds to the minimum number of SIDs to be determined in one step, setting the threshold to a very high value (for example 100000) causes the system to switch back to the classical behavior.
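A rough sketch of the threshold-based switch, using the characteristic 0MATERIAL and its SID table /BI0/SMATERIAL as an example; the logic is only an illustration of the idea, not the actual SAP implementation.

* Illustration only - not the actual SAP coding.
CONSTANTS c_threshold TYPE i VALUE 500.   " customizable in SPRO, default 500

TYPES: BEGIN OF ty_val,
         material TYPE c LENGTH 18,
       END OF ty_val.

DATA: lt_values TYPE STANDARD TABLE OF ty_val,          " values that need SIDs
      lt_sids   TYPE STANDARD TABLE OF /bi0/smaterial.  " SIDs already on the database

IF lines( lt_values ) < c_threshold.
  " classical behaviour: determine SIDs one value at a time
ELSE.
  " mass mode: one SELECT FOR ALL ENTRIES for all values of the package ...
  SELECT * FROM /bi0/smaterial
    INTO TABLE lt_sids
    FOR ALL ENTRIES IN lt_values
    WHERE material = lt_values-material.
  " ... followed by one mass INSERT of newly generated SIDs for the values
  "     that were not found.
ENDIF.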

5. Use of Navigational Attributes as source fields in Transformations


Quite often there are scenarios in SAP BW where data being loaded from a source to a target needs to be augmented with information that is looked up from the master data of InfoObjects. For instance: loading sales data from a source that contains data at Material level to a DataTarget where queries require the sales data to be aggregated by Material Group. In such cases, the Master Data Lookup rule type in Transformations is used to determine the Material Group for any given Material (given that Material Group is an attribute of Material).
Although the performance of the Master Data Lookup rule type has been optimized in earlier versions of BW (starting with BW 7.0), there is an alternative to this rule type in BW 7.30: navigational attributes of InfoObjects are now available as source fields in Transformations.
The benefits of this feature are twofold.

First, the fact that the data from the navigational attributes is available as part of the source structure allows the data to be used in custom logic in Transformations, for example in start routines (see the sketch further below).
Secondly, the data from the navigational attributes is read by performing database joins with the corresponding master data tables during extraction. This helps improve the performance of scenarios where a lot of look-ups are needed and/or a lot of data has to be looked up.
To use this feature in Transformations, the navigational attributes need to be switched ON in the source InfoProvider, in the InfoProvider maintenance screen, as shown below.

Once this is done, the selected navigational attributes are available as part of the source structure of Transformations, as shown below.
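Because the navigational attribute is now simply another field of the source structure, it can be used in routines like any other source field. A hypothetical start routine fragment (the type and field names follow the usual generated routine frame but are placeholders here; MATL_GROUP stands for the material group read via the navigational attribute):

* Fragment of a start routine - names are placeholders.
* _ty_s_SC_1 is the generated type of the source structure in the routine frame.
FIELD-SYMBOLS: <source_fields> TYPE _ty_s_sc_1.

LOOP AT source_package ASSIGNING <source_fields>.
  " The navigational attribute (here: the material group of 0MATERIAL) arrives
  " as a normal source field - no separate master data lookup is needed.
  IF <source_fields>-matl_group = 'OBSOLETE'.   " assumed field name and value
    DELETE source_package.
  ENDIF.
ENDLOOP.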

DATA TRANSFER PROCESS (DTP)


1. Repackaging small packages into optimal sizes

This feature of the DTP is used to combine several data packages in a source object into one data package for the DataTarget. This
feature helps speed up request processing when the source object contains a large number of very small data packages.

This is usually the case when memory limitations in the source system (for example, an SAP ERP system) result in very small data packages in the PSA tables in BW. This DTP setting can be used to propagate the data to subsequent layers in BW in larger chunks.
Also, InfoProviders in BW used for operational reporting using Real-time Data Acquisition contain very small data packages. Typically,
this data is propagated within the DataWarehouse into other InfoProviders for strategic reporting. Such scenarios are also a use-case
for this feature where data can be propagated in larger packets.
As a prerequisite, the processing mode for the DTP needs to be set to Parallel Extraction and Parallel Processing. Also note that only
source packages belonging to the same request are grouped into one target package.
Below is a screenshot of the feature in the DTP Maintenance.
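Conceptually, the setting collects source packages of the same request until a target package size is reached. A rough sketch of that grouping idea (placeholder names and a made-up target size, not the actual DTP implementation):

* Conceptual sketch of package grouping - not the actual DTP coding.
TYPES: BEGIN OF ty_src_pkg,
         requid  TYPE i,   " source request
         packno  TYPE i,   " source package number
         records TYPE i,   " number of records in the package
       END OF ty_src_pkg.

CONSTANTS c_target_size TYPE i VALUE 50000.    " desired target package size

DATA: lt_source_pkgs TYPE STANDARD TABLE OF ty_src_pkg,
      ls_pkg         TYPE ty_src_pkg,
      lv_curr_req    TYPE i VALUE -1,
      lv_collected   TYPE i.

LOOP AT lt_source_pkgs INTO ls_pkg.
  " source packages are only grouped within the same request
  IF ls_pkg-requid <> lv_curr_req OR lv_collected >= c_target_size.
    " close the current target package and start a new one
    lv_curr_req  = ls_pkg-requid.
    lv_collected = 0.
  ENDIF.
  lv_collected = lv_collected + ls_pkg-records.
  " ... append the data of this source package to the current target package ...
ENDLOOP.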

BW 7.3: Troubleshooting Real-Time Data Acquisition
The main advantage of real-time data acquisition (RDA) is that new data is reflected in your BI reports just a few minutes after being entered in your operational systems. RDA therefore supports your business users in making their tactical decisions on a day-by-day basis. The drawback, however, is that these business users notice much faster when one of their BI reports is not up to date. They might call you then and ask why the document posted 5 minutes ago is not visible yet in reporting. And what do you do now? I'll show you how BW 7.3 helps you to resolve problems with real-time data acquisition faster than ever before.
First, let's have a look at what else is new to RDA in BW 7.3. The most powerful extension is definitely the HybridProvider. By using RDA to transfer transactional data into a HybridProvider, you can easily combine the low data latency of RDA with the fast response times of an InfoCube or a BWA index, even for large amounts of data. You'll find more information about this combination in a separate blog. Additionally, BW 7.3 allows for real-time master data acquisition. This means that you can transfer delta records to InfoObject attributes and texts at a frequency of one per minute. And just like RDA directly activates data transferred to a DataStore object, master data transferred to an InfoObject becomes available for BI reporting immediately.
But now, let's start the RDA monitor and look at my examples for RDA troubleshooting. I've chosen some data flows from my BW 7.0 test content and added a HybridProvider and an InfoObject. I know that this flight booking stuff is not really exciting, but the good thing is that I can break it without getting calls from business users.

Remember that you can double-click on the objects in the first column to view details. You can see, for example, that I've configured RDA requests to stop after 13 errors.

Everything looks fine. So let's start the RDA daemon. It will execute all the InfoPackages and DTPs assigned to it at a frequency of one per minute. But wait, what's this?

The system asks me whether I'd like to start a repair process chain to transfer missing requests to one of the data targets. Why? Ah, okay, I've added a DTP for the newly created HybridProvider but forgotten to transfer the requests already loaded from the DataSource. Let's have a closer look at these repair process chains while they are taking care of the missing requests.

On the left-hand side, you can see the repair process chain for my HybridProvider. Besides the DTP, it also contains a process to activate DataStore object data and a subchain generated by my HybridProvider to transfer data into the InfoCube part. On the right-hand side, you can see the repair process chain for my airline attributes, which contains an attribute change run. Fortunately, you don't need to bother with these details; the system is doing that for you. But now let's really start the RDA daemon.

Green traffic lights appear in front of the InfoPackages and DTPs. I refresh the RDA monitor. Requests appear and show
a yellow status while they load new data package by package. The machine is running and I can go and work on
something else now.
About a day later, I start the RDA monitor again and get a shock. What has happened?

The traffic lights in front of the InfoPackages and DTPs have turned red. The RDA daemon is showing the flash symbol, which means that it has terminated. Don't panic! It's BW 7.3. The third column helps me to get a quick overview: 42 errors have occurred under my daemon, 4 DTPs have encountered serious problems (red LEDs), and 4 InfoPackages have encountered tolerable errors (yellow LEDs). I double-click on 42 to get more details.

Here you can see in one table which objects ran into which problem at what time. I recognize at a glance that 4
InfoPackages repeatedly failed to open an RFC connection at around 16:00. The root cause is probably the same, and
the timestamps hopefully indicate that it has already been removed (No more RFC issues after 16:07). I cannot find a
similar pattern for the DTP errors. This indicates different root causes. Finally, I can see that the two most recent
runtime errors were not caught and thus the RDA daemon has terminated. You can scroll to the right to get more
context information regarding the background job, the request, the data package, and the number of records in the
request.

Let's have a short break to draw a comparison. What would you do in BW 7.0? 1) You could double-click on a failed request to analyze it. This is still the best option for analyzing the red DTP requests in our example. But you could not find the tolerable RFC problems and runtime errors.

2) You could browse through the job overview and the job logs. This would have been the preferable approach to
investigate the runtime errors in our example. The job durations and the timestamps in the job log also provide a good
basis to locate performance issues, for example in transformations.

3) You could browse through the application logs. These contain more details than the job logs. The drawback however
is that the application log is lost if runtime errors occur.

These three options are still available in BW 7.3, and they have even been improved. In particular, the job and application logs have been reduced to the essential messages. Locating a problem is still a cumbersome task, however, if you don't know when it occurred. With the integrated error overview in the RDA monitor, BW 7.3 allows you to analyze any problem with the preferred tool. Let me show you some examples.

Unless you have other priorities from your business users, I'd suggest starting with runtime errors because they affect all objects assigned to the daemon. RDA background jobs are scheduled with a period of 15 minutes to make them robust against runtime errors. In our example, this means the RDA daemon serves all DataSources from the one with the lowest error counter up to the one which causes the runtime error. The job is then terminated and restarted 15 minutes later. The actual frequency is thus reduced from 60/h to 4/h, which is not real-time anymore. Let's see what we can do here. I'll double-click on 10 in the error column for the request where the problem has occurred.

I just double-click on the error message in the overview to analyze the short dump.

Phew... This sounds like sabotage! How can I protect the other process objects assigned to the same daemon from this runtime error while I search for the root cause? I could just wait another hour, of course. This RDA request will then probably have reached the limit of 13 errors that I configured in the InfoPackage. Once this threshold is reached, the RDA daemon will exclude this InfoPackage from execution. The smarter alternative is to temporarily stop the upload and delete the assignment to the daemon.

The overall situation becomes less serious once the DataSource has been isolated under Unassigned Nodes. The daemon continues at a frequency of once per minute although there are still 32 errors left.

Note that most of these errors, namely the RFC failures, can be tolerated. This means that these errors (yellow LEDs) do not hinder InfoPackages or DTPs until the configured error limit is reached. Assume that I've identified the root cause of the RFC failures as a temporary issue. I should then reset the error counter for all objects that have not encountered other problems. This function is available in the menu and context menu. The error counter of an InfoPackage or DTP is reset automatically when a new request is created. Now let's look at one of the serious problems. I'll therefore double-click on 2 in the error column of the first DTP with a red LED.

When I double-click on the error message, I see the exception stack unpacked. Unfortunately that does not tell me
more than I already knew: An exception has occurred in a sub step of the DTP. So I navigate to the DTP monitor by
double-clicking the request ID (217).

Obviously, one of the transformation rules contains a routine that has raised the exception: 13 is an unlucky number. I navigate to the transformation and identify the root cause quickly.

In the same way, I investigate the exception which has occurred in DTP request 219. The DTP monitor tells me that
something is wrong with a transferred fiscal period. A closer look at the transformation reveals a bug in the rule for the
fiscal year variant. Before I can fix the broken rules, I need to remove the assignment of the DataSource to the
daemon. When the corrections are done, I schedule the repair process chains to repeat the DTP requests with the fixed
transformations. Finally I re-assign the DataSource to the daemon.
The RDA monitor already looks much greener now. Only one DataSource with errors is left. More precisely, there are
two DTPs assigned to this DataSource which encountered intolerable errors, so the request status is red. Again, I
double-click in the error column to view details.

The error message tells me straight away that the update command has caused the problem this time rather than the
transformation. Again, the DTP monitor provides insight into the problem.

Of course GCS is not a valid currency (Should that be Galactic Credit Standard or what?). I go back to the RDA
monitor and double-click on the PSA of the DataSource in the second column. In the request overview, I mark the
source request of the failed DTP request and view the content of the problematic data package number 000006.

Obviously, the data is already wrong in the DataSource. How could this happen? Ah, okay, it's an InfoPackage for Web Service (Push). Probably the source is not an SAP system, and a data cleansing step is needed either in the source system or in the transformation. As a short-term solution, I could delete or modify the inconsistent records and repeat the failed DTP requests with the repair process chain.
That's all. I hope you enjoyed this little trip into troubleshooting real-time data acquisition, even though this is probably not part of your daily work yet. Let me summarize what to do if problems occur with RDA. Don't panic. BW 7.3 helps you to identify and resolve problems faster than ever before. Check the error column in the RDA monitor to get a quick overview. Double-click wherever you are to get more details. Use the repair process chains to repeat broken DTP requests.

The new SAP NetWeaver BW 7.30 hierarchy framework
Posted by Serge Daniel Knapp

Introduction
If you remember older releases of SAP NetWeaver BW, hierarchies could only be loaded through the old 3.x data flow. In this case you needed the so-called direct update functionality of the corresponding InfoSource for uploading the hierarchy. This InfoSource 3.x was connected to a 3.x DataSource through update rules.

Limitations of 3.x data flow for hierarchies


This data flow had to be used in SAP NetWeaver BW 7.x, too, and could not be migrated to the new data flow. Consequently, you always had to deal with two types of data flows in your system. Besides this heterogeneity, the 3.x data flow for hierarchies had a lot of disadvantages:

First, hierarchy DataSources were available only for flat file and SAP source systems. Besides, end users could only create their own hierarchy DataSources for the flat file source system.

Second, you could not take full advantage of the new data flow; even some old data flow features (e.g. the start routine) could not be used. Furthermore, to change the structure of hierarchies at runtime you had to implement complex scenarios (e.g. with the help of the Analysis Process Designer, APD). The direct update functionality didn't allow you to load the hierarchy to a DSO or another arbitrary object and manipulate it according to the end users' needs.

Third, monitoring was often unclear because the framework was not optimal for segments.

The new BW 7.30 hierarchy framework


With SAP NetWeaver BW 7.30 the hierarchy framework has been improved; you can now use the 7.x data flow with all its advantages.

First, you are able to use any BW object as a source for a hierarchy; you are not limited to a hierarchy DataSource. This leads to simpler scenarios if you want to transform your hierarchy according to your needs. You just have to connect your hierarchy through a transformation and a data transfer process.

Within this transformation you are able to use all features of a transformation, for example start, end or expert routines. You are not limited as you were in the 3.x data flow.

You can use any DataSource as a source for your hierarchy; you are not restricted to hierarchy DataSources any more. This makes hierarchy extraction from SAP source systems possible, too.

Last but not least, you are now able to take full advantage of all capabilities of the new data flow. You can distribute the data loaded from one DataSource to several hierarchies, and you can use an arbitrary number of InfoSources between the DataSource and your hierarchy. A very useful feature is the automatic filling of the fields CHILDID, NEXTID and LEVEL by the framework if they are not filled by the source (e.g. if only the PARENTID is provided).

New DataSource structure


If you are familiar with the old hierarchy framework, you will notice a new segmentation format for the hierarchy structure. Let's have a look at the old structure, which is shown in the figure below on the left-hand side.

The hierarchy structure consisted of the five fields NODEID, IOBJNM, NODENAME, LINK and PARENTID. The NODEID was the internal number of your node; the field IOBJNM contained the InfoObject name for your node. The NODENAME contained the value in compounded format (if compounded InfoObjects were used). This is the case if you use, for example, cost center hierarchies, which are compounded to the controlling area.
The new framework now uses more fields to describe the hierarchy structure. Whereas the first five fields are the same, the new structure now contains a field for every InfoObject of a compounded structure. For example, if you use the cost center, which is compounded to the controlling area, the new structure contains both InfoObjects.

New rule type "hierarchy split"


In SAP NetWeaver BW 7.30 both structures are available, the old one and the new one. Please be aware that most of the DataSources
of the Business Content have not been migrated to the new structure. That could lead to a problem because you have to map the old
DataSource structure to the new structure within the hierarchy. To avoid the problem SAP has introduced a new rule type which
automatically maps the old structure to the new one (see figure).

This new rule type is called "hierarchy split". To use the new rule type you have to map the field NODENAME to every InfoObject of
your structure, in this case the controlling area and the cost center. The hierarchy split automatically transforms the value of
NODENAME to the corresponding values of controlling area and cost center.
Please remember that this functionality only works if you haven't changed the length of the corresponding InfoObjects. For example, if
you changed the length of the controlling area to 5 digits instead of 4 you cannot use this feature.
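In effect, the rule cuts the compounded NODENAME value into its parts based on the (unchanged) output lengths of the compounded InfoObjects. A simplified illustration for controlling area (4 characters) and cost center (10 characters); this is not the generated rule coding, just the idea:

* Simplified illustration of the "hierarchy split" rule type.
DATA: lv_nodename   TYPE c LENGTH 60,
      lv_co_area    TYPE c LENGTH 4,    " 0CO_AREA, output length 4
      lv_costcenter TYPE c LENGTH 10.   " 0COSTCENTER, output length 10

lv_nodename = '1000CC00004711'.          " compounded value from the old structure

lv_co_area    = lv_nodename(4).          " first 4 characters  -> controlling area
lv_costcenter = lv_nodename+4(10).       " next 10 characters  -> cost center

If the standard lengths were changed (e.g. the controlling area extended to 5 digits), these fixed offsets would no longer match, which is why the rule type cannot be used in that case.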

Creating a new DataSource for hierarchies


If you want to create a new DataSource for hierarchies just navigate to transaction RSA1 and open a DataSource tree of a source
system, for example a flat file source system. Right-click on a display component and create a new DataSource for hierarchies:

After choosing the correct type of DataSource you can edit the properties of your DataSource. The main properties can be found in the
tab "extraction" which is shown in the following figure.

At the top of the screen you can provide information on full/delta loading and direct access, both of which you will know from other DataSources. The interesting parts are highlighted. If you select "hierarchy is flexible", the DataSource will be created with the new structure; if you leave it blank, the DataSource will consist of the old structure. Second, you are now able to create the hierarchy header directly from the DataSource. In older releases you had to maintain the hierarchy header in the InfoPackage. I think the other parameters are known; if not, just feel free to ask me.

Creating transformation and DTP


After activating your DataSource you can create a transformation between your new DataSource and the hierarchy of an InfoObject.
The following picture shows how this is displayed in the system.

Within the transformation you have to map the corresponding segments to each other. The interesting one is the mapping of the source segment "hierarchy structure" to the corresponding target segment.

You can see that there are a lot of 1:1 rules within the transformation except the mapping of field NODENAME to 0COSTCENTER and
0CO_AREA. Here, the new rule type "hierarchy split" has been set.
Maybe you have noticed that the key field 0OBJECTID has not been mapped. This field is used when loading more than one hierarchy, in order to have a unique key. Please be aware that in this case each data package has to contain exactly one complete hierarchy.
Now let's have a look at the corresponding data transfer process (DTP). There are two major changes when loading data through a hierarchy DataSource:
1. If you select full extraction mode in the DTP, there is a new flag called "only retrieve last request". This flag is helpful if you don't want to clear your PSA before loading the new hierarchy. Only the newest request in the PSA will be selected and loaded.
2. In the update tab of your DTP (see figure) you can decide how to update your hierarchy (full update, update subtree, insert subtree). Furthermore, you can activate the hierarchy from within the DTP; no separate process step in a process chain is necessary to activate the hierarchy!

Now you are able to load a hierarchy from a hierarchy DataSource to an InfoObject.

BW 7.30: Simple supervision of process chains


Posted by Thomas Rinneberg
There has been some moaning about the built-in capabilities to monitor BW process chains. Clicking one chain after the other to see the recent status is just too much effort, and transaction RSPCM, which lists the last execution, lacks supervision with regard to timely execution. In fact, for those of you who do not want to use the administrator cockpit, there is no time supervision at all (except the insider tip of transaction ST13, which however is not contained in the standard, but is an add-on by SAP Active Global Support).
However, with release 7.30 BW development has finally caught up. Let me show you!

Let's start, as always, with the Data Warehousing Workbench (RSA1):

The first change catching the eye is a new folder Process Chains in the Modeling view. It seems a small change, but something important comes with it: search.

At last you are able to search for process chains by name directly in the workbench. Plus, you might have noticed: the display components are ordered hierarchically like any other folder in RSA1. And not only the display components, but also the chains themselves have a hierarchical structure, displaying the metachain-subchain relationship. So you can directly navigate to the lowest subchain in your metachain hierarchy even if you only remember the name of the top-level master.
But this is not what we were looking for; we spoke about monitoring. So let's go to the Administration area of RSA1.

Ok, same tree, but that does not really make a difference. Let's look at the Monitors.

And choose Process Chains.
If RSPCM was used in this system before the upgrade, I will be asked the first question, otherwise the second one. What does it mean? You might know that RSPCM requires you to choose the chains which shall be monitored. So I understand the second question; this will simplify the setup. But what about the first question? Well, RSPCM became user-dependent with 7.30. Each user has his own unique selection of process chains to monitor.
A good thing for most users. But stop: what if I have a central monitoring team where all administrators shall look at the same list? Does each of them need to pick his or her chains individually? No:

Besides the known possibilities to add a single chain or all chains of a display component, there is the option Select Include.

And how do I create an include? I press the Edit Include button.

And then I am on the usual RSPCM list and can add chains. But now, how does the new RSPCM list look? Here it is:

The first five columns look pretty familiar. But there is something hidden there as well! Let's click on one of the red chains:

The system extracts the error messages from the single processes of the failed process chain run and directly displays them together with the corresponding message long text. So no more searching for the red process and navigating and searching the log. One click and you know what's wrong (hopefully ;-). If not, you can still go to the log view of the process chain via the Log button on the popup.
And look, the button alongside, called Repair, looks interesting. No more error analysis, just press Repair on any failure. THAT is what you were looking for all these years, isn't it? Sorry, it is not that simple. The button will repair or repeat any failed process in the chain run, just as you could already do before via the context menu. And whether this is a good thing to try still depends on the particular error. But you should be a little faster now in finding that out and doing it.
What about the other two buttons? Let's delay this a little and first go back to the overview table. There are some new columns:

First, there is a column to switch Time Monitoring on and off, because you might not be interested in the run time or start delay of every chain in RSPCM, especially considering that you can still schedule RSPCM in batch to perform automatic monitoring of your chains and alert you in case of red or yellow lights. With regard to performance, it does not make much of a difference, because the run time calculations of previous chains are buffered in an extra table. After all, the performance of RSPCM has also greatly improved with the following setting:

You can choose to refresh only those chains which have a yellow status, or not refresh the status at all but display the persisted status from the database. For the red and green runs, it is expected that the status does not change frequently. If you nevertheless expect a change in such chains, you can also force the status refresh in the transaction itself by pressing the Refresh All button in the screenshot above. Now, how come the system judges a run time of 48 seconds as Too Long? Did somebody set a maximal run time for each chain? Let us click on the red icon.

It seems this chain usually does not run longer than about 35 seconds. Considering this, 48 seconds is too long. More specifically, the chain usually takes 25.5 plus or minus 3.9 seconds. The 48 seconds are about 22.5 seconds more than usual. In particular, these 22.5 seconds are much more than the usual deviation of 3.9 seconds.
So, the system uses some simple statistics to judge the run time of a chain run. If you like, I can also give you the mathematical formulas for the statistics, but I guess this is not so interesting for most of the readers ;-)
To stabilize these statistics, the system even removes outliers before calculating the average run time and average deviation. If you are interested in the filtered outliers, you can press the Outlier button:

This indeed is an outlier, don't you think? Let's go back in time a little bit and look at older runs of this chain by pressing the + button twice, which will add twenty additional past values.

Now our outlier is not the biggest anymore. Even more interesting, the chain seems to have stabilized its runtime over the past weeks. Considering the previous values as well, our current run is no longer extraordinarily long. Is it a false alarm then? No, because if the run time turns back to its former instabilities, you would like to get notified. If that unstable behaviour however becomes usual again, the system will automatically stop considering this run time extraordinary and thus stop alerting you, because it always considers the last 30 successful executions when calculating the statistics for the alerts.
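The blog does not spell out the formulas, but the behaviour described (average and average deviation over the last 30 successful runs, outliers removed, an alert when the current run exceeds the usual range by far) can be sketched roughly as follows. The factor applied to the deviation is an assumption for illustration; the exact SAP formula may differ.

* Rough illustration of the kind of run-time check described - not the exact SAP formula.
DATA: lt_runtimes  TYPE STANDARD TABLE OF i,   " last 30 successful run times in seconds
      lv_runtime   TYPE i,
      lv_current   TYPE i VALUE 48,            " run time of the chain run to be judged
      lv_mean      TYPE f,
      lv_dev       TYPE f,
      lv_threshold TYPE f.

* (outliers are assumed to have been removed from lt_runtimes already)
CHECK lt_runtimes IS NOT INITIAL.

LOOP AT lt_runtimes INTO lv_runtime.
  lv_mean = lv_mean + lv_runtime.
ENDLOOP.
lv_mean = lv_mean / lines( lt_runtimes ).      " average run time

LOOP AT lt_runtimes INTO lv_runtime.
  lv_dev = lv_dev + abs( lv_runtime - lv_mean ).
ENDLOOP.
lv_dev = lv_dev / lines( lt_runtimes ).        " average deviation

lv_threshold = lv_mean + 3 * lv_dev.           " factor 3 is an assumed threshold
IF lv_current > lv_threshold.
  " the run time is judged "Too Long" -> red icon / alert in RSPCM
ENDIF.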
So let us check what process made this current run longer. We press the Display Processes button.

Of course, a DTP. You would not have expected something different, would you? I could now click on the DTP load to open the DTP request log and start analyzing what went wrong, but at this point let's rather go back to the RSPCM overview list and look at the Delay column.

Today is 14.5.2010, so an expected start on 27.4.2010 I would indeed judge heavily delayed. But how does the system come to the conclusion that the chain should start on 27.4. (plus or minus six and a half days)? The answer comes when clicking on the LED.

Obviously this chain was executed only five times in the past, and it had been started once a week (on average). It seems the responsible developer has lost interest in this chain in the meantime. If he continues not to start the chain, the system will stop complaining about the delay and turn the icon grey. Now it is important to note that this chain is started either immediately by hand or by event. If it were scheduled periodically in batch, the delay would not be calculated in this statistical manner; instead, the delay of the trigger job is observed and the alert is raised after 10 minutes of delay.
And now the question about the two buttons in the status popup is also answered: they open up the run time and start time popups shown above.
I hope you have fun using these new functions and appreciate that it is not necessary to laboriously customize any thresholds in order to get meaningful alerts.

Performance Improvements for DataStore Objects


Posted by Klaus Kuehnle

A lot of time and effort has been invested in SAP BW 7.30 to improve the performance of operations on DataStore Objects, such as request activation. The most important improvements are:
1. database partitioning for DataStore Objects
2. mass lookups in activation of requests in DataStore Objects
3. dynamic flag unique data records
4. faster request activation in DataStore Objects on databases with massively parallel processing architecture
5. lookup into DataStore Objects in transformation
The following sections describe these points in more detail.

(1) Database Partitioning

The E fact tables of InfoCubes can be partitioned, by month or fiscal period, on databases that support
range partitioning (for more details, see here). In SAP BW 7.30, this is now also possible for the active
tables of standard DataStore Objects. More precisely, if a standard DataStore Object has at least one key
field referring to InfoObject 0CALMONTH or InfoObject 0FISCPER, then the active table of this DataStore
Object can be partitioned by one of these key fields. The advantage of this partitioning is faster access to
data due to partition pruning, provided that the partition criterion is part of the selection criterion (where
clause).
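The benefit shows up whenever the lookup or query restricts on the partitioning characteristic. A small sketch, using a placeholder active table /BIC/AZSALESO00 partitioned by a key field that references 0CALMONTH:

* Placeholder active table, partitioned by the key field CALMONTH (0CALMONTH).
DATA lt_result TYPE STANDARD TABLE OF /bic/azsaleso00.

* Because the WHERE clause restricts CALMONTH, the database only has to read the
* partitions for 2011 (partition pruning) instead of scanning the whole table.
SELECT * FROM /bic/azsaleso00
  INTO TABLE lt_result
  WHERE calmonth BETWEEN '201101' AND '201112'.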
To activate partitioning, go to the menu and choose Extras -> DB Performance -> DB Partitioning (in edit mode of a standard DataStore Object).

A popup appears where you can select a key field that refers to the InfoObject 0CALMONTH or InfoObject
0FISCPER (you cannot select anything if there is no field referring to one of these InfoObjects).

Any changes made here will become effective after activating the DataStore Object.
Note that you can only change the partitioning settings if the DataStore Object is empty. This means that
you should decide whether you want a DataStore Object to be partitioned before you load any data into it.

(2) Mass Lookups


Requests are activated in a standard DataStore Object by splitting the data in the activation queue into
packages. These packages are then activated independently of each other (and usually simultaneously).
For each data record of the activation queue, the system performs a lookup into the active table to find out
whether a record already exists in the active table with the same key (i.e. whether the activation will cause
an insert or an update in the active table).
In SAP BW 7.30, the lookup is no longer performed record by record, but for all records in a package in one
go. This decreases the activation runtime by 10-30%, depending on the database.

(3) Unique Data Records Can be Set Dynamically


The last section was about lookups into the active table as part of the activation. This lookup is performed
to find out whether the active table already contains records with the same key as a record that you want
to activate. However, if the active table does not contain any records with the same key as one of the
records that you want to activate (for example, if data for a new year or a new region is activated), then
these lookups can be omitted. This reduces the runtime of the activation, particularly if the active table
contains a lot of records.
You can set the flag Unique Data Records in the settings of a standard DataStore Object to guarantee
that no record in the activation queue will ever have the same key as any record in the active table during
activation. The system will then omit the lookup into the active table. Since this setting is very restrictive,
there will probably not be many occasions where you can risk using this flag.
In SAP BW 7.30, you can set this option for one single activation request only. In other words, you can
specify that for the current activation request, none of the keys in the activation queue occur in the active
table, without making any statement on other activation requests. The system then will omit the lookup for
this activation request only.
To select this option, click Change... next to the text Use DataStore Setting (Unique and Changed Data
Records) in the popup where you choose the requests to be activated.

Another popup appears. Choose New, unique data records only.

Confirm the dialog. The system will now run the activation without lookup.
For activation processes scheduled as part of a process chain, you also have the option to force the system
to omit the lookup for init requests. Choose Change... next to the text Uniqueness of Data: Use
DataStore Settings on the maintenance screen of your variant for an activation.

A popup appears where you can choose Init requests always return unique data records.

Confirm the dialog. The system will now omit the lookup for init requests.

(4) New Activation Method for Databases with Massively Parallel Processing
Architecture
The traditional activation of requests in a standard DataStore Object is done by splitting the data of the
activation queue into packages that are activated independently of each other (and usually
simultaneously). The system ensures that records with identical keys are placed into the same package
and stay in their original sequence. It is very important that records with identical keys are activated in the

correct sequence to guarantee correct results. Therefore a package is activated sequentially - record by
record.
Usually there are not many records with identical keys in an activation run. SAP BW 7.30 offers you a new
activation method that can be used on databases with massively parallel processing architecture (these
are currently (January 2011) IBM DB2 for Linux, UNIX, and Windows and Teradata Foundation for SAP
NetWeaver BW).
This new activation method finds all the records in the activation queue that do not have unique keys and
activates them in the traditional way. All the other records have unique keys in the activation queue and
can be activated regardless of the sequence. These records are activated simultaneously using a few SQL
statements. This means that the majority of records are no longer read from the database to the
application server, processed and written back to the database. Instead they are processed directly in the
database. This results in a considerable improvement of the activation runtime, in databases with a
massively-parallel-processing architecture.
This new activation method is used automatically by the system (i.e. no user action is needed), provided
that certain conditions are met (list not complete):

this new activation method is supported for the database used (currently only IBM DB2 for Linux,
UNIX, and Windows and Teradata Foundation for SAP NetWeaver BW, as mentioned above)

SID generation during activation is not requested

the option unique data records is not set

the aggregation behaviors of all (load) requests that are activated in one go are identical

the option do not condense requests is not set


The performance gains depend on the data as well as on the hardware. In scenarios with very suitable
conditions, performance can improve 2 to 3 times.
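A conceptual sketch of the idea (placeholder table names /BIC/AZSALESO40 for the activation queue and /BIC/AZSALESO00 for the active table, DOC_NUMBER as the key; this is not SAP's internal implementation):

* Conceptual sketch only - not the actual activation coding.
TYPES: BEGIN OF ty_key,
         doc_number TYPE c LENGTH 10,
       END OF ty_key.

DATA lt_dup_keys TYPE STANDARD TABLE OF ty_key.

* 1) Keys occurring more than once in the activation queue must keep their
*    sequence and are activated in the traditional way, record by record.
SELECT doc_number FROM /bic/azsaleso40
  INTO TABLE lt_dup_keys
  GROUP BY doc_number
  HAVING count( * ) > 1.

* ... traditional record-by-record activation for the keys in lt_dup_keys ...

* 2) All remaining records have unique keys, so their order does not matter.
*    They are written to the active table with a few set-based SQL statements
*    that run entirely inside the database, e.g. (pseudo-SQL):
*      INSERT INTO <active table>
*        SELECT ... FROM <activation queue>
*        WHERE key NOT IN (<duplicate keys>)
*    so the data never has to travel to the application server.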

(5) Lookup into DataStore Objects in Transformation


In SAP BW 7.30, you can define a transformation rule to fill a field in the transformation target by reading the value from a DataStore Object (similar to reading from master data). More information on this can be found in the documentation, in the section Read from DataStore Object.
This new method of reading from DataStore Objects in a transformation is intended to replace the routines
written by users as transformation rules to read values from DataStore Objects. Besides making it more
convenient to read from DataStore Objects, this replacement will result in performance gains in many
cases (depending on the implementation of the replaced user routine). This is because data is read from
the DataStore Object once per data package instead of record by record.

How to save your LISTCUBE selection screen and the output
Posted by Nanda Kumar
Hi All,
I just found a quick piece of information about the LISTCUBE tcode in SAP BI and thought of sharing it with others, because I suspect only very few make use of this.
Have you ever thought about how to save the selection screen and output fields in tcode LISTCUBE? If not, below are the steps to be followed to make use of it.
Advantages:
This helps with reusability, avoiding running the LISTCUBE tcode again and again for the same selections.
It also helps you to reduce a certain amount of your manual work.

Here are the steps to be followed.


1. Go to the TCODE LISTCUBE.
2. Select the InfoProvider you need to display.

3. At the same time, give a program name starting with Z or Y (this is the program which is going to be reused) and please make a note of it.

4. Now execute the screen, which displays your selection screen. Select the fields you need for the selection, and then also select the fields you need for the output using the "field selection for output" tab.
5. After selecting them, click the Save button at the top of the screen to save them as a variant.

6. You will get a popup screen which asks you for a variant name and its description; enter the needed info and save it again through the Save button or through the menu Variant -> Save.

7. You can make use of the option check boxes available on the screen, like Protect Variant, which prevents others from changing this variant in the same program, meaning it can be changed only by the user who created it. The others are not strictly required; if you need to know their purpose, just press F1 on the corresponding check box.
8. Now go to TCODE SE38, enter the program name as the one you gave in the LISTCUBE transaction, and execute it.

9. Once you execute it, you will get the screen as you defined it in LISTCUBE. Click on the variant button for saved variants, or click on the "field selection for output" tab for your changes, for example to save another variant or to change the current variant.

10. If you want to view the variants saved from LISTCUBE, you can use the variant button and select the already stored variant you want to display.

11. Select the needed variant and execute the program to get your desired output.
Note: you can also store any number of variants from this program itself.

Now, instead of going into LISTCUBE again and again to make your selections, you can make use of this program whenever you want to use the same InfoProvider to display your output any number of times. This will reduce the time spent on entering your selection options and your desired output fields.
Dependencies: If the structure of the Info Provider changes, a program generated once is not adapted
automatically. The program has to be manually deleted and generated again to enable selections of new
fields.

How to Fetch Your InfoPackage Details From the R/3 System
Posted by Nanda Kumar
Hi All,
I just thought of sharing a small piece of code which helps a lot in identifying the BW system or InfoPackage details in the R/3 system while you write your custom code in CMOD at DataSource level. This code will help you in the following situations:
Remote Function Call: if you need to call a Function Module written in the BW system from the R/3 system for any manual manipulation.
1. To identify BW load request details at CMOD code level.
2. When one R/3 system is connected to a couple of BW systems.
3. When, from one R/3 system, the data for the same DataSource has to be sent differently to two different BW systems at CMOD level.
I don't know how many of us have faced such a situation, but a system with a large landscape might well have such issues or scenarios, and for them the below piece of code will be helpful.

Scenario 1

Consider an R/3 system with CMOD code for one of its DataSources, where you need to identify some data from the BW system and, based on that, fetch records in R/3 to complete your extraction. To achieve this, a Function Module has been written in the BW system to fetch the data, but that Function Module has to be called from the R/3 system using a Remote Function Call. The situation seems very simple, but while you write the code in the R/3 system there is the complexity of identifying from which BW system (QA, DEV or PRD) the data has to be read, because you can't write the same code in your Development, Quality and Production systems: the client and system names differ, and accordingly the remote call destination will differ.
If you have only three systems, you can go ahead and hardcode the BW system values for DEV, QA and PRD using an IF condition on your R/3 system's client details. But if tomorrow one more system (say a sandbox or readiness system) is added, you would again have to change the code to work in that environment, which adds complexity every time. To avoid such a situation, there is a runtime structure which identifies your InfoPackage details; from it you can get the target system details of your InfoPackage and pass that value to your remote Function Module call to get the BW data in the R/3 system for your manual calculations.

R/3 System: BKTCLNT010.

BW system: BWSCLNT010.

Code to be Used in your CMOD

The BW - HANA Relationship


Posted by Thomas Zurek

With the announcement of HANA*, some customers, analysts and others have raised the question of how HANA relates to BW, with a few of them even adding their own home-made answer in the sense that they speculate that HANA would succeed BW. In this blog, I would like to throw in some food for thought on this.
Currently, HANA's predominant value propositions are
(i) extremely good performance for any type of workload
(ii) a real-time replication mechanism between an operational system (like SAP ERP) and HANA
Let's match those for a moment with the original motivation for building up a decision-support system (DSS) or data
warehouse (DW). In the 1990s, a typical list of arguments in favour of such a system looked like this:

1. Take load off operational systems.
2. Provide data models that are more suitable and efficient for analysis and reporting.
3. Integrate and harmonize various data sources.
4. Historize - store a longer history of data (e.g. for compliance reasons), thereby relieving OLTPs from that task and the related data volumes.
5. Perform data quality mechanisms.
6. Secure OLTPs from unauthorized access.


Installing a DW is typically motivated by a subset or all of those reasons. There is a particular sweet spot in that area, namely a DW (e.g. an SAP BW) set up for reasons 1 and 2, but with all the other arguments not being relevant as it is connected to basically one** operational system (like SAP ERP). Here, no data has to be integrated and harmonized, meaning that the "T-portion" in ETL or ELT is void and thus that we are down to extraction-load (EL), which, in turn, is ideally done in real time. So value proposition (ii) comes in very handy. Critics will argue that such systems are no real data warehouses ... and I agree. But this is merely academic, as such systems do exist and are a fact. So, in summary, there is a good case for a certain subset of "data warehouses" (or reporting systems) that can now be built based on HANA with (i) and (ii) as excelling properties - see the top left scenario in figure 1 below.
Now, will this replace some BWs? Yes, certainly. In the light of HANA, a BW with a 1:1 connection to an ERP might not
be the best way anymore. However, will this make BW obsolete in general? No, of course not. As indicated above:
there is a huge case out there for data warehouses that integrate data from many heterogenous sources. Even if those
sources are all SAP - e.g. a system landscape of multiple unharmonized ERPs, e.g. originating from regional structures,
mergers and acquisitions - then this still requires conceptual layers that integrate, harmonize and consolidate huge
volumes of data in a controlled fashion. See SAP NetWeaver BW: BW Layered Scalable Architecture (LSA) Building
Blocks for a more comprehensive discussion of such an approach. I sometimes compare the data delivered to a data
warehouse with timber delivered to a furniture factory: it is raw, basic material that needs to get refined in various

stages depending on the type of furniture you want to produce - shelves might require fewer steps (i.e. "layers") than a
cupboard.
Finally, I believe that there is an excellent case for building a BW on top of HANA, i.e. to combine both - see the bottom right scenario in the figure below. HANA can be seen as an evolution of BWA and, as such, this combination has already proven to be extremely successful: BW and BWA (see Understanding Query Performance in NW BI and BIA) have been in the market together for about 5 years, and the SAP BW 7.3 beta is available, albeit mainly focusing on the analytic layer of BW (in contrast to the DW layer). When you continue this line of thought and assume that HANA is not only BWA but is also able to comply with primary storage requirements (ACID etc.), then huge potential opens up to support, for example,

integrated planning (BW-IP): atomic planning operators (used in planning functions) can be implemented natively inside HANA, thereby benefitting from the scalability and performance seen with BWA and OLAP, and also from avoiding the transport of huge volumes of data from a DB server to an application server,
data store objects (DSOs): one can think of implementing such an object natively (maybe as a special type of table) in HANA, thereby accelerating performance-critical operations such as the data activation.
This is just a flavour of what is possible. So, overall, there are 4 potential and interesting HANA-based scenarios that I see, and they are summarized in figure 1. I believe that HANA is great technology that will only come to shine if the apps exploit it properly. SAP, as the business application company, has a huge opportunity to create those apps. BW (as a DW app) is one example which started on this path quite some time ago. So the question about the BW-HANA relationship has an obvious answer.
* High Performance ANalytic Appliance
** The case remains valid even if there are a few supporting data feeds, e.g. from small complementary sources.

Figure 1: How HANA can potentially evolve in existing SAP system landscapes ... in a non-disruptive way.

Demystifying aggregates in SAP BI 7.0


Posted by subash sahoo
Aggregates are subsets of InfoCube data, where the data is pre-aggregated and stored in an InfoCube structure. Aggregates on an InfoCube can be considered similar to an index on a database table. Creating an aggregate on an InfoCube is one of the few ways to improve the performance of an SAP BW query. A subset of the InfoCube data is stored in an aggregate. As a result, the response time when reading the data from the aggregate will be much faster than reading from the InfoCube.
When a query gets executed, it is split into several sub-queries. The split of the query is based on the following rules:
Condition 1: Parts of the query on different aggregation levels are split.
Condition 2: Different selections on characteristics are combined.
Condition 3: Parts on different hierarchy levels or parts using different hierarchies are split.
Aggregates that we build should meet the needs of query navigation and of the several sub-queries within each query. If we have more than one aggregate built on top of a cube, then after the query split the OLAP processor searches for an optimal aggregate for each part. This means that a single query, split into multiple sub-queries, can access more than one aggregate to get the desired output. Refer to the flow chart shown below. At the end, the OLAP processor consolidates all the results and gives the desired output.

If query performance is poor, we can use the following two rules of thumb to decide whether aggregates will actually help improve performance.

Condition 1: Summarization ratio greater than 10

The summarization ratio is the ratio of the number of records in the cube to the number of records in the aggregate.
Note: If the summarization ratio is very high, say 300 or more, the aggregate might be very specific to one query and cannot be used by more than one query.

Condition 2: Percentage of DB time > 30% of the total query run time

Please note that building an aggregate will only improve the database access time.
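Both checks boil down to simple arithmetic. A small sketch with made-up example numbers:

* Example arithmetic for the two rules of thumb (values are made up).
DATA: lv_cube_recs TYPE i VALUE 50000000,   " records in the InfoCube
      lv_aggr_recs TYPE i VALUE 2500000,    " records in the aggregate
      lv_db_time   TYPE f VALUE '45.0',     " DB time of the query in seconds
      lv_total     TYPE f VALUE '60.0',     " total query run time in seconds
      lv_ratio     TYPE f,
      lv_db_share  TYPE f.

lv_ratio    = lv_cube_recs / lv_aggr_recs.   " summarization ratio = 20
lv_db_share = lv_db_time / lv_total * 100.   " DB share of the run time = 75%

IF lv_ratio > 10 AND lv_db_share > 30.
  " both rules of thumb are met -> an aggregate is likely to help this query
ENDIF.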

There is a tool called the Workload Monitor (ST03N) that is used to track system performance. It basically helps to analyze the queries with the highest run time.

The DB time can also be determined by executing the query through RSRT. The EVENT ID 9000 (Data Manager), highlighted in the screenshot below, gives the DB time.

The summarization ratio can be seen by executing the query from the RSRT transaction. Select the query whose performance you want to monitor and click on the "Execute and Debug" button as shown in the screen print below.
Usage: Every time a query hits the aggregate, the Usage counter gets incremented by 1. If your Usage counter is zero, in 99% of cases you can safely assume that your aggregates are not being used.

Last Used Date: Along with the Usage indicator, another important parameter to keep in mind is the Last Used Date. For an aggregate, the Usage indicator might show some big number, but when you take a closer look at the last used date, it might show a date 6 months prior to the current date. This means that this aggregate has not been used by any query in the last 6 months. The reason might be that many changes were made at the query level but the aggregates were not updated or modified accordingly.
Valuation:
Valuation is the systems best guess of judging how good an aggregate is. Usually, if the summarization ratio is high, the number of +
signs in the valuation will be more. There could be some aggregates where the number of plus signs in an aggregate is more, but the
aggregate as such is never used. Though the valuation will give a fair idea about the aggregates created, it need not be 100% right.

Once the data is loaded into the cube, the following steps have to be carried out for the data to become available for reporting at the
aggregate level.

Activate and Fill: Aggregates have to be activated and filled.

Aggregate Rollup: The newly loaded data at the cube level is aggregated and made available for reporting.

Change Run (Master Data Activation): Activates the changes to the master data; all data containing navigational attributes is
realigned.

Adjustment of time-dependent aggregates: Recalculates the aggregates that contain time-dependent navigational attributes.

Any aggregate created on a cube has to be monitored periodically for its usage and last used date. If an aggregate is not being
used, it should be either modified or removed.

Unused aggregates add overhead to data loads (rollup and change run), so it is good practice to drop aggregates that are not
being used.

Aggregates should be relatively small compared to the InfoCube.

Several similar aggregates can often be consolidated into a single aggregate.


TOOL - Pre-Analysis of the Aggregate Filling
With changes beyond a certain magnitude, modifying the aggregate becomes more time-consuming than reconstructing it. You can
change this threshold value.
Steps: In the Implementation Guide, choose SAP NetWeaver --> Business Intelligence --> Performance Settings --> Parameters for
Aggregates, section Percentage Change in the Delta Process. In the Limit with Delta field, enter the required percentage (a number
between 0 and 99); 0 means that the aggregate is always reconstructed. Change this parameter as often as necessary until the
system response is as quick as possible. We can assess this with the help of the tool called "Pre-Analysis of Aggregate Filling".

By default, BLOCKSIZE is set to 100.000.000. SAP recommends changing this setting to a value between 5.000.000 and 10.000.000;
this reduces joining and sorting on disk and also reduces log space consumption. You should not set BLOCKSIZE to lower values,
because this can result in a WHERE condition that forces the optimizer to use an index other than the clustered index. Use 5.000.000
for systems with less than 10 GB of real memory; if you have more real memory, SAP recommends setting BLOCKSIZE to a value of
up to 10.000.000.
Ideally, the index on the time dimension should be used when reading data from the fact table, because the fact table is clustered
according to the time dimension. In many cases, however, the index on the data request dimension is chosen instead.
The tool helps you identify which block size you can actually generate with your existing aggregates. Use the button "Pre-Analysis of
the aggregate filling" in the aggregate maintenance window to see the SQL statement that is used depending on the value of the
parameter BLOCKSIZE.
We can then run an EXPLAIN on this statement in ST05, check which index is used to access the fact table, and act accordingly.

SAP BI Statistics Installation


Posted by nilesh pathak
This blog shares my personal experience during the installation of the "Standard BI Statistics" content on my BW system. It is
provided by SAP as standard content for analyzing the performance of the different cubes and reports in the system.
During the initial installation via transaction SPRO, the job RSTCC_ACTIVATEADMINCOCKPIT_NEW was executed in the background,
kept getting cancelled, and also produced a short dump in ST22 with the message "Assertion failed".
To resolve this issue I implemented SAP Note 1330447 - "ASSERT incorrect when RRI jump targets are activated". Generally it is
recommended to install BI Content individually in all three environments - development, quality and production - otherwise one may
encounter several issues when transporting the objects.
Normally, if the BI Statistics installation is successful, the relevant BI Statistics cubes and reports are activated and one should be
able to see the statistics in transaction ST03N as well. But when I opened transaction ST03N, I was not able to see any statistical
information, because the cube 0TCT_C01 (Front End and OLAP Statistics - Aggregated) was getting zero records every day, even
though the initialization was successful. The DataSource 0TCT_DS01 also showed zero records during extraction in RSA3.
When I checked the view RSDDSTAT_OLAP, the data was there, and ideally it should have been loaded into cube 0TCT_C01 (as this
cube gets its data from this view), but that was not happening in my case. So I debugged the DataSource 0TCT_DS01 in RSA3 in
delta mode and set a breakpoint on the fetch_package method in function module RSTC_BIRS_OLAP_AGGREGATED_DATA.
The DataSource 0TCT_DS01 is based on two views, RSDDSTAT_DM and RSDDSTAT_OLAP. While debugging I found that data was
coming from both views but was being filtered out due to the settings maintained in table RSTCTEXTRACT. Generally, the step
categories MDX and OLAP are selected in RSTCTEXTRACT, but since my system release was 701 with Support Package 4, I had to
set the flag for the step category OTHERS in RSTCTEXTRACT as well, in order to extract the data from DataSource 0TCT_DS01 in RSA3.
After maintaining the settings as described above I was able to get data into cube 0TCT_C01 every day. I hope this blog will help
with the issues that may occur during a BI Statistics installation.

How to convert Classic InfoCubes/DSOs to HANA-optimized cubes/DSOs
Posted by Isha Gupta
We can convert a standard InfoCube into an in-memory InfoCube by using the report RSDRI_CONVERT_CUBE_TO_INMEMORY, or by
calling transaction RSMIGRHANADB directly. To add the option of converting DSOs in the same program/transaction, add the
parameter ENBL_HDB_MIGR_DSO = X to the RSADMIN table.
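In practice the RSADMIN parameter is usually maintained with the standard report SAP_RSADMIN_MAINTAIN (SE38), supplying the object name and value there. Purely as a minimal ABAP sketch of the same change, assuming the standard RSADMIN layout with the fields OBJECT and VALUE (verify this in SE11 or simply use the standard report instead):

REPORT zset_rsadmin_hdb_dso_flag.

* Enable DSO conversion in RSDRI_CONVERT_CUBE_TO_INMEMORY / RSMIGRHANADB
* by adding the parameter to table RSADMIN (assumed fields: OBJECT, VALUE).
DATA ls_rsadmin TYPE rsadmin.

ls_rsadmin-object = 'ENBL_HDB_MIGR_DSO'.
ls_rsadmin-value  = 'X'.

* MODIFY inserts the row or updates an existing entry
MODIFY rsadmin FROM ls_rsadmin.
IF sy-subrc = 0.
  COMMIT WORK.
  WRITE: / 'RSADMIN parameter ENBL_HDB_MIGR_DSO set to X.'.
ENDIF.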
During a conversion, a lock is set that prevents all maintenance activity and load processes. The conversion is executed as a
stored procedure within HANA and therefore shows excellent performance.

During conversion, DataStore objects are not available for reporting or staging. Querying the InfoCubes is, however, still
possible during this time.
After migration to the SAP HANA database, normal standard InfoCubes reside in the SAP HANA database's column store and have
a logical index (calculation scenario). For analysis, they behave like BWA-indexed InfoCubes. If InfoCubes have their data
persistency in BWA, that content is deleted during the system migration to HANA and the InfoCubes are set to inactive. If you want
to continue using one of these InfoCubes as a standard InfoCube, you need to activate it again and reload the data from the former
primary persistence (a DataStore object, for example).
We can convert a DSO into an in-memory DSO only if it is of the standard type. In addition, standard DataStore objects can be
converted into SAP HANA-optimized DataStore objects only if:
1) The DSO is not included in a BW 3.x data flow (for example, by using update rules).
Exception: If you use the BW 3.x data flow only on the inbound side (but not on the outbound side), you can still convert
the DataStore. However, in general we recommend that you migrate the old data flow before converting the DataStore.
See: http://help.sap.com/saphelp_nw73/helpdata/en/8d/6b1b58cc1744e1bce7898a50e19368/frameset.htm
2) DSO is not supplied by real-time data acquisition. Since small data volumes are usually produced for each activation
step when data is supplied by RDA, RDA-supplied DataStores do not benefit in the same way as staging-supplied objects.
Continue to use classic DataStore objects for this use case instead.
3) The DSO is not part of a HybridProvider or a semantically partitioned object (SPO). You can, however, recreate SPOs based on
SAP HANA-optimized DataStore objects.
Different options available for DSO migration:
1) Simple Migration: Only Active data gets converted and change log history gets deleted. Conversion is faster and
requests cannot be rolled back after migration.
2) Full Migration: Active data gets converted along with change log history. Conversion is slower but there is no restriction
on rollback of requests.

Why BW and HANA are such a good combination


There has been much talk and debate about whether HANA replaces BW. See Steven Lucas's blog, the associated
comments, discussions on LinkedIn or Twitter (e.g. start somewhere around here) or some blogs (e.g. this one), and
probably many other places. The sheer number and heat of those threads indicates, in my opinion, that the question is so
relevant that it underlines how relevant BW is.

Figure 1: An EDW comprises a DB that is complemented with application code (X).


Now, a lot has already been said about the matter. The origin of the controversy has a little to do with the perception
that an enterprise data warehouse (EDW) basically is a DBMS. Actually, in a discussion with some well-known analysts
some time ago, I popped the question of whether they would see a difference and was met with a few seconds of silence. In my
opinion, a DBMS is a fundamental component of an EDW, like an engine is to a car. But the DBMS alone makes no EDW, just as
an engine alone makes no car. The car's chassis, body and interior provide the comfort for the consumer and make sure
that the engine is used in the most suitable way. Similarly, there is software surrounding a DBMS that makes it the heart of
an EDW. BW is simply one instance of software that can do that. Figure 1 shows a slide that I frequently use to create
that awareness.
Why, in my opinion, is BW such a good choice for making HANA the heart of an EDW? Rather than answering this in a
generic, philosophical and potentially non-tangible way, I will give three instances that clearly underline that statement:
1.
BW has the most capable code generator (for queries) to leverage HANA's calculation engine in the most efficient
way. Even experts will struggle to emulate this with handcrafted code. The trick is to issue a graph of related
calculation instructions whose dependencies lie in the calculation semantics (of an OLAP query) and are exposed to the
calculation engine which, in turn, can leverage that knowledge to optimize the processing. Hello-World-style or simple,
SQL-style queries won't see a notable difference, but queries with calculations beyond a single row (YTD, hierarchies,
currentmember, normalizations to group totals, group-specific aggregations, ...) will - and these are neither esoteric nor
rare but mainstream.
2.
Changes in BW create the right context that allows HANA to process load operations in a way that
translates into an overall acceleration of factor 3 (on average) for process chains. Examples are the translation of single-row
operations into mass data operations, or the HANA-optimized versions of InfoCubes and DSOs. In fact, I've seen customer
measurements indicating that many process chains only yield equal performance (compared to classic DBMSs) but
improve significantly once the HANA-optimized InfoCubes and DSOs are used.
3.
In autumn, both BW and HANA are planned to ship a mechanism to distinguish between hot (permanently
used / queried), warm (sporadically used, e.g. in nightly batch processes) and cool (rarely used, e.g. old requests in the PSA)
data. This will translate into significantly better usage of HANA's memory, similar to the data compression
algorithms yielding higher compression rates. As the roles and semantics of tables in BW are well known, they can be
easily classified by a default configuration, which means that a BW-on-HANA system will, by default, use HANA more
efficiently than an operational data mart, where such settings can also be made but need to be set and derived manually.
This list can certainly be extended. It is meant to provide a non-marketing, down-to-earth flavour of what you gain with BW
beyond the usual services. It is not surprising that some customers have lauded it as the most genuine application on and for
HANA that SAP provides.

Delta extraction for 0FI_GL_4 every 30 minutes


Posted by Ravikanth Indurthi
Note: The scenario below is applicable only to the DataSources 0FI_GL_4 (General Ledger: Line Items), 0FI_AP_4
(Accounts Payable: Line Items) and 0FI_AR_4 (Accounts Receivable: Line Items).
Scenario
In BW, the delta extraction for 0FI_GL_4 can be scheduled as often as the process chain or InfoPackage allows, whereas
the source system only provides delta records according to the defined timestamp or the predefined delta pointer. So even
if the data is pulled multiple times in the same day in the BW system, e.g. every 30 minutes, the deltas will fetch zero
records if the delta pointer in the source system is limited to a certain period, e.g. only once a day.
Check the below link:
Financial Accounting: Procedure for Line Item Extraction
Constraints
Per day, no more than one delta dataset can be transferred for InfoSource 0FI_GL_4. The extracted data therefore has
the status of the previous day. For further data requests on the same day, the InfoSource does not provide any data.
In delta mode, data requests with InfoSource 0FI_AR_4 and InfoSource 0FI_AP_4 do not provide any data if no new
extraction has taken place with InfoSource 0FI_GL_4 since the last data transfer. This ensures that the data in BW for
Accounts Receivable and Accounts Payable Accounting is exactly as up to date as the data for General Ledger
Accounting.
Because in the source system delta extraction is based on CPU Time, the deltas are fetched once per day.

So if the source system delta extraction can be made to run on a more frequent, time-based schedule, the same data can
easily be pulled into BW by a process chain.
However, you can change this standard delivery. For more information, see SAP Note 485958.
Solution
This can be achieved by changing the settings on the back end of the source system.
As the requirement is to get the deltas every 30 minutes (1800 seconds), the table BWOM_SETTINGS needs to be updated:
we create an entry, or change the existing entry, for the parameter BWFINSAF with the value 1800 (seconds).
The delta pointer (safety interval) for the DataSource is then set to 30 minutes, so that new deltas are created in the source
system.
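As a rough sketch of the required change (the entry is normally maintained via SM30 or the FI extraction customizing; the field names OLTPSOURCE, PARAM_NAME and PARAM_VALUE are assumptions based on the standard table layout, so verify them in SE11 before using anything like this):

REPORT zset_fi_delta_safety_interval.

* Reduce the FI delta safety interval so that new deltas become
* available roughly every 30 minutes (1800 seconds).
* Field names are assumed - check BWOM_SETTINGS in SE11 or use SM30 instead.
DATA ls_setting TYPE bwom_settings.

ls_setting-oltpsource  = space.        "entry without a specific DataSource
ls_setting-param_name  = 'BWFINSAF'.
ls_setting-param_value = '1800'.

MODIFY bwom_settings FROM ls_setting.
IF sy-subrc = 0.
  COMMIT WORK.
  WRITE: / 'BWFINSAF set to 1800 seconds.'.
ENDIF.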

Coming to the BW side, we need to schedule the start variant of the process chain every one or two hours, depending on the
time the chain takes to complete. The latest FI-GL data can then be reported.

After loading the data into the cubes, the BEx queries will be able to show the latest financial data in the report output.
BusinessObjects reports and OLAP universes based on these BEx queries will also fetch the latest FI data, so BO reports such
as WebI or Crystal Reports built on the BO universe show the same data.
Related SAP Notes:
SAP Note 991429
SAP Note 485958

BW 7.30: Different ways to delete Master data


Posted by Krishna Tangudu in SAP NetWeaver Business Warehouse
In the older version, i.e. BI 7.0, we experienced memory overflows while deleting master data, because the system checks all
the dependencies of the data on the other InfoProviders in the system. The new deletion tool is available in BI 7.0 (SP 23) as
an optional feature; in BW 7.3 it becomes the default option.
I will explain how the new functionality helps to overcome the above problems, with a simple navigation.
Navigation:
1) Go to DWWB (RSA1).
2) Go to the InfoObject tree and choose Delete Master Data from the context menu of the desired InfoObject.

In the previous version of the master data deletion functionality, the system checks whether the desired InfoObject has any
dependencies, i.e. whether the InfoObject is used in master data such as hierarchies or in transaction data such as InfoCubes.
As a result, the system takes a long time to delete and the deletion may also lead to memory overflows.
The new functionality looks as shown below:

We can see four options in this wizard. Let us briefly discuss each of them.

1) Delete SIDs:
This option is used to delete the SID values in the SID table /BIC/S<InfoObject name>. We can choose whether we also want to
delete the SIDs; if so, we have to check the option Delete SIDs, which ensures that all SIDs are deleted.
The system pops up a warning before you delete SIDs, as shown below.
SAP recommends not deleting SIDs, as this may lead to inconsistencies when reloading the same data.
In the older version, before deleting we had to make sure that the chance of reloading the same characteristic was low, because
reloading the data could lead to inconsistencies if the where-used check performed by the system was not complete.
In the newer version, we can check the usage of the master data using a search mode as per our requirement. There are four
different search modes, which I will discuss in detail later in this section.
2) Delete Texts:
This option is used to ensure that the texts are deleted along with the master data of the selected InfoObject.

3) Simulation mode:
When we delete master data, we have to check whether it is used in InfoCubes or hierarchies before deleting, because these
dependent objects have to be locked while data is deleted from them, which means we need downtime to execute the deletion.
It is therefore better to know beforehand which objects are affected, so that we can delete the dependent data from those targets
first, thereby reducing the number of locks before proceeding with the deletion of the master data.
Simulation mode helps you do this by performing a where-used check based on the search mode you have chosen and displaying
the logs, along with the cubes or hierarchies in which the InfoObject is used.

4) Perform the deletion in the background:
Every time we try to delete master data, it takes more than one dialog process. Previously, to run the deletion in the background
we had to use a process chain; we can now do the same from here, as shown below.
You can choose the scheduling time in the screen below to trigger the background job. In this case I chose "Immediate".
You can then monitor the background job in SM37 (the job runs under the name of the user who created it), or choose Application
Log on the screen below for the same purpose.
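If you prefer not to search SM37 manually, the deletion jobs can also be listed programmatically. This is only a small sketch: it assumes the job name prefix DEL-D- that the blog mentions further below, and it reads the standard job status table TBTCO (verify the field names on your release).

REPORT zlist_md_deletion_jobs.

DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
      ls_job  TYPE tbtco.

* List recent master data deletion jobs (prefix DEL-D- as used by the wizard)
SELECT * FROM tbtco INTO TABLE lt_jobs
  WHERE jobname LIKE 'DEL-D-%'
  ORDER BY strtdate DESCENDING strttime DESCENDING.

LOOP AT lt_jobs INTO ls_job.
  WRITE: / ls_job-jobname, ls_job-jobcount, ls_job-status,
           ls_job-strtdate, ls_job-strttime.
ENDLOOP.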

There are four search modes:

1) Only One Usage per Value:
The system performs a where-used check; as soon as it finds a value being used by an InfoProvider, the search continues with the
other values. This is the default setting and can be used if you want to minimize the runtime of the where-used check.
On clicking the Start option, the simulation starts and you can find the logs as below.
You can see in the logs that the system creates the where-used tables /BI0/0W00000032 and /BI0/0W00000033 to generate the
where-used list, and deletes them again after the deletion.
It starts a job DEL-D- in the background, as we selected both background processing and simulation. Three child jobs starting
with BIMDCK are created to compute the where-used list table.
Where-used Table:
Here you can see the message stating that the master data is deleted in the background, as below.
2) One Usage per Value per Object:
We use this to find out about the usage of the master data in all InfoProviders. The system performs a where-used check; once it
finds a value being used by an InfoProvider, the search continues with the other values in that InfoProvider.
This mode helps to determine from which InfoProviders we need to delete data first, before we continue to delete them all.

3) One Usage Is Sufficient:
We use this when we want to delete the attributes and texts completely from the InfoObject. If even one of the master data records
is found to be in use, the whole process stops with a message that the data is already being used.

4) All Usages of All Values:
All usages are searched in all InfoProviders, along with the usage count (as in the earlier functionality). This provides us with the
complete where-used list and takes the maximum runtime.

You can see in the logs that the system creates a where-used table /BI0/0W00000031 to generate the where-used list, and deletes
it again after the deletion.
In my case I used Only One Usage per Value and scheduled the deletion in the background after comparing the above results. This
helped me to identify the associated dependencies of the data in the other InfoProviders and to reduce extra locks during the
deletion.

Selective Deletion:
We can use report RSDMDD_DELETE_BATCH for selective deletion from any InfoObject, providing the selections via the FILTER
button. The simulation option can be set via the parameter P_SMODE.
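If the selective deletion needs to run regularly, the report can also be scheduled as a background job with a previously saved selection variant, which avoids hard-coding any selection-screen parameters. The sketch below assumes a hypothetical variant ZDEL_0MATERIAL has already been saved for RSDMDD_DELETE_BATCH; the job scheduling itself uses the standard JOB_OPEN / JOB_CLOSE function modules.

REPORT zschedule_md_deletion.

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'MD_DELETE_0MATERIAL',
      lv_jobcount TYPE tbtcjob-jobcount.

* Open a background job
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
CHECK sy-subrc = 0.

* Run the deletion report with a previously saved variant
* (ZDEL_0MATERIAL is a hypothetical variant name)
SUBMIT rsdmdd_delete_batch
  USING SELECTION-SET 'ZDEL_0MATERIAL'
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Release the job to start immediately
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'
  EXCEPTIONS
    OTHERS    = 1.
IF sy-subrc = 0.
  WRITE: / 'Deletion job released - monitor it in SM37.'.
ENDIF.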

SAP BW Version 7.3 Overview of promising features!


Added by Shah Nawaz Shabuddin
I am concentrating on some of the new features provided in BW 7.3 and their benefits.
As part of this wiki, I will discuss the following:
1.
Hybrid providers
2.
Graphical Dataflow modeler
3.
Accelerated data loads
4.
Semantic partitioning
5.
Integrated planning front end
6.
Analytical indexes
There are numerous improvements in release 7.3: new ways of delivering data, new reports and new BW applications.
The changes are mainly in the backend, used by the BW architecture and development teams. SAP has invested the
majority of its frontend development effort in BusinessObjects, so with the exception of minor enhancements the
BEx frontend is unchanged. A number of the new features rely on a BW Accelerator (BWA), an increasingly important tool
for businesses.
1. Hybrid providers
Firstly, let's talk about HybridProviders. With HybridProviders, BW consultants' lives are made easy!
To make data held within a DSO available for reporting in BI7, there are a number of steps we need to perform: create the
DSO, InfoCube, transformation/DTP and MultiProvider, store the data in a BWA, connect them all up, and then schedule and
monitor the load jobs.
A HybridProvider takes a DSO and does all of this for you, removing substantial development and maintenance effort. Just load
your data into a DSO, create a HybridProvider and start reporting. You can even build your HybridProvider on a Real-time
Data Acquisition (RDA) DataSource, which could potentially provide near real-time reporting from a BWA.
A typical usage scenario could be that you want to extract your purchase orders from R/3 and make them available for
reporting. Using a HybridProvider, as soon as the data is loaded into the DSO it becomes available for reporting with
all the benefits of an InfoCube and BWA.
A hybrid Provider within the Administrator Workbench
2. Graphical Dataflow modeler
A graphical way of building a datamodel has been introduced in BW 7.3. The Graphical Dataflow Modeler is a new
function within the Administrator Workbench (transaction RSA1).
This provides:
Top-down modeling of a new dataflow
Organization of existing dataflows
Creation of template dataflows using best practice
SAP-delivered pre-defined dataflows
Using a drag-and-drop interface, a developer can build a dataflow by selecting either existing objects, or by directly
creating new objects from within the modeler. The entire dataflow can be modeled in this way, from the DataSource to the
MultiProvider, and even as an OpenHub destination.
Any BW system with multiple complex projects would benefit from the retrospective implementation of dataflows. This tool
reduces the effort and complexity associated with the administration and maintenance of a system. For example, a single
dataflow could be delivered for all the data used within an application, meaning the system will become easier (and
therefore cheaper) to support.
3. Accelerated data loads
Accelerated data loads are now possible - SAP has rewritten the underlying data load code for optimized performance.
In many places in BW 7.3, the technical coding has been optimized to improve loading performance. Load times into
DSO objects have been reduced by up to 40% compared to BI7, and the loading of master data has also been
accelerated.
Improvements in load performance translate into benefits for the business. Data can be made available earlier in the day,
reloads are quicker (so cost less) and larger volumes of data can be loaded each and every day.
4. Semantic partitioning turned automatic!!!
With BW 7.3, the BW system can automatically semantically partition your InfoProviders. Using a wizard interface, you
can create multiple partitions that store your data, handle the mappings between partitions and schedule the loading
processes. The need for the manual creation of multiple transformations and DTPs has been removed.
Semantic partitioning is where we spread similar data across several InfoCubes or DSOs, split according to, say, region or
year. This allows increased performance and stability, though it comes at the cost of increased development and
maintenance.
A typical scenario would be to semantically partition an InfoCube that is used for global sales reporting. By keeping the
data for each region or year separate, performance is improved, and maintenance and daily loading are easier.
Semantic partitioning will provide the business with both quicker and more reliable reporting, at a lower cost. Rather than
manually building semantic partitioning only where critical, you can now use it even in smaller applications at little extra
cost.

Maintaining a semantically partitioned DSO


5. Integrated planning front end
In my view, the current Integrated Planning (IP) frontend in BI7 has always been unsatisfactory. Whilst it does the job, I felt
that it is slow to use and requires MS Internet Explorer.
Interestingly, SAP has now migrated the IP frontend to a traditional transaction screen within the SAP GUI. The screens
are quicker and easier to use, and have no dependency on either a Portal or a Java stack.
I believe the business will benefit through lower development and maintenance costs, but I find it so much nicer to use that this
alone is reason enough for it to be a favorite change!
6. Analytical indexes
An Analytical Index (AI) is a transient dataset that is stored within the BWA. BEx or BusinessObjects can be used to report
on them directly. They are defined and loaded from within the Analysis Process Designer (APD), and can contain the
results of a JOIN between a CSV file and query.
The fact that they do not exist in the database, and the ease with which they can be defined, makes them a very interesting
concept for inclusion in BW. Up until now, creating datasets to address specific requirements has always been a
significant exercise and has required transports and loading. AIs therefore reduce the development effort needed to produce a
fast-performing tool that addresses a specific current requirement.
Visible benefits of SAP NetWeaver BW 7.3:
So, all in all, SAP BW 7.3 looks quite promising with all the benefits below.
Wizard based system configuration - Supports copying data flows, process chains etc.

Accelerated data loads - SAP has rewritten underlying code for data loads for optimized performance.

Performance improvements for the BW Accelerator

Automated creation of semantically partitioned objects

Graphical Data flow modeling and best practice data modeling patterns

Admin Cockpit integrated into SAP Solution Manager

Modeling is the next generation of programming languages
Posted by Kobi Sasson in kobi.sasson
Modeling is the next generation of programming languages. You may say, "Well, you are being a little bit dramatic here," so
let me explain.
Using the conventional way (e.g. Java, C#, etc.) to create applications is somewhat of an art: it requires creativity, a lot of
experience and usually a lot of effort. The main reasons for that are clear, but I will just mention a few: an enterprise
application requires you to design and develop your application in a way that is maintainable and fulfills all the application
standards such as accessibility, security, quality, etc. - all of which, at the bottom line, means time and money.
So I suggest a different way: why not just model the application? No code is required and all standards are already applied to
your whole modeled application, which at the bottom line means much less money and time. I'm not saying anything new here, but I
would like to emphasize the different aspects of a good modeling tool:

1.
A robust modeling language - the modeling language should be robust enough to express most of the
requirements of an enterprise application. If it is too simple (such as configuration tools which allow you to define a
very limited set of options and then create an application out of it), it will cover only a limited set of requirements and
therefore help only in very specific cases. On the other hand, the modeling language should not be too complex, as that
would make the modeling tool tedious and not user friendly, which would not fulfill my second requirement.
2.
A simple modeling tool - the modeling tool should be usable by business experts and usually not by developers, so the
modeling tool should be simple enough that the business expert can focus on the business logic and not on
technical or complex stuff.
3.
UI extension points - as the modeling language will not cover all customer requirements, a major point here is to
provide the ability to extend the UI offering of the modeling tool. By providing UI extension points, customers can
cover all of their specific requirements.
4.
Connectivity to various backends - the ability to consume services is a must, but in the enterprise world today
companies have various sources of data (web services, databases, R/3). If you can consume different subsets of data,
you are able to create applications which display and consolidate data from various data sources.
5.
LCM - one last and very important point in the enterprise world is to enable customers to deliver applications from
development to production, and to enable multiple users to work on the applications.

Visual Composer does just that. There are still some points that can be improved, but in general Visual Composer fulfills the
above requirements and is therefore the right solution for creating fully fledged enterprise applications.
I will appreciate your comments and feedback and, of course, if you think of a concrete requirement that is missing in
Visual Composer, please feel free to suggest an idea in the Visual Composer Idea Place.

BW 7.30: Dataflow Copy Tool: How to use the Dataflow Copy tool to speed up the development activity
Posted by Krishna Tangudu in SAP NetWeaver Business Warehouse
As you know, creating a data flow takes considerable development time, which affects the TCO aspects of a project. To
reduce the total cost of development activity, the BW 7.3 release supports copying existing BW 7.x data flows, i.e.
DataSources, data targets, transformations and also process chains.
In this blog, I would like to explain how to use the Dataflow Copy tool (a wizard-based tool). You will also see how to
copy an existing BW 7.x data flow and create new data flows, which will help you decrease the overall development time.
I present a sample navigation using this wizard-based tool.
Navigation:
1) Go to DWWB (RSA1)
2) Select Copy Data flow from the context menu of the "Info provider" or "Process Chain".
In my case, I have selected an info provider (Ex: Sales Overview).

3) This action leads to the below screen, where I have to select whether to copy the upward, downward or both the data
flows of the cube.

In this case, I choose "Downward".


4) After continuing from the above step, we have the option to collect the dependent objects such as InfoPackages (IPs) and DTPs, as below.

I choose YES to continue. This will help in creating the required IPs and DTPs automatically.
5) This leads to the Dataflow Copy wizard, where I will configure the following steps:
A) No. Of Copies
B) Source Systems
C) Data Sources
D) Info Provider
E) Transformations
F) Directly dependent Process
G) Indirectly dependent process
Now let's discuss the features present in the wizard.
We can see that the wizard is divided into two parts. The left side tells you which step is currently in progress, as well as any
errors in the steps (RED traffic light). Only if all of these steps have a GREEN traffic light can you proceed with copying the
data flow; otherwise the wizard will not allow you to do so.

We will now discuss in detail what each step in this wizard is responsible for:
A) No. of Copies:
In this step we determine the number of copies to be created. We can create a maximum of 99999999999 copies (we are unlikely
to need that many in a real environment). We need to specify the replacements for the placeholders in the technical names and
descriptions, as you can see below.

In my case, I will create only one copy. For a single copy, we need not worry about the replacement placeholders shown above.

Now I press Continue to proceed to the next step in the wizard, i.e. Source Systems.
B) Source Systems:
We cannot create a new source system or change the existing source system for the copied flow. But we can create a DataSource
from the existing source system in a new source system, i.e. we can create an identical DataSource for a flat-file source system
based on the SAP source system (or we can use an existing flat-file system).
The wizard has an additional feature: we can check "Display Technical Names" to display the technical names of the objects, as
shown below.

If we assign a wrong object, or no object, you will see a RED status as shown below.

Now I assign a flat-file source system (e.g. YFF_SALES) to our new flow as shown below.

Now the status turns GREEN and we can proceed to the next step, i.e. Data Sources.
C) Data Sources:
In this step, I use the option Create with Template. This means we create a new DataSource with the help of a template.

Now I proceed to the next step i.e. Info Provider.


D) InfoProviders:
We can use the existing InfoProviders or copy to new InfoProviders, but we cannot overwrite here. If an overwrite option were
available and we had changed anything in the original InfoProvider, those changes would be lost by the overwrite and could lead
to data loss. Hence the MERGE option is used.

E) Transformations:
Before copying transformations, we have to check whether the source and target fields are present in the target object. If those
fields are not present in the target object, that particular rule cannot be copied (as the field in the target is missing).

Now I proceed to the next Step i.e. Directly Dependent Process.


F) Directly Dependent Process:
In this step, the processes that depend on an object of the data flow are displayed - for example, the change run if we include any
master data in the flow.
In this case we have IPs and DTPs to be created (as I selected this option in the previous steps).

Now I proceed to next step i.e. Indirectly Dependent Process.


G) Indirectly Dependent Process:
In this step, the processes that depend on a directly data-flow-dependent process are displayed. For example, I have to use an
error DTP here (as I am using a DTP).

With these I have completed all the important steps and I can proceed with copying the data flow.

Now the system asks whether I want to run the data flow copy in dialog or in the background.

I choose "Dialog" here, and the logs are displayed as below.

If we want to check the logs once again, we can use transaction RSCOPY.

We can also see our newly created objects.

6) The copying is now complete. I hope you understood the benefits of this new tool.

All about SAP HANA composite post


I'm very well aware that there are already initiatives to gather all information about HANA in a single place. Therefore
consider this blog yet another HANA blog. As I did with SAP BW 7.3, I try to collect all relevant information sources - in this
case about SAP HANA - here.
BTW: What is HANA? In short, it is the in-memory solution from SAP. It stands for High-Performance Analytic Appliance and
basically it is an appliance (plus its software components) that can absorb large volumes of data (e.g. terabytes) into its
operational memory. The reason why it is so performant is that all the data is held in memory rather than stored on hard drives. It
can be set up on top of an SAP ERP or BW system (plus non-SAP databases) without the need to materialize data via
transformations, in contrast to current DWH solutions. HANA bundles several components: the in-memory computing
engine, real-time replication services, data modeling and data services. As it is delivered as an appliance, it currently depends on
a few supported hardware vendors: Fujitsu, HP and IBM.
Basically we can say that HANA is the successor of the SAP NetWeaver BW Accelerator on the way to in-memory
computing. Here HANA acts as the persistence mechanism for SAP NetWeaver BW.
HANA components:
1. The core of HANA is the SAP In-Memory Computing Engine (IMCE or ICE, also referred to as NewDB or BAE). It is an
in-memory database engine that uses row-, column- and object-based database technology to store data, designed for
parallel data processing using state-of-the-art CPU capabilities.
2. HANA Studio (a client application; an Eclipse-based editor connected to a HANA server backend) consists of:
2.1 Administration Console - administer and monitor the database
2.2 Information Modeler - data modeling
2.3 Lifecycle Management - provides HANA stack updates using the SAP Software Update Manager (SUM)
3. HANA Load Controller - resides in SAP HANA and coordinates the entire replication process: it starts the initial load of
source system data into the IMDB in SAP HANA and communicates with the Sybase Replication Server to coordinate the
start of the delta replication.
4. Host Agent - handles login authentication between the source system and the target system.
5. Sybase Replication Server - accepts data from the Replication Agent and distributes and applies this data to the target
database, using ECDA/ODBC for connectivity.
Current version:
SAP HANA 1.0 SP02 - in general availability as of 12th June 2011
Operating system (only the following is supported):
64-bit SuSE Linux Enterprise Server (SLES) 11 SP1
Upcoming version:
Nov 2011 - SAP starts the ramp-up program for customers to run BW-on-HANA, i.e. with HANA as BW's database
Read more at: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/28011

What's new in SAP NetWeaver 7.3 - A Basis perspective, Part 2

13. SAP JVM


In earlier SAP installations we needed to install a JDK or SAP JVM before starting sapinst. With NW 7.3 this is no longer
needed: SAPinst installs the required SAP JVM 6 on its own, though you can provide the path to the latest version of
SAP JVM 6 downloaded from the Service Marketplace.
Benefits of the SAP JVM: the SAP JVM provides comprehensive statistics about threads, memory consumption, garbage
collections and I/O activities. This information is visualized in the monitoring and management tools provided with the SAP
NetWeaver Application Server Java. For solving issues with the SAP JVM, several traces can be enabled on demand; they
provide additional information and insight into integral VM parts such as the class loading system, the garbage collection
algorithms and I/O.

Patching the SAP JVM on an SAP NetWeaver system is only supported using the Java Support Package Manager
(JSPM), but there is an issue with patching SAP JVM 6 through JSPM on a distributed system: JSPM tries to start the PAS and
the DB for the SAP JVM deployment, but it cannot do so because they are on different servers, and it reports an error.

In that case we can patch manually as follows. First, modify the instance profile parameters for JVM 6:
_CPARG0 = list:$(DIR_CT_SAPJVM)/sapjvm_6.lst
_CPARG1 = source:$(DIR_CT_SAPJVM)
SAPJVM_VERSION = 6.1.032
DIR_SAPJVM = $(DIR_EXECUTABLE)$(DIR_SEP)sapjvm_6
jstartup/vm/home = $(DIR_SAPJVM)

Then unpack the SAP JVM 6 archive with SAPCAR and place it in the JVM directory; the directory name varies depending on your
OS. If you are not using PI or Java, this directory structure may not be present and the upgrade is not needed in that case.

14. SUM : Software Update Manager


The Software Update Manager is a multi-purpose tool which supports various procedures, such as installing
enhancement packages or applying Support Package Stacks.
SAPehpi 7.10 is replaced by Software Update Manager:
The SAP Enhancement Package Installer (SAPehpi), which was used until now to install Enhancement Package 1 for
SAP NetWeaver PI / MOBILE / ABAP 7.1, is replaced by the Software Update Manager (SUM).
Use Software Update Manager 1.0 SP02 for the following processes:

Upgrading to SAP NetWeaver 7.3

Applying Support Packages Stacks

Installing enhancement packages


Install SUM
SAPCAR -xvf /<directory>/SUM10SP02_9-20006543.SAR -R /usr/sap/<SAPSID>

Start SUM
As user <sid>adm on the Primary Application Server:
> /usr/sap/<SID>/SUM/sdt/exe/DSUGui

Please take a look at this tool - it will remind you of the SAP version upgrade and EHP upgrade tools!

For details, please check SAP Note 1557404 - Central Note - Software Update Manager

15. CUA and IDM


SAP NetWeaver 7.3 AS Java fully supports the latest security standards, such as the SAML 2.0 protocol.
In SAP NetWeaver 7.3, CUA (Central User Administration) is now replaced by SAP NetWeaver Identity Management. But this
does not mean that NW 7.3 will not support CUA - you can still use it as in previous versions; in fact, I am also using CUA with my
NW 7.3 system and so far it has not complained.
Below is a typical IdM landscape for your reference.

16. Dynamic Profile Parameters


To make the system more dynamic and reduce downtime, from NetWeaver 7.3 some of the well-known system parameters have
been made dynamic: these parameters no longer require a system restart after changes are made.
You can find a list of those parameters at this link:
List of Newly added/converted Dynamic parameter in NetWeaver 7.3

17. Dynamic work processes


This feature was probably introduced with NetWeaver 7.0 EhP1, but it is still not frequently used, and since NetWeaver
7.3 strongly favours and promotes the concept of dynamic behaviour, I wanted to make it a separate point. Here we can
increase the total number of work processes without restarting the SAP system, which is different from what we do through RZ04.
There is no point in my discussing it here when SAP has itself described it beautifully; please go through the link:

http://help.sap.com/saphelp_nwpi71/helpdata/en/46/c24a5fb8db0e5be10000000a1553f7/content.htm

18. ABAP Soft Shutdown


Similar to the different shutdown options available for an Oracle database, from NW 7.02 onward (including NW 7.3) ABAP has
gained the additional option of a soft shutdown. Various options are possible with the help of the following parameters.

rdisp/shutdown/disable_gui_login - This parameter will suppress GUI logons during the shutdown. This applies
irrespective of user roles or authorizations

rdisp/shutdown/gui_auto_logout - Defines the period for which dialog users can be inactive during the kernel
shutdown before they are automatically logged off. If a value greater than 0 is used, the maximum wait time is
calculated as the minimum of the parameters rdisp/gui_auto_logout and rdisp/shutdown/gui_auto_logout.

rdisp/shutdown/idle_wp_timeout - How long the kernel shutdown waits for all work processes to be in the status
"waiting".

rdisp/shutdown/message_frequency - How frequently dialog users are requested to log off during the server
shutdown.

rdisp/shutdown/j2ee_timeout - How long the kernel shutdown waits for the AS Java (JEE Engine) to shutdown.

rdisp/shutdown/load_balance_wait_time - How long the kernel shutdown waits for the server to be deleted from
the load balance information. During this time, all requests can continue to be processed.

rdisp/shutdown/abap_trigger_timeout - Defines how long the kernel shutdown waits for the shutdown trigger to be
read by the Auto ABAP.
The soft shutdown option is available from SAP transaction SM51 and with the help of the SAP MMC.

SAP MMC: from a web browser -> http://<hostname>:5nn13

Right-click on the instance that needs to be shut down and click Shutdown or Restart.

19. SSO2 Wizard is more powerful


From NetWeaver 7.3 onward there is no need to use STRUSTSSO2 and the NWA certificate link, unless you wish to use them to
import or export certificates.

Please go through this blog for details


Bye bye STRUSTSSO2: New Central Certificate Administration NW7.3

20. NetWeaver 7.3 on Cloud


With the release of NetWeaver 7.3, SAP is now focusing more on future cloud integration, mobility and in-memory computing.
As of now, these NW 7.3 hubs are available on the cloud:

SAP Enterprise Portal 7.3

SAP BW NetWeaver 7.3

SAP PI NetWeaver 7.3

HANA
For reference please visit below links
http://www.sap.com/corporate-en/press.epx?pressid=15178
http://www.sapcloudcomputing.com/technology/architecture.html
http://www.wftcloud.com/

With these 20 points I would like to wind up this blog, but I will appreciate it if readers suggest more points that they know of -
though we should stick to the Basis perspective.

Hope this will be of help to some of us.

----------------Adding this point as suggested by Hemanth Kumar, Jul 10, 2012----------------
21. From NetWeaver 7.1, SAP introduced a new tool in place of JCMON: JSMON.

It can be used just like jcmon:

jsmon pf=<instance profile name>

You can type help to get the list of available options; for example, as in the screenshot below, you can type instance, process
or display and see the result.


Major Lessons Learnt from BW 7.30 Upgrade


Dear SAP BW-ers,
This is my first blog post, and I would like to share with you my journey of upgrading from BW 7.01 SP07 to BW 7.30 SP05.
Whatever I've written below is purely my own opinion and not my employer's.
It has been a painful yet rewarding project, since it will enable BW to run on HANA and to integrate much better with the other
BusinessObjects tools.
Data Modelling Issues
There is a new data validation check procedure in BW 7.30; some of our daily batch jobs failed because of this new behavior. We
had to remove the conversion routine on 0TCTTIMSTMP and write an ABAP routine to do the conversion during data loading. The
same thing happened for 0WBS_ELEMT, where we also wrote a routine to fix it.
An InfoSource could not be activated after the upgrade: run program RS_TRANSTRU_ACTIVATE_ALL in
debugging mode and set the value of the parameter "v_without" to "x".
A 3.x DB Connect DataSource stopped working after the upgrade and had to be regenerated, because it was corrupted after the
upgrade go-live.
BWA Issues
If you are running BWA 7.20 lower than revision 22, set the query properties back to Individual access instead of
Standard/Cluster access. You can also do a mass maintenance of all queries on an InfoProvider in transaction RSRT. If you don't
do this you will have major performance problems; for example, a query that used to take 9 seconds before the upgrade came back
in 800 seconds with cluster access enabled.
Reporting Issues
Error #2032 due to dashboard crosstab data binding: we experienced this with most queries that contain 0CALDAY. You have to
remove 0CALDAY and then add it back to the query so that the BICS connection from the query to the dashboard is refreshed.
UOM conversion issues after the upgrade for some queries: implement SAP Note 1666937 to solve the issue.
Cell-defined values were not affected by the scaling factor in BW 7.01, but in BW 7.30 they are. We had to make a lot of
adjustments in a few queries because of this.
A condition on a hidden key figure is no longer supported in BW 7.30; again, some queries had to be adjusted because of this.
Dashboard Design 4.0 could not be connected to the BW 7.30 system until we upgraded the SAP GUI to version 7.20 SP10.
This is the list of major issues that we have encountered so far, a week after go-live; I hope it will help your journey. Personally, I
would say that to run the upgrade well, we need a copy of production taken right before the upgrade, we should do the heavy
testing a few times, and we should involve the business while doing so. You can then expect only a few minor issues during the go-live.
Regards,
Erdo Dwiputra (original Contributor)
