SAP HANA and In-Memory Computing: SAP HANA - SLT suggestions (brainstorming blog)
When you are asked to provision a table, the question that usually follows is: how long will it take? Currently there is no way to predict this, especially when you are doing the first replication on new hardware. You might have a very rough estimate based on the size of the tables, but this can be very inaccurate.
Again, the solution can be relatively simple. All that is required is that SLT collects various statistics which (if allowed by the customer) can then be sent to SAP for analysis.
The following information should be collected:
- hardware configuration where SLT is running; this can then be used to calculate the first variable, representing the power of the machine (HW_POWER)
- table name (and corresponding structure); this can then be used to estimate the complexity of the table, or to directly assign a complexity to well-known SAP tables (TABLE_COMPLEXITY)
- number of records in the table and size of the table; this represents the size factor of the replication (TABLE_SIZE)
- replication duration, i.e. how much time the initial load took (REPLICATION_TIME)
These values can then be plugged into the following formula to find suitable generic variables:
REPLICATION_TIME = TABLE_SIZE * TABLE_COMPLEXITY / HW_POWER
Of course, historical values collected by SLT can then be reused in case the table needs to be provisioned again. The Data Provisioning screen in SAP HANA studio should show details about the table or tables selected for provisioning, including a time estimate.
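The estimation idea above can be sketched in a few lines. This is purely illustrative (SLT itself is ABAP-based); the function names, units, and the historical samples are hypothetical, and HW_POWER is derived by rearranging the same formula over past loads:

```python
# Illustrative sketch of the proposed estimation formula (not an actual SLT API).
# All names, units (GB, hours) and sample values below are hypothetical.

def estimate_replication_time(table_size_gb: float,
                              table_complexity: float,
                              hw_power: float) -> float:
    """REPLICATION_TIME = TABLE_SIZE * TABLE_COMPLEXITY / HW_POWER"""
    return table_size_gb * table_complexity / hw_power

def calibrate_hw_power(samples):
    """Derive HW_POWER from historical loads by rearranging the formula:
    HW_POWER = TABLE_SIZE * TABLE_COMPLEXITY / REPLICATION_TIME."""
    estimates = [size * complexity / duration
                 for size, complexity, duration in samples]
    return sum(estimates) / len(estimates)

# Hypothetical historical samples: (size_gb, complexity, hours)
history = [(10.0, 1.2, 0.5), (50.0, 2.0, 2.4)]
hw_power = calibrate_hw_power(history)
print(round(estimate_replication_time(100.0, 1.5, hw_power), 2))
```

Each completed load refines HW_POWER, so the estimates shown in the Data Provisioning screen would improve over time.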
SLT system
1.) Consistency check and Clean-up functions
I really love SLT replication; it is my favourite way of replicating into SAP HANA. However, I must say that things are not working as they should. Although the replication principle is very simple, the implementation is so abstract that there is huge space for errors. And errors happen more often than can be considered normal.
I have no constructive ideas for preventing errors. However, I do have some ideas for error troubleshooting.
A definitely useful function would be the possibility to run a consistency check for given objects. It has happened to me multiple times that the status in SAP HANA (table RS_STATUS, fields ACTION and STATUS) was different from the status in SLT (table IUUC_RS_STATUS, fields ACTION and STATUS). This error is quite obvious, yet there is no way to fix it without running an update query at database level in SLT, SAP HANA, or both systems.
A similar problem can be observed with the RS_ORDER tables: sometimes they also contain multiple "last" entries for the same replicated table. It can also happen that when a table is deprovisioned it is removed from the table list in transaction IUUC_SYNC_MON but still exists in the Mass Transfer definition, and there is no way to get rid of it.
A fantastic function would be a consistency check where all these objects would be validated against each other and all inconsistencies removed. In case of an unclear state, the user could be asked for a decision. Orphaned entries should also be automatically identified and removed during SLT start to keep the system clean and tidy.
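The core of such a consistency check is a simple two-sided comparison. The sketch below is hypothetical Python, not an SLT transaction: the dicts stand in for SELECTs on RS_STATUS (SAP HANA side) and IUUC_RS_STATUS (SLT side), and the sample table names and statuses are invented:

```python
# Illustrative sketch of the proposed consistency check (not an actual SLT feature).
# The dicts stand in for reads of RS_STATUS (SAP HANA) and IUUC_RS_STATUS (SLT);
# all sample rows below are hypothetical.

def find_status_mismatches(hana_rs_status, slt_iuuc_rs_status):
    """Compare (ACTION, STATUS) per table between SAP HANA and SLT and
    report every table where the two sides disagree or one side is orphaned."""
    issues = []
    all_tables = set(hana_rs_status) | set(slt_iuuc_rs_status)
    for table in sorted(all_tables):
        hana = hana_rs_status.get(table)
        slt = slt_iuuc_rs_status.get(table)
        if hana is None or slt is None:
            issues.append((table, "orphaned entry", hana, slt))
        elif hana != slt:
            issues.append((table, "status mismatch", hana, slt))
    return issues

hana_side = {"MARA": ("REPLICATE", "IN_PROCESS"), "VBAK": ("STOP", "DONE")}
slt_side = {"MARA": ("REPLICATE", "FAILED"), "KNA1": ("LOAD", "SCHEDULED")}
for issue in find_status_mismatches(hana_side, slt_side):
    print(issue)
```

The "orphaned entry" case covers exactly the IUUC_SYNC_MON situation described above, where one side still holds an entry the other side has already dropped.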
Another nice function would be to purge the configuration: to remove EVERYTHING from SLT regarding a specific table, as if it had never been replicated by SLT for this particular Mass Transfer. This function would remove all entries related to the given table in the given Mass Transfer, including possible inconsistencies, without impacting other tables replicated by SLT. The table could then be safely provisioned again without risking collisions with obsolete entries.
The same function should be available at the Mass Transfer level (to clean up everything in the given Mass Transfer definition) and also at the whole-SLT level (to restore the system to a post-installation state, including removal of all obsolete Mass Transfer IDs).
Of course, corresponding purge actions should also be executed in the source systems.
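The three purge scopes (table, Mass Transfer, whole SLT) form a simple hierarchy. A minimal sketch, assuming a nested structure standing in for SLT's internal configuration tables (all names and entries below are hypothetical):

```python
# Illustrative sketch of the proposed purge hierarchy (not an actual SLT function).
# The nested dict is a stand-in for SLT's internal configuration tables;
# Mass Transfer IDs, table names and statuses are hypothetical.

def purge_table(config, mt_id, table):
    """Remove every trace of one table from one Mass Transfer,
    as if it had never been replicated there."""
    config.get(mt_id, {}).pop(table, None)

def purge_mass_transfer(config, mt_id):
    """Remove an entire Mass Transfer definition, including obsolete entries."""
    config.pop(mt_id, None)

def purge_all(config):
    """Reset SLT to a post-installation state."""
    config.clear()

slt_config = {
    "010": {"MARA": {"status": "FAILED"}, "VBAK": {"status": "DONE"}},
    "020": {"KNA1": {"status": "IN_PROCESS"}},
}
purge_table(slt_config, "010", "MARA")   # MARA can now be provisioned again cleanly
print(sorted(slt_config["010"]))         # other tables in the Mass Transfer untouched
```

The key property is isolation: purging one table leaves every other table in the Mass Transfer intact, while the wider scopes cascade downward.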
- minimum, average and maximum utilization of background jobs, suggesting whether more background jobs should be allocated

All these statistics would give additional insight into the replication process, offering the possibility to understand if and how the SLT system should be adjusted.
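The background-job statistic above can be sketched as follows. This is a hypothetical illustration, not an SLT feature: the samples are invented counts of busy background jobs out of an allocated pool, and the 80% threshold is an assumed tuning heuristic:

```python
# Illustrative sketch of the proposed job-utilization statistic (not an SLT feature).
# busy_samples are hypothetical periodic counts of busy background jobs.

def job_utilization(busy_samples, allocated_jobs):
    """Return (min, avg, max) utilization as fractions of the allocated jobs."""
    ratios = [busy / allocated_jobs for busy in busy_samples]
    return min(ratios), sum(ratios) / len(ratios), max(ratios)

lo, avg, hi = job_utilization([4, 5, 5, 3, 5], allocated_jobs=5)
# Hypothetical tuning rule: pool regularly saturated and busy on average
if hi >= 1.0 and avg > 0.8:
    print("consider allocating more background jobs")
```

A sustained maximum at 100% together with a high average suggests the job pool is a bottleneck, whereas a low maximum suggests jobs could be freed for other work.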
I believe that SAP should currently focus on stabilizing the product to avoid issues rather than adding new features; however, the possibility to adjust data types should be leveraged. A very simple dialog performing code generation and registration, designed only for changing the data type of a particular table, would do the job. The justification for this need is explained in the next point.
8.) Documentation
Last but not least, SLT needs documentation. SLT is currently designed as a black box where the admin does not need to know the internal mechanics. This is fine as long as SLT works as expected. However, daily reality is different: SLT can run into problems, and then the admin is left without any guidance on how to solve the situation...
Tags: sapmentor, hana, replication, slt, ideas, sap_lt, suggestions, hde
To be honest, I am no longer in a position where I work hands-on with SLT, so I cannot comment on the current status of SLT... I am now more on the infrastructure side of SAP HANA (dealing with architecture, HA, DR, operations, monitoring, backups, etc.).
I think the best person to say whether the points above were covered, and how, is Tobias Koebler.
Tomas
Gregory Misiorek
Jan 9, 2015 2:40 PM
Hi Thomas,
Time check for January 2015, so how is the implementation going? Any luck in having those features in the
commercially available product?
Thx,
greg
Bastiaan Lascaris
Jan 10, 2014 10:35 AM
Thanks Tomas, these are good ideas. I bumped into a few issues which would have been easier to solve if some of the points in this blog had been resolved by SAP by now. Especially with purging and solving replication errors. I hope that they will tackle this soon.
Chandra Sekhar
Jul 26, 2013 2:33 PM
Good blog Tomas
SrinivasuluReddy T
Jul 3, 2013 7:15 PM
Nice Data
Lucio Menzel
May 18, 2013 3:24 AM
all good ideas, hope to see them implemented soon.
Tobias Koebler in response to Tomas Krojzl on page 10
May 14, 2013 1:40 PM
Hi,
I know some time is gone - sorry for the delay. I tried to note all important facts down in a blog. You find it on
the SLT community: http://scn.sap.com/community/replication-server/blog/2013/05/14/how-slt-is-mapping-datatypes
Best,
Tobias
Tomas Krojzl in response to Raj Kumar Salla on page 9
Apr 29, 2013 9:28 AM
Actually this is quite an interesting idea... A potential solution could be usage of the SAP HANA Studio modeling features, where you could model a data transformation that is then "saved" into SLT as ABAP code performing the designed adjustments during transformation.
Of course, this "SLT modeling" would be limited to the features provided by SLT, so no complex features would be possible unless SLT itself were extended.
The advantage would be no need to know ABAP, the ability to define the replication without leaving SAP HANA Studio, the possibility to package and export the model (reusability), etc.
The disadvantage would be the risk of de-synchronization between SAP HANA and SLT in case of backup/restore, etc.
Anyway, a very nice idea...
Michael Harding in response to Tomas Krojzl on page 10
Apr 29, 2013 3:00 AM
Tomas, great points. Like previous posts, it's a bit disappointing that we are not seeing these improvement opportunities addressed 10 months later.
One of the areas I'd also like to see some clarity on is the SLM strategy. A couple points here:
1) There really should be a unified patching strategy for what I would refer to as the HANA 'ecosystem' in a Sidecar-type implementation: HANA DB, HANA clients & shared libraries (on source systems & for developers), DMIS patches (on source systems), and the SLT system. Of course SAP's suggestion is 'apply the latest patch', but the patch levels across these components are not in sync and updates are coming seemingly weekly. Stabilization across this ecosystem is an uphill battle, so trying to keep up with the patch levels in a Production environment is difficult. Furthermore, many of the replication issues you see in the SLT space are not observed during testing because transactional volume is much lower in non-Prod systems.
2) Alignment with TDMS. The DMIS engine in source systems supports both TDMS and SLT, yet each of these replication products seems to be running independent SLM cycles; it's as if the developers of each product are not communicating. Our implementation hit a scenario where TDMS actually required a higher DMIS patch level than what HANA was supporting at the time.
Thanks,
Mike
Raj Kumar Salla
Apr 27, 2013 8:43 AM
Hi Thomas,
My point here is to have a drag-and-drop facility to perform complex transformations, instead of writing ABAP logic, as we have in the ETL tool BODS.