
LSMW migration with IDOC method and using IDOC as source - Part 1: Extract by ALE

Posted by Jürgen L in SAP ERP - Logistics Materials Management (SAP MM) on May 30, 2013
6:24:22 PM
In this blog I want to show one of my most used and preferred migration options, using IDocs and
LSMW.
The business case: data migration from an SAP legacy system. Many companies have already been using
SAP for several years. Companies buy other companies. Both run SAP. The decision is taken to consolidate
many SAP systems into just one.
Challenge: the legacy system has data for plants that were shut down, closed companies and
abandoned purchasing and sales organisations.

The good thing: your company actually owns all systems and you can customize and develop in all
those systems.
The bad thing: everybody loves their data and wants to rescue as much as possible. While in many cases where the
legacy system is not an SAP system you just get an Excel extract with maybe 20 or 30 fields, with SAP
as the legacy system you have to deal with several hundred fields for material, vendor and customer
master migration.
Writing extract programs is a huge effort, downloading data from maybe 10 or more different tables
is not much less work, and you still have to bring the data together again.
This was the basis to search for a better option, and I finally found a well-known feature that saves
us a lot of work, especially for the extract: ALE

In brief: I set up an ALE distribution in the legacy system to send e.g. vendors as IDocs to a file. And
in the target system I develop my LSMW object using this file as the source file for my data migration.
I do not explain every single LSMW step in this blog in deep detail, as I described them in my other
blogs:
LSMW Material master by BAPI method - Part 1
LSMW Material master by BAPI method - Part 2
So I am just focusing here on the differences to these other blogs and the specific setup of the ALE
distribution and the IDOC import method.

The homepage of LSMW can be found here in SCN at http://scn.sap.com/docs/DOC-26158


In that document you can find the links to the general documentation in help.sap.com too: SAP
Library - Legacy System Migration Workbench

Set up the ALE distribution in the legacy system:


1) Define Logical System
IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Basic Settings > Logical systems > Define Logical System:
Name it ZFILESY and enter "Filesystem for IDocs" as the description.

2) Maintain ALE Port Definition


IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Modelling and Implementing Business Processes > Configure Predefined ALE Business Processes >
Logistics > Logistics <-> External Systems > External Transportation Planning Systems > Maintain
ALE Port Definition (Transaction WE21)

The port needs to be defined in the File folder.


Define the port: name it ZFILESY and enter a description, e.g. "IDOC to FILE in /DC_EX/M/".

The IDoc record types for SAP Release 4.x need to be selected if you are on a system with release 4
or higher.
The Unicode format field has to be activated if you are in a Unicode system.
Enter the physical directory /DC_EX/M/. (I have to use this certain directory due to a policy.)
Select a function module: I usually use EDI_PATH_CREATE_CLIENT_DOCNUM; see F4 for
alternatives.
In order to prevent unprocessed files being overwritten, you use function modules which generate file
or directory names dynamically, that is to say, at runtime.
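For illustration: EDI_PATH_CREATE_CLIENT_DOCNUM derives the file name from the client and the
IDoc number, so in client 800 you would get names built from 800 plus the IDoc number (the exact
pattern depends on the function module you pick; check F4 or the function module documentation).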
Carry out an Access Test, this helps to identify wrongly entered directories and uncovers authorization
issues.

3) Define Cross-System Company Codes


IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Modelling and Implementing Business Processes > Global Organizational Units > Cross-System
Company Codes (Transaction OB72, SALE, OBB5)
a) Cross-System Company Codes
b) Assign Cross-System Company Code to Chart of Accounts
c) Assign Company Code to Cross-System Company Code
Cross-system company codes are used in the distribution in financial accounting. There is exactly one
central system for each cross-system company code in the distributed environment. One company
code has to be assigned to this cross-system company code on each system involved in the
distribution.
When sending an IDoc with company code-dependent data, the company code is replaced with the
cross-system company code in all company code fields.
Example (cross-system company codes as set up for the migration):

Company Code    Cross-System Company Code
8740            8740
4982            4982
8787            8787
8788            8788

4) Maintain Distribution Model


IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Modelling and Implementing Business Processes > Maintain Distribution Model and Distribute Views
(Transaction BD64)
Here we need a distribution model.
Structure: model view (call it ZFILESY)
Sending system (your choice)
Receiving system (ZFILESY)
Note: there is a valid-from / valid-to date; double-click the distribution icon in front of the model to see it!
Start by clicking the change icon, because this transaction starts in display mode.
Click Create model view, enter the technical name ZFILESY and a short text of your choice.

Then continue with Add message type: enter ZFILESY as model view, your system as sender,
ZFILESY as receiver and CREMAS as message type for vendor master distribution. Make use of F4,
this helps to avoid typing mistakes.

Actually, for customers and vendors you can generate the distribution model; you do not need to add
each message type manually in BD64.
Just go to
IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Modelling and Implementing Business Processes > Configure Predefined ALE Business Processes >
Logistics > Master Data Distribution > Proposal for distribution model: Customer and vendor masters
(Transaction WYL2)

5) Maintain ALE Partner Profiles


IMG > SAP NetWeaver > Application Server > IDoc Interface / Application Link Enabling (ALE) >
Modelling and Implementing Business Processes > Configure Predefined ALE Business Processes >
Logistics > Logistics <-> External Systems > External Transportation Planning Systems > Maintain
ALE Partner Profiles (Transaction WE20)
Click the create button, then enter this data in the screen:
Partner profile type/Partner type: LS (logical system)
Partner profile/Partner number: ZFILESY
Details at post processing agent:
Type: US (user)
Agent: your user ID
Lang: EN
Now click the icon for create outbound parameter below the section for outbound parameters.
In the next screen enter the message type, e.g. CREMAS for vendors,
enter receiver port ZFILESY and basic type CREMAS03.
Very important: set the radio button to Collect IDocs and do not start subsystem, and flag the box
Cancel Processing After Syntax Error.
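The Collect IDocs setting means the outbound IDocs are first written to the database with status 30
(ready for dispatch) instead of being passed to the port immediately; only when you dispatch them
later (see BD87 below) are they written to the file, and that is what allows you to collect everything
into one file.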

Send data by ALE.


BD14 Send vendors

Enter the message type CREMAS and your target system, the logical system ZFILESY.
Enter one, several or no vendor account numbers for distribution. No entry means you send ALL
vendors; this can take some time, as SAP has to read the vendor master and create the IDocs.
After execution you get a pop-up telling you how many IDocs were created for your message type.

After you click continue you get another pop-up telling you how many communication IDocs were
created:

The IDocs are still not sent at this time; continue with transaction BD87.

If this second pop-up shows error B1039 (0 communication IDocs generated), then some setting is
wrong:
Check whether the message type exists in a distribution model (BD64).
Check whether the distribution model is active for the current date (BD64).
Make sure the message type is added to the partner profiles (WE20).

BD87 Status Monitor for ALE Messages

In the selection screen of BD87 just enter your partner system and execute to find all your IDOCs

SAP lists per status (e.g. yellow - ready for dispatch; green - successfully dispatched) each message
type with its number of IDOCs

Place the cursor on a message type below the yellow status.


If you just have a few IDocs, then you can continue and click the Process button.
If you have more than 5000 IDocs and you want them all in one file (which makes the most sense,
especially if you use partner functions for vendors; in that case you should migrate all vendors in one shot),
then choose from menu Edit > Restrict and process.
Don't be confused by the name "restrict", as you actually want the contrary.

SAP has now taken all your IDocs into the multiple selection. And the best part is that this multiple
selection can take hundreds of thousands of entries, while the multiple selection of a normal selection
screen may only take about 3000 vendor numbers before the application dumps.
You only need to extend the value for the maximum number of IDocs beyond the number of IDocs
you have. This ensures that all IDocs go into one file. Otherwise SAP would create several
files.
Execute.
You get a pop-up telling you how many IDocs got selected, then choose from menu Program >
Execute in Background,
as it may take several hours to distribute those IDocs.
After the background job has finished, you can find the file with the IDOCs via AL11 transaction in
the directory that you entered in your Port definition
Further, it has the name that was automatically created via the function module entered in the port
definition.

The detail looks unstructured compared to normal flat files created from an Excel Spreadsheet:
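For orientation: the IDoc file is a fixed-length format in which every line starts with the record or
segment name, followed by administrative fields (client, IDoc number, segment number, ...) and then
the raw segment data. Schematically (shortened and simplified, not an exact dump):

EDI_DC40   800 0000000000004711 ... CREMAS03 ...    control record (one per IDoc)
E2LFA1M... 800 0000000000004711 ... vendor data     data records (one per segment)

The exact field positions are defined by the record types selected in the port definition.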

In the second part of this blog I will describe how to read this file as source file in LSMW.
This second part can be found here:
LSMW migration with IDOC method and using IDOC as source - Part 2: Import
LSMW Material master by BAPI method - Part 1
Posted by Jürgen L in SAP ERP - Logistics Materials Management (SAP MM) on Dec 27, 2012
8:40:05 PM
I would like to show how the BAPI method is used to create and change material master.

As an example I use a real business case:


existing material masters need to be updated with new material descriptions and sales text
not all materials have a sales text view
Text needs to be maintained in German and English; the English text is identical to the German text
Text is provided in an Excel spreadsheet

Annotation: Our material numbers are set as Lexicographical, no leading zeros


We have to use a certain path to store the .read and .conv files

The homepage of LSMW can be found here in SCN at http://scn.sap.com/docs/DOC-26158


In that document you can find the links to the general documentation in help.sap.com too:
http://help.sap.com/saphelp_nw70/helpdata/en/4d/455035082e2271e10000009b38f889/frameset.htm

LSMW is a workbench which makes use of several tools (batch/direct input, BAPI, IDOC and batch
input recording) which existed long before LSMW was created, and which still exist and can be used
independently of LSMW. LSMW is basically a tool to map legacy data to SAP structures,
exchange field values, and generate a program that supplies those import tools with the data.

Prepare your LSMW

Define your project, sub-project and object


Project, sub-project and object are a freely definable hierarchy.
The project is usually self-explanatory: it is the name of your project. Keep in mind that all mapping
rules are shared among all subprojects and objects within a project. This is a big advantage; however,
if many persons work in the same LSMW project you can face some disadvantages as well. E.g. if
somebody is in development and has a syntax error, then others in the same project are affected by
this error too.
The subproject is a lower part of your project. It could, for example, represent a module or subdivide the project by area of responsibility.
The object is the lowermost part where you actually define the migration for a certain business object
or a part of it.
My area is global master data, so I usually choose Master data as subproject and Material, vendor or
customer as object.
In this example I am going to use SCN as project, material as subproject and texts as object.

SAP uses the project, subproject and object names to build the file names for the read and conv files. Each
field is 15 characters long; the maximum length of the conversion file name is 45 characters. If you
have to work with a certain path, then this path will occupy a part of the conv file name. Because of
that, I recommend making the names for project, subproject and object shorter than the maximum of
15 characters.
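A quick worked example, assuming a mandatory path like the /DC_EX/M/ from Part 1 (9 characters):
that leaves only 36 of the 45 characters for the generated file name itself, while three names of 15
characters each plus separators and the extension would already exceed it.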

If you start with a new project, then you have to click the new document icon, you get a pop-up to
enter project name and description.
You will have to do the same for Subproject and Object.

Next you get to the overview of all steps in LSMW. At this moment SAP will show more steps than
you finally have to go through. Which steps you have to perform depends on the import method you
choose in step 1 Maintain object attributes. I highlighted the steps which vary with the import
method.

Above the step overview you can see various buttons.


The button "Double click = " can be used to set your preferred way to work, I prefer double click to
go into change mode
With "numbering on " you get those numbers in front of the steps. This can make communication
easier, but only if both are working with the same settings, because the numbers are not fix to the text
next to it, they are just number to the positions in the overview
The button "User menu" allows you to add hidden steps to the LSMW steps (and causes renumbering
to the shown entries)

As you can see in the pop-up there are some interesting options like "Display Read Program" and
"Display Conversion Program" which can be enabled if you need to debug in case the programs do
not work like you expect.
The steps "Generate Read Program" and "Generate Conversion Program" are not really needed,
because this step is automatically carried out with the next visible step below them.

Often neglected is the option to check the converted data against customizing, which means you could
find errors (e.g. missing customizing transports, wrong mapping rules) without the need to post your
data.

Prepare your Field mapping and conversion rules


After executing step 5 Field mapping and conversion rules you will see the mapping based on the
SAP default settings. I have rarely seen professionals who can work with the presets given by SAP.
You can change this setting from menu Extras > Layout. You then get a pop-up to define the layout.
I prefer and recommend setting each field in the layout to active.

Here are some examples how the layout looks like depending on the settings made.

with technical fields active:

same section technical fields inactive:

With initial fields (fields that do not have a mapping)

And the same section without initial fields:

With this setting you will not even know that there are more fields in the import structure.

The next screenshot shows the same section with code inactive (compare it with the screenshot above):

You do not see what values the constants have, and you cannot see the ABAP coding for the Move
statement.

The next screenshot shows the appearance with inactive processing time (BEGIN_OF_RECORD;
END_OF_RECORD; compare with screenshot above)

The following screenshot is made with "Global Data Definitions" set to inactive (please compare with
the screenshot "technical fields active" further above):

The very first entry __GLOBAL_DATA__ is gone now.

Prepare for IDOC Import method

If you use the IDOC or BAPI import method, then you need to define ports and partner profiles like
you have to do when you receive IDocs from an external system.
In the initial screen of LSMW, where you entered the project name, choose from menu Settings >
IDOC Inbound processing.

In the next screen you have to enter the file port name, the partner type US (user) and the partner
number, and then you click Activate IDoc Inbound Processing.
This has to be done in every system where you use this LSMW project.

File port and partner number are freely definable names, and in many companies they are set up by the
Basis team. So you would only need to enter those names and activate IDoc inbound processing.

However, if you have to do it yourself, then you need to make use of the buttons next to those fields.

Maintain ports is actually nothing else than transaction WE21


In my example I had used the name DATEI (which is nothing else than the German word for FILE)
After clicking Maintain ports you get the screen shown below. Put the cursor onto the file folder on
the left, then click the create icon
Enter a description, set the radio button for the IDoc record type (most probably the 4.x type
nowadays), and make the setting if you use Unicode format. Then set the radio button to physical directory,
enter a directory and a function module in tab Outbound file, then do an Access test to make sure you
have access to this directory. These are the minimum settings needed for a port used in LSMW.

The button Maintain partner numbers is actually transaction WE20


In my example I named it LSMW

After clicking this button you are taken to the partner profiles. Click the create button. Enter the partner
number (here LSMW), partner type US, type US for user, Agent: your user ID, and your language.

Then define the Inbound parameters, Click the "insert line" icon below the table.

At this place you add the IDOC message type that will be used as your import method. LSMW can
create this entry itself if you have activated the IDOC inbound processing before you defined the first
step of your LSMW. But there is no harm if you add it manually, which usually has to be done in Test
and production systems as you do not carry out step 1 anymore.
For this example we use message type MATMAS_MASS_BAPI (which is basically the same BAPI
that is used in MM17 Material master mass maintenance)
Enter the process code with help of F4 (in this case it is BAPI, but it may vary by message type).
Activate "Cancel Processing after Syntax error" and set the radio button to Trigger by background
program, otherwise your IDocs get posted immediately while being created.
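With Trigger by background program, the inbound IDocs wait in status 64 (ready to be transferred to
the application); in LSMW they are then posted by the step Start IDoc Processing, which to my
knowledge runs the standard posting report RBDAPP01 under the hood.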

LSMW Material master by BAPI method - Part 2


Posted by Jürgen L in SAP ERP - Logistics Materials Management (SAP MM) on Dec 27, 2012
11:46:31 PM
This blog is the continuation of LSMW Material master by BAPI method - Part 1

I would like to show how the BAPI method is used to create and change material master.

As an example I use a real business case:


existing material masters need to be updated with new material descriptions and sales text
not all materials have a sales text view
Text needs to be maintained in German and English; the English text is identical to the German text
Text is provided in an Excel spreadsheet

Annotation: Our material numbers are set as Lexicographical, no leading zeros


We have to use a certain path to store the .read and .conv files

The homepage of LSMW can be found here in SCN at http://scn.sap.com/docs/DOC-26158


In that document you can find the links to the general documentation in help.sap.com too:
http://help.sap.com/saphelp_nw70/helpdata/en/4d/455035082e2271e10000009b38f889/frameset.htm

Step 1 - Maintain object attributes

Here you define whether your LSMW object is for a one-time data transfer or whether you want to use it
permanently as a periodic transfer.
Migrations are usually one-time data transfers, even if you run them many times until you are satisfied
with the result.
The periodic transfer gives you an option to create a program ready to be used by end users.

Please make use of the F4 search help to get the parameters for the import objects.
In this example we are using standard material (industry) as BAPI import method: business object
BUS1001006, method SAVEREPLICA and basic type MATMAS_MASS_BAPI03.
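For reference: object type BUS1001006 with method SAVEREPLICA corresponds to the function
module BAPI_MATERIAL_SAVEREPLICA, which you can display in SE37 if you want to study the
underlying interface and its structures.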

Step 2 - Maintain source structure

I just have one Excel file with the source data and each line looks the same, so there is no hierarchy and
no different structures per line.

Hence the source structure to be defined is a simple flat structure as well.

Click the create icon, enter a name for the source structure and a description.
Don't create a too complex and long name, as you may need to type it in the mapping rules if you
have to add some ABAP coding later.

Step 3 - Maintain source fields

In this step you have to enter all the fields that are contained in your source file, per structure. I
recommend using the same sequence as in the source file.

There are various options to maintain the source fields, as you can see in the pop-up after clicking the
"Copy" button:
you can upload the fields from a text file, copy them from another LSMW object, copy them from the
data dictionary (I will explain this in more detail in another blog), or copy them from the
first line of your data file (but as you do not have the field lengths in the first line, you need to complete
them manually anyway).

For small files like in this business case I usually do it manually. Move the cursor onto the
structure line, then click the table icon (right of the copy icon). This is more convenient than
defining field by field via the create button.
I usually use the field names, types and lengths from the data dictionary in SAP, but if you want to identify
the values from your source based on field names (instead of position in the field sequence), then you
need to make sure that the field names are identical to your source file.

Step 4 - Maintain structure relations

In this step you can see that a BAPI or IDOC structure is much more complex than a structure that
you get from a recording.
But don't panic, you do not need to care about every part, just about the parts that are needed for your
migration case.
You move the cursor onto the structure that you need and click the create relationship button. As your
source definition has just one structure, it is assigned automatically. In case of a multi-structure source
you would need to select the source structure that needs to be assigned to the corresponding target
structure.
We have fields that are used in many target structures in just one source structure, so we assign this
source structure to all needed target structures.
In our example it is the header segment (it always needs to be assigned), the header segment with
control information, the material description and the long text.

Step 5 - Maintain field mapping and conversion rules

After you assigned your source structure to the target structures in step 4, in this step you assign your
source fields to the target fields, and you define how the values are determined: whether they are just moved
from source field to target field, translated via rules, or supplied as fixed values for
mandatory or otherwise necessary fields that are not in your source file.

My first choice is Auto field mapping. You get it from menu Extras > Auto-field Mapping

You will then get a screen to define the rules for this auto field mapping:
Here you control whether you do this for all fields or just for empty fields.
I usually choose the 100% match, as there are too many fields with similar names that could get a
wrong assignment if you use a lower percentage.
If you have not yet defined reusable rules, then you can only apply the MOVE rule to the fields. You
can change it later anyway.
And as I want to see what SAP does, I choose "with confirmation"; SAP will then show me each
assignment and I only click OK to continue.
With my example the auto field mapping does not work as my source field names are different from
the BAPI field names.
This happens often with BAPIs, as they do not use the table field names. With IDOC import method
this auto field mapping is a big success if your source field names are defined like the SAP table
fields.

In my example we need to do the field mapping manually.


You have to tell the BAPI which views you want to maintain. This is done in the header structure, where
you also find the material number.
Move the cursor onto the material field, then click the create source field icon and select the material
number from your source field definition.
Keep in mind that my material number is lexicographic, which means it does not have leading zeros. If your
material number is stored with leading zeros, then you either need those leading zeros in the source file too,
or you need coding to add them, otherwise the BAPI is not able to find your material for
update.
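If you need such coding, a minimal sketch could use the standard ALPHA conversion exit (the target
field name E1BPE1MATHEAD-MATERIAL is an assumption here; check the header segment of your
own object, and MYSRCFILE stands for your source structure as used later in this blog):

* add leading zeros so the BAPI can find the material for update
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = MYSRCFILE-MATNR
  IMPORTING
    output = E1BPE1MATHEAD-MATERIAL.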

For my example we need Basic data view for the material description and Sales view for the sales
text.
As those fields are not among the fields in the source file, we have to assign the value as a constant.
Move the cursor onto the field, then click the constant button. You get a pop-up to enter the value,
which is just an X.

Use F4 to assign the value; this helps to avoid typing errors. SAP sometimes behaves strangely if
characters expected in upper case are entered in lower case.

The sales view is only needed if there is sales text in the source file. Hence we cannot assign a pure
constant; we need some coding.
Nevertheless, assign the constant first, so you need to code less yourself, then double-click the coding
to get to the code editor.
Now you only need a small piece of ABAP coding to check whether the sales organisation field in the source
is empty.
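Such a check could look roughly like this (a sketch; the header field name SALES_VIEW and the
source field VKORG are assumptions based on this example):

* request the sales view only if the source line carries a sales organisation
IF MYSRCFILE-VKORG IS INITIAL.
  CLEAR E1BPE1MATHEAD-SALES_VIEW.
ELSE.
  E1BPE1MATHEAD-SALES_VIEW = 'X'.
ENDIF.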

After this is done we move forward to the segment E1BPE1MAKT for the material description.
Here you find the material number field as well, so you assign it to your source field (think about
leading zeros!).
Language is not among the fields in the source file, so you assign the language as a constant.
Remember we need the text in German and English.
However, we can only assign one language here. So you could think about creating a second LSMW
object to load the other language, or you make it flexible with some coding. At this point we
assign just the first language. We do it for both language fields, LANGU and LANGU_ISO.
LANGU is the 1-character field, LANGU_ISO is the 2-character field for the language (see
table T002 for reference).
Just assign the language via F4 and SAP will put in the right code for you (even if you enter e.g. EN in the
pop-up, SAP will only enter E in the LANGU field).
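The generated coding for these two constants then roughly corresponds to (German values, per table T002):

E1BPE1MAKT-LANGU = 'D'.       " 1-character SAP language key
E1BPE1MAKT-LANGU_ISO = 'DE'.  " 2-character ISO code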
And the last field in this section is the material description, which needs to be assigned to your source
field.

Now we need to take care of the second language, whose text is identical to the first language,
as our product names are equal in all languages.
We do this with a small piece of coding in the section __END_OF_RECORD__.
By default you see the code transfer_record. This statement submits the entries made in this section.
We need a second record for the material description in English. We can force this in the
__END_OF_RECORD__ processing time.
Double-click the coding line here to get to the code editor and enter the code behind the
transfer_record statement.
You only need to move the new values for the 2 language fields; all other field values are still in
memory.
So behind the transfer_record statement you add:

E1BPE1MAKT-LANGU = 'E'.
E1BPE1MAKT-LANGU_ISO = 'EN'.
transfer_record.

This way you transfer 2 records for the MAKT table with different languages.

The same has to be done for the sales text now, but only if we have sales text, which means a few
lines more coding.
But let us start from the beginning in the sales text structure:
All long text is stored in the tables STXH and STXL. The key of these tables is the text object (for material
master sales text: MVKE), the text ID (0001), the language, and, as the text name, a combination of
material number, sales organisation and distribution channel.
You can find this key if you maintain one material manually and click the editor icon in the sales text
view. In the editor choose from menu Goto > Header

In this example you assign MVKE as a constant to the Object field, 0001 as a constant to the Text-ID
field, and again the language to the 2 language fields.
For the combination of the text name field we need to do it with coding.

The coding is:

IF MYSRCFILE-VKORG = ' '.
  SKIP_RECORD.
ELSE.
  CONCATENATE E1BPE1MLTX-MATNR MYSRCFILE-VKORG MYSRCFILE-VTWEG
    INTO E1BPE1MLTX-TEXT_NAME RESPECTING BLANKS.
ENDIF.

Which means: if the sales organisation field is empty, then skip this structure. If it is not empty,
then concatenate the material number (think about the leading zeros) with sales organisation and
distribution channel and bring this value into the target field.

Up to here we only have the first of the 2 sales text lines. So we need to take care of the second line,
and as we need this sales text in the second language as well, we solve this with coding in the
__END_OF_RECORD__ section, similar to what we did for the material description.

The first transfer_record statement creates a record with the first sales text line in the German
language.
Then we check whether there is a second line of sales text, and only if there is one do we move the
content from the source field to the target field and transfer this record. The language, the object,
text ID and text name are still in memory.

And then we repeat the same for the English language. We first move the English language to the 2
language fields, then we move the 1st line of sales text and transfer the record. And then we do it for
the second sales text line - if present.
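Put together, the __END_OF_RECORD__ coding could look roughly like this (a sketch: the source
fields LTEXT1/LTEXT2 for the two sales text lines and the target field TEXT_LINE are assumptions,
as the original screenshot is not reproduced here):

transfer_record.                            " 1st sales text line, German
IF MYSRCFILE-LTEXT2 <> ' '.
  E1BPE1MLTX-TEXT_LINE = MYSRCFILE-LTEXT2.  " 2nd line, German
  transfer_record.
ENDIF.
E1BPE1MLTX-LANGU = 'E'.                     " switch both language fields to English
E1BPE1MLTX-LANGU_ISO = 'EN'.
E1BPE1MLTX-TEXT_LINE = MYSRCFILE-LTEXT1.    " 1st line, English
transfer_record.
IF MYSRCFILE-LTEXT2 <> ' '.
  E1BPE1MLTX-TEXT_LINE = MYSRCFILE-LTEXT2.  " 2nd line, English
  transfer_record.
ENDIF.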

Step 6 - Maintain fixed values, translations, user-defined routines


Not necessary for this business case

Step 7 - Specify files

In this step you have to define a minimum of 3 files: your source file, the read file and the conv file.
When SAP uploads and reads your source file, it writes the content into the READ file. The
name is automatically created by SAP using the project, subproject and object names.
When you run the conversion, SAP reads this READ file and writes the converted data into the
CONV file. This file name is created automatically by SAP as well. Here we have a restriction of a
maximum length of 45 characters. If you have to write this data into a certain directory, then you may
have to shorten the proposed file name.
Your source file is completely unknown to SAP, so you have to describe it. This means you have to
tell where it is located, whether on your PC drive or in the SAP file system. In this case the
source file is on the PC's local drive.
As SAP cannot read directly from Excel, you need to save the Excel file as a comma-separated file
(CSV) or as a Unicode text file (TXT).
Keep in mind that your file has to be closed when you execute the read step.

Please use F4 to find the file on your PC; SAP then creates the content of the File field itself.
Enter a name of your choice
Indicate whether this file has one source structure or multiple source structures. In this case there is
only one source structure (remember the definition in step 2)
In case you saved your Excel file as Unicode text, you have to set the radio button to Tabulator.
As we have field names in the first line of our Excel source, we need to select the box "Field
names at Start of File". This makes SAP start processing the source file at the second line.
If the sequence of fields in the source is identical to the definition made in step 3, then you should
mark the box "Field Order Matches Source Structure Definition". Without this indicator SAP would
try to find the fields based on their names.
Further set the radio button to "Record End Marker" and to ASCII as we are using a text file as
source.

Step 8 - Assign files

Not much to do here: as there is only one source file, SAP proposes it automatically, and you only need to
click Save.

Step 9 - Read Data

In this step you read the data from your source file. SAP automatically generates the Read-Program
based on the settings made in the earlier steps.
You can process it for all your data or only for a part of it.
In case you defined amount and date fields in your source (step 3) then you have here the option for
automatic conversion. Without having amount and date fields you can ignore the defaults.

After execution you get the numbers of read records and transactions. As we only have a flat structure,
the numbers of records and transactions are equal.
However, you should verify against your source file whether this number is correct. It is the first indication
when something may be wrong.
I often had cases where people had deleted content from the Excel file - cell content, not rows. For
Excel those rows with erased content are still active, and they get saved as
empty lines at the end of your text file. So SAP may read many more lines than actually have data. If
you have a good sense for your data, then the number of read records will already tell you something.

Step 10 - Display Read Data

A very important step is to display the read data, because you can check whether the values were moved into
the correct fields, fit into the fields, and do not overlap with other fields (in which case you
have to redefine your field lengths in step 3).
You cannot see this from the overview; you have to click a line to get into the detail.

However, the overview screen is important too. Here you can see at one glance whether the records look
equal or whether you see a kind of wave, which indicates that something with the field lengths is wrong.
And if you use the icons to go to the end, then you can also check whether your records have data down to the
last record.
SAP will process even empty records in the conversion step, which certainly leads to errors. Such
empty records need to be removed from your source file; then you need to read it again.

Step 11 - Convert Data

Similar to the Read Data step you can convert only a selection or all data.

Important here is to set the radio button to "Create File". This way SAP writes the CONV file with
the converted data. This corresponds to the setting made for the port in the activation of IDoc inbound
processing.
Further, it gives you the last chance to check a few individual records in step 12 to see whether the data was
converted correctly.

Not to forget that you get the numbers of converted records and transactions. Those numbers will
hopefully match your expected numbers.
I usually store this output as a text file for audit purposes.

Step 12 - Display converted Data

Like in step 10 you can call the display for all or just for a selection. If you have more than 5000
records then you should restrict the selection.
You first get the overview screen, similar to the read data, and can go into the detail as well by
clicking a line.
The converted data is displayed according to the target structure; only the segments that were
selected in step 4 are shown.

The green lines are technical lines for the IDoc, the yellow line is the header segment for the
transaction, and the blue lines are the individual records.
You can already see that there are 2 records with ...MAKT, which hold the material description in
German and English,
and another 2 records with ...MLTX for the sales text. (If we had sales text with 2 lines, then we would
see 4 records here.)

Go into the detail of at least one record per transaction to verify that each target field has the right
content.

Step 13 - Start Idoc Generation

Not much to worry, you have just 2 options: Execute or leave

SAP already proposes the file name of the CONV file. Click execute and watch the count in the status
line go up.
SAP tells you when it has finished. Then leave this step and continue with the next one.

Step 14 - Start Idoc Processing

You will see a bigger selection screen, but everything needed is defaulted and you can just execute it.
If you have many thousands of IDocs, then better execute it in the background (via menu Program >
Execute in Background).
Even though IDoc posting is much quicker than batch input sessions, I often have migrations with 10
and more hours of processing for one object.

Changing material text and sales text for about 40000 materials took only 36 minutes.

When processing has finished, SAP presents an overview like you probably know from MM17
mass maintenance.

You can sort it by status to get immediately an overview if you have failed IDOCs.

However, this analysis is better made from the next step.

Step 15 - Create Idoc Overview

If you ran the processing step in foreground, then the selection screen is already fully defaulted and
you can execute it like it is.
If it is not defaulted, then enter the "Created at" time and the "created on" date and enter the logical
message, here "MATMAS_MASS_BAPI" and execute.

The overview looks slightly different now.


You get a frame on the left from which you can select the IDocs based on their status.
As we have done everything correctly, only one status is shown.

By double-clicking an IDoc number you can display the IDoc details, eventually correct the
content of a failed IDoc, and then repost the IDoc in the last step of your LSMW. I plan to describe
this in more detail in another blog later (but first I have to make something wrong, otherwise I can't
get the screenshots - this will be tough).


A glance at SAP data migration methods.

July 4, 2012


What are the various methods available for SAP data migration? I studied a few prominent ongoing
SAP data migration projects and had a discussion with our data migration team. As per my
understanding, there are three popular methods for SAP data migration from legacy systems and/or
an old SAP R/3 to a new SAP ECC system.

SAP Best Practices: pre-built content based on SAP Data Services (ETL) that primarily
utilizes IDocs to load data into SAP.

LSMW: a utility by SAP that utilizes flat files to load data into SAP.

Custom Developed Programs: use SAP BDC programs and flat files.
Each method has its advantages and disadvantages. I will discuss what I know about these methods,
the advantages and disadvantages of one method vs. another, challenges faced by clients using any of
these methods, etc. In this blog, I will talk about SAP Best Practices. In subsequent posts, I will
discuss LSMW, custom developed programs, advantages, disadvantages, challenges etc.
SAP Best Practices Method
Let's talk about data migration from legacy (non-SAP) systems to an SAP system. This includes new SAP
customers as well as current customers who are bringing in new plants, new business units, etc., and
need to convert data to a SAP ECC system. SAP Information Lifecycle Management (ILM) is used
for system decommissioning or data retention and archival. It is beyond the scope of this discussion at
this time.
This method utilizes loading of data into SAP primarily by IDocs. SAP acquired Business Objects
tools such as the Business Objects Data Integrator ETL and Data Quality (FirstLogic) and bundled them
together under a new avatar: SAP Data Services. The core strength of Business Objects Data Services,
earlier known as Business Objects Data Integrator ETL or Acta ETL, has been its tight integration with
SAP. This ETL tool was primarily used for SAP data extraction since its inception in 1998 or so. I
have seen the evolution of the tool from Acta 1.1 to SAP Data Services XI 4.x. There is some other
Business Objects software used in migration too, such as Data Insight (a data profiling tool), Metadata
Manager (these two tools are now known as Information Steward) and some reports, but SAP Data
Services is where the bulk of the work takes place. For those who don't know: Business Objects
Americas acquired a company called Acta Technology in 2002 or so, and SAP acquired Business Objects
in 2007. Business Objects renamed the Acta ETL to Business Objects Data Integrator after the
Acta acquisition, and later SAP renamed it SAP Data Services.
Acta also offered SAP Rapid Marts. Rapid Marts are out-of-the-box, pre-packaged Acta ETL code and
target database schemas based on Oracle or SQL Server databases for the extraction of data from various
SAP modules such as SD, IM, FI, CO, GL, HR and so on. The value proposition of Rapid Marts has
been that they give SAP customers a jump start in terms of getting data out of SAP quickly.
Customers are generally able to leverage 65-70% of the out-of-the-box Rapid Mart content as-is.
The remaining content can easily be customized based on the customer's SAP configuration etc., and
this generally entails the addition or deletion of fields in Rapid Mart tables, the extraction of SAP custom
tables if any, etc. These Rapid Marts are now standard SAP data mart offerings from SAP based on SAP
Data Services.
SAP has developed similar out-of-the-box SAP Data Services ETL code for data migration to SAP based
on the standard SAP ECC master data structures. This is called Best Practice (BP) Content for Data
Migration. It is also known as SAP AIO BP, which is nothing but SAP Business All-in-One Best
Practices. It is confusing to see so many new SAP terms, but don't let that scare you. SAP is a pioneer in
coming up with new buzzwords; however, the core content remains more or less the same behind the
scenes.

The BP content for Data Migration can be found under Data Migration, Cross-Industry Packages
in the Best Practices section of the HELP portal. This content has everything you need to get started on
migrating non-SAP data to an SAP system. The content includes the following: guides to install SAP
data services and other components required for the migration, actual content to load that includes
jobs to load data into SAP via IDOCs, mapping tools to help you map the non-SAP data to the IDOC
structure, and some reports. It includes IDOC mapping and structures for objects like Material
Master, Vendor Master and Customer Master, Pricing, BOM, Cost element, Payables and Receivables
contents. There are detailed Word documents on each piece of content; for example, the document on
Material is a 39-page Word document covering the IDoc structures, what you need to know, and
how to map data to the structure.
SAP also provides a standard data migration methodology, framework and templates based on SAP Best
Practices and SAP Data Services. The methodology has the components Analyze, Extract, Cleanse, Validate,
Upload and Reconcile for bringing legacy data into an SAP ERP environment.
This method of data migration using SAP Best Practices and IDocs works very well in case no
customization is required for the data migration. What this means is that if a customer has a standard
vanilla SAP ECC implementation, this method works just GREAT. For example, an SAP Best Practices
pre-built job for the material master loads the data as per the standard ECC material master IDoc structure. In
case the customer needs more fields, or a custom table is to be loaded in the material master, it is easy to
modify or add to the SAP Best Practices ETL code; however, modifying the BP code alone will not suffice.
The corresponding SAP IDocs need to be modified or extended as well, which may or may not be
allowed by the customer's SAP Basis team. The customer will also need SAP ABAP/IDoc expertise on the
project to modify the IDoc structure. Many customers prefer not to modify standard IDocs.
Another scenario where SAP Best Practices will not work is if there is no one-to-one mapping between
the input and output data. In other words, if a master data element to be converted into SAP ECC
depends on more than one dimension of the input data, SAP Best Practices will not work. Let's take an
example: if sales org A in the legacy system is to be converted into sales org B in SAP ECC, SAP Best
Practices will work great. However, if there are three sales orgs A, B, C in the legacy systems and only
one sales org D is needed in SAP ECC, with the value dependent on three dimensions such as sales org,
plant and country code in the legacy source data, SAP Best Practices can't handle this conversion
scenario, at least as of today. In this case, a good amount of customization needs to be done in the SAP
Best Practices code, tables, scripts etc., which may not be worth the effort and may impact the
integrity of the SAP Best Practices content that depends on the modified content/code.
A similar approach is taken for data migration from one or many SAP systems or legacy systems to
an SAP ECC system. In this option, you may have multiple SAP systems on different releases, say one
on 4.6c and one on 4.7, and you want to consolidate them into a single ECC 6.0 system. You can use SAP
Data Services to extract data from the old SAP systems and non-SAP systems, and use the same
methodology, framework and SAP Best Practices to load the data into SAP ECC, similar to what we
discussed above.
