
Pivot Transformation in SAP BODS


Summary:
The Pivot transform converts column values of a table/file into row values. It takes a denormalized data set and produces a more normalized version by rotating selected input columns into rows. For example, a denormalized Orders data set that stores one quantity column per product holds all of a customer's purchases in a single row; pivoting those product columns produces one output row per customer/product combination, with the original column name carried in a header column and the quantity carried in a data field column. Because not every customer purchases every product, many of the pivoted values may be NULL.
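The effect described above can be sketched in plain Python. The column names (`customer`, `qty_*`) and the `header`/`data` output fields are illustrative only, not names taken from the transform:

```python
# One denormalized input row: one quantity column per product.
row = {"customer": "Acme", "qty_bolts": 10, "qty_nuts": 25, "qty_washers": None}

# The columns to rotate into rows.
pivot_columns = ["qty_bolts", "qty_nuts", "qty_washers"]

# Pivoting yields one output row per (customer, product column).
pivoted = [
    {"customer": row["customer"], "header": col, "data": row[col]}
    for col in pivot_columns
]
# Each output row carries the original column name ("header") and its
# value ("data"); NULL quantities survive as None.
```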

Prerequisite:
The following settings must be made at the Pivot transform level. The options available under the Pivot transform are:

1) Pivot sequence column: For each row created from a pivot column, Data Services increments and stores a sequence number in this column.

2) Non-pivot columns: The list of columns that pass through to the target unchanged.

3) Pivot set: A number identifying a group of pivot columns; each pivot set has its own data field column and header column. The input data must be sorted on this column.

4) Data field column: The name of the column that contains the pivoted data. This column holds the values of all the pivot columns.

5) Header column: The name of the column that contains the pivoted column names. For each output row, it lists the name of the column where the corresponding data originated.

6) Pivot columns: The set of columns to be rotated into rows. Their names are described in the header column, and the data they contain is described in the data field column.
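How these options interact can be approximated with a small Python sketch. The function and default column names below mirror the option names for readability; they are illustrative, not the transform's actual internals:

```python
def pivot(rows, non_pivot_columns, pivot_columns,
          header_column="PIVOT_HDR", data_field_column="PIVOT_DATA",
          sequence_column="PIVOT_SEQ"):
    """Rotate each pivot column of every input row into its own output row."""
    out = []
    for row in rows:
        for seq, col in enumerate(pivot_columns, start=1):
            rec = {c: row[c] for c in non_pivot_columns}  # pass through as-is
            rec[sequence_column] = seq         # pivot sequence column
            rec[header_column] = col           # name of the originating column
            rec[data_field_column] = row[col]  # the pivoted value
            out.append(rec)
    return out

rows = [{"customer": "Acme", "jan": 5, "feb": 7}]
result = pivot(rows, non_pivot_columns=["customer"], pivot_columns=["jan", "feb"])
# One input row with two pivot columns becomes two output rows.
```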
Example Scenario:
In this scenario, the source is parent/child data as shown in Figure 1. Using the Hierarchy Flattening and Pivot transforms, we generate node numbers and parent nodes for each record of the table/file, producing the target data shown in Figure 2.

The pictures below walk through the procedure for generating node numbers and parent nodes, and explain the use of each Query transform in the ETL flow.

 The following Figures 1 and 2 show sample source and target data for our scenario.

Parent   Child
Country  Reg1
Country  Reg2
Reg1     State1
Reg1     State2
Reg2     State3
Reg2     State4

Figure 1: Source Data in file or Table

Node_Number  Current_Leaf  Parent_Node
1            Country       0
2            Reg1          1
3            State1        2
4            State2        2
5            Reg2          1
6            State3        5
7            State4        5

Figure 2: Target Data in file or Table
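The numbering logic of this scenario (flatten the hierarchy, assign node numbers in depth-first order, then look up each node's parent number) can be approximated in Python. This is only a sketch of the result the BODS job produces, not the Data Services implementation itself:

```python
# Parent/child pairs from Figure 1.
edges = [("Country", "Reg1"), ("Country", "Reg2"),
         ("Reg1", "State1"), ("Reg1", "State2"),
         ("Reg2", "State3"), ("Reg2", "State4")]

children = {}
for parent, child in edges:
    children.setdefault(parent, []).append(child)
parents = {child: parent for parent, child in edges}

numbers, rows = {}, []

def assign(node):
    numbers[node] = len(numbers) + 1               # gen_row_num() analogue
    parent_no = numbers.get(parents.get(node), 0)  # nvl(parent, 0) analogue
    rows.append((numbers[node], node, parent_no))
    for c in children.get(node, []):
        assign(c)

assign("Country")  # "Country" is the root: it never appears as a child
# rows now matches Figure 2, e.g. (1, "Country", 0) ... (7, "State4", 5)
```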
 In the project area, create a project, job, work flow, and data flow as shown in Figure 3.
Figure 3: Job Hierarchy
 Figure 4 below shows the ETL flow of our job.
 Create a flat file using the "Imperio_Source_Pivot" file and drag it into the work area.
 Drag the Query transforms, the Pivot transform, the Hierarchy_Flattening transform, and a template table into the work area, then connect all objects as in Figure 4.

Figure 4 : ETL Flow


 Select Parent in the Parent column and Child in the Child column.

 The Hierarchy_Flattening transform creates output columns based on the source data (Current_Leaf, Leaf_Level, Level0, Level1, etc.).
Figure 5: Under Hierarchy Flattening transform
 Figure 6 shows the settings under the Pivot transform.

 Drag the Current_Leaf and Leaf_Level columns into Non-pivot columns, and drag the remaining columns into Pivot columns.
 The Pivot sequence column, Data field column, and Header column are given default names by the system; change the descriptions of these fields if needed.

Figure 6: Under Pivot transform


 Apply the decode function in the Parent_Node column.
Figure 7: Decode function for parent node column in first query
 Figures 8 and 9 show the second query.

 Apply the gen_row_num() function in the Node_Number column, and write a WHERE condition on the same column to avoid null values.

Figure 8: gen_row_num() function for Node number in second query

Figure 9: where condition for parent node column in second query


 Map the columns one-to-one from Schema In to Schema Out.
Figure 10: Under third Query transform
 Apply the nvl() function to avoid null values in the Parent_Node column (the function populates '0' in place of null values), and write the WHERE condition and outer joins shown in Figures 11, 12, and 13.

Figure 11: nvl() function for parent node column in fourth query

Figure 12: WHERE condition in fourth query

Figure 13: Outer join in fourth query
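The nvl() and decode() calls used in these queries behave roughly like the following Python equivalents. This is a sketch of their semantics only, not BODS code:

```python
def nvl(value, default):
    """Return the default when value is NULL (None), else the value itself."""
    return default if value is None else value

def decode(*args):
    """decode(cond1, res1, cond2, res2, ..., default):
    return the result paired with the first true condition, else the default."""
    *pairs, default = args
    for cond, result in zip(pairs[0::2], pairs[1::2]):
        if cond:
            return result
    return default

# As in the fourth query: populate 0 in place of a NULL parent node.
parent_node = nvl(None, 0)
```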

Figure 14: source and target data

Attachment:
 Use the file below as the source for working through the transformation.

Click Here to Download Source data.

 Import the .atl file below into the Data Services Designer to find the job for the above transformation.
Click Here to Download ATL File.
That's it.
