START->REMOTE DESKTOP CONNECTION.
Click OK.
Enter the password: Welcome1234.
Click Finish.
GO TO CONTENT->NEW PACKAGE
HERE THE PACKAGE IS CREATED
TO CREATE A SCHEMA
THE SCHEMA IS CREATED
TO CREATE THE ATTRIBUTE VIEW OF TYPE BUSINESS PARTNER, USE THE FOLLOWING TABLES:
SNWD_BP
SNWD_BP_PH
SNWD_CONTACT
SNWD_BP_EM
ALL THESE TABLES ARE IN ECC. WE CAN REPLICATE THESE TABLES TO THE HANA SERVER BY USING
DATA PROVISIONING TECHNIQUES SUCH AS BODS AND SLT.
Go to project->replication job
Validate
Execute
TO CREATE THE ATTRIBUTE VIEW:
JOIN CLIENT OF SNWD_BP TO CLIENT OF ALL REMAINING TABLES; THE JOIN IS A
REFERENTIAL JOIN AND THE CARDINALITY IS 1:1.
JOIN NODE_KEY OF SNWD_BP TO THE NODE_KEY COLUMN OF ALL REMAINING TABLES.
ADD THE OUTPUT FIELDS:
CLIENT
NODE_KEY
BP_ID (FROM THE SNWD_BP TABLE; BP_ID IS THE PRIMARY KEY)
FIRST_NAME
LAST_NAME
PHONE_NUMBER
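The joins above can be sketched in SQL. This is a hedged illustration only: with all fields requested, a referential join behaves like an inner join, and only the SNWD_CONTACT join is shown (the other tables follow the same pattern; the exact placement of the name and phone columns follows the field list above).

```sql
-- Sketch only: referential join with cardinality 1:1 rendered as an inner join.
SELECT bp.CLIENT,
       bp.NODE_KEY,
       bp.BP_ID,              -- primary key of SNWD_BP
       c.FIRST_NAME,
       c.LAST_NAME,
       c.PHONE_NUMBER
  FROM SNWD_BP      AS bp
  JOIN SNWD_CONTACT AS c
    ON c.CLIENT   = bp.CLIENT
   AND c.NODE_KEY = bp.NODE_KEY;
```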
GO TO SEMANTICS
HERE WE CAN DEFINE KEY ATTRIBUTES AND PERFORM DESCRIPTIVE OR LABEL MAPPING
RAWDATA
HERE IT IS NOT SHOWING ANY DATA: WE DO NOT HAVE DATA IN THE ECC TABLES, SO NO DATA IS
DISPLAYED HERE.
TASK 2:CREATE AN ATTRIBUTE VIEW FOR PRODUCTS.
CLIENT
NODE_KEY
PRODUCT_ID
CATEGORY
TYPE_CODE
SUPPLIER_GUID
TO DO THIS, ADD THE FOLLOWING TABLES AT THE DATA FOUNDATION OF THE ANALYTICAL VIEW.
SNWD_PO
SNWD_PO_I
GO TO THE PACKAGE AND CHOOSE NEW ANALYTICAL VIEW
CLICK ON FINISH
AT THE LOGICAL JOIN, ADD THE CREATED ATTRIBUTE VIEWS.
HERE WE HAVE CLIENT AND NODE_KEY COLUMNS FROM BOTH VIEWS; TO AVOID THE
NAMING CONFLICT, WE CREATE AN ALIAS (RENAME) FOR THE PRODUCT VIEW.
AT THE DATA FOUNDATION, JOIN THE TWO TABLES.
Join the field CLIENT from SNWD_PO to the CLIENT field in SNWD_PO_I.
Join the field NODE_KEY (SNWD_PO) with the field PARENT_KEY (SNWD_PO_I).
Select a referential join and a cardinality of 1:n for both joins in the properties of the join.
CLIENT
NODE_KEY
PARTNER_GUID
CLIENT - Attribute
PO_ITEM_POS - Attribute
PRODUCT_GUID - Attribute
GROSS_AMOUNT - Measure
NET_AMOUNT - Measure
In the Logical View screen in the output, under the Column folder, rename CLIENT
(SNWD_PO_I.CLIENT) to Client_PD, for Product Client.
In the logical view, join the data foundation to the Product view with the following fields. Use
referential join and cardinality of n:1
CLIENT_PD to Client
PRODUCT_GUID to NODE_KEY ("Product" view)
Then, join the data foundation to the Business Partner view with the following fields. Use
referential join and cardinality of n:1
CLIENT_BP to Client
PARTNER_GUID to NODE_KEY ("Partner" view)
GO TO SEMANTICS
In the Semantics screen, define the column PO_ITEM_POS as an attribute, and the columns
GROSS_AMOUNT and NET_AMOUNT as measures.
SAVE
VALIDATE
AND ACTIVATE
WE CANNOT EDIT OR MODIFY THE DERIVED ATTRIBUTE VIEW; IT IS JUST A REFERENCE TO THE
ATTRIBUTE VIEW. If your model requires that you have 2 identical Attribute Views in the same
Analytical View, you will have to create a Derived Attribute View with Aliases.
VALIDATE AND ACTIVATE THE ATTRIBUTE VIEW.
NOW WE CAN ADD THE DERIVED ATTRIBUTE VIEW TO THE ANALYTICAL VIEW.
In the Logical View screen (of the previous Analytic View), join it to the Data Foundation by
mapping the following fields:
Client_PD to Client
CLICK ON ANALYSIS
TASK 5: Create a new Analytic View for Sales Orders and reuse the shared Product
Attribute view.
SNWD_SO_I
Join with a referential join (cardinality n:1) the Data Foundation (on the left) to the
Product Attribute view with the following fields:
DATATYPE:VARCHAR
LENGTH:20
VALIDATE
ACTIVATE.
Now open the Attribute View again. Hide the fields PHONE_NUMBER and PHONE_EXTENSION,
by setting the Hidden property to True for these 2 fields in the Data Foundation View screen.
GO TO SEMANTICS->HIERARCHIES
SAVE AND VALIDATE AND ACTIVATE.
Open the attribute view
And go to category
NEW->CALCULATED COLUMNS
SAVE
VALIDATE AND ACTIVATE THE VIEW
DATA PREVIEW
EXERCISE 6: USING VARIABLES
GO TO SEMANTICS->VARIABLES->
VALIDATE->ACTIVATE->DATA PREVIEW
HERE IT IS SHOWING THE DATA OF ONLY BUKRS 1000.
HERE WE ARE RESTRICTING THE BUKRS DETAILS TO 1000; IF WE WANT TO REMOVE THE
RESTRICTION, THEN DELETE THE VARIABLE CUSTOMER_VARIABLE.
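The effect of the variable can be sketched as a WHERE clause on the generated column view. This is illustrative only; the package and view name "mypackage/MY_ANALYTIC_VIEW" is hypothetical.

```sql
-- Sketch: the variable CUSTOMER_VARIABLE = '1000' acts like this filter.
SELECT *
  FROM "_SYS_BIC"."mypackage/MY_ANALYTIC_VIEW"   -- hypothetical view name
 WHERE BUKRS = '1000';
-- Deleting the variable removes the restriction.
```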
SELECT NEW
GIVE A NAME TO THE INPUT PARAMETER
CLICK OK
Add a Calculated Column to the new Aggregation node. Name the column
OVER_3MEUR_FLAG and set the Data Type to VARCHAR, Length: 1.
In the Expression Editor of the new column, enter the following: ‘N’
(include the single quotes) then click Add
Ok
Add a Union node in between your Aggregation nodes and the Output node. Connect the two
Aggregation nodes to the Union node.
In the details of the Union node, click on the Auto Map by Name button.
In the Output node, add all columns from the Union node as Attributes.
Click on the background of the Graphical Calculation view in order to edit the properties of the
Calculation View.
Make sure that the Calculation View has “Multidimensional Reporting” set to “disabled”.
Save the View.
Activate and preview. Click on the Raw Data tab.
The Aggregation node for OVER_3MEUR_FLAG = Y contains only Aggregated columns
You want to copy the SQL script and paste it after var_out =
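In a scripted calculation view the pasted SELECT becomes the right-hand side of var_out, roughly like this (a sketch only; the columns and grouping are placeholders drawn from the EPM tables used above):

```sql
-- Sketch of a scripted calculation view body:
-- the pasted SQL is assigned to the output variable var_out.
var_out = SELECT CLIENT,
                 PRODUCT_GUID,
                 SUM(GROSS_AMOUNT) AS GROSS_AMOUNT   -- placeholder columns
            FROM SNWD_PO_I
           GROUP BY CLIENT, PRODUCT_GUID;
```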
VALIDATE AND ACTIVATE
DATA PREVIEW
EXERCISE:
CREATING PROCEDURE:
TO CREATE A PROCEDURE
CREATION OF SCALAR INPUT PARAMETER
CREATION OF OUTPUT PARAMETER:
VALIDATE AND ACTIVATE THE PROCEDURE
GO TO CATALOG
USE CALL()
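A minimal sketch of such a procedure, with a scalar input parameter and a tabular output parameter, invoked via CALL() from the catalog's SQL console. All names here (procedure, parameters, the '100' client value) are hypothetical illustrations, not the exercise's actual objects.

```sql
-- Hypothetical procedure: scalar input parameter, tabular output parameter.
CREATE PROCEDURE GET_PO_ITEMS (
    IN  IV_CLIENT NVARCHAR(3),                        -- scalar input parameter
    OUT ET_ITEMS  TABLE (CLIENT       NVARCHAR(3),
                         NODE_KEY     VARBINARY(16),
                         GROSS_AMOUNT DECIMAL(15,2))  -- output parameter
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    ET_ITEMS = SELECT CLIENT, NODE_KEY, GROSS_AMOUNT
                 FROM SNWD_PO_I
                WHERE CLIENT = :IV_CLIENT;
END;

-- From the catalog's SQL console, invoke it with CALL():
CALL GET_PO_ITEMS('100', ?);
```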
GO TO PACKAGE->NEW->ANALYTICAL VIEW
CLICK ON OK
ADD THE GROSS_AMOUNT COLUMN; ADD TWO INSTANCES OF THE COLUMN
CLICK OK
Task:
Close all open views prior to modifying preferences in Validation Rules.
GO TO PACKAGE->NEW ATTRIBUTE VIEW
Go to package->new->analytic privileges.
Click finish
CLICK ON ADD
Click on 1000
Click ok
Go to security->users->new users->
When we create a user the default role public will be assigned to user.
Give a role name; here we need to assign our analytic privilege to the role and the role to the user.
Here we can give privileges like read the package, activate the package, and maintain the package.
Here we have object privileges: when we create any view, a column view is generated in the _SYS_BIC
schema.
We can grant access to the _SYS_BIC schema by using object privileges.
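Granting such access can be sketched in SQL; the role, privilege, and user names below are hypothetical placeholders.

```sql
-- Hypothetical role that can read the generated column views in _SYS_BIC.
CREATE ROLE MODELING_READER;
GRANT SELECT ON SCHEMA "_SYS_BIC" TO MODELING_READER;
-- Assign the analytic privilege to the role, then the role to the user:
GRANT STRUCTURED PRIVILEGE "AP_COMPANY_1000" TO MODELING_READER;  -- hypothetical privilege
GRANT MODELING_READER TO MYUSER;                                  -- hypothetical user
```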
CLICK OK
Click on ok
Click on deploy(f8)
The creation of roles is done by the HANA admin; here we do not have permission to create roles, so it is
throwing an error like the one above.
If we want to create roles, users, and analytic privileges, we need admin privileges to do all of these.
Click on deploy(f8)
To check this
Go to systems->add system->
Host: hanaserver
Instance number: 07
Description: saphana
Username: myuser
Password: myusersec48
New password: WELCOME1234
CLICK OK
HERE WE DO NOT HAVE THE PRIVILEGES; IF WE HAVE THE PRIVILEGES, WE ONLY GET DATA OF
COMPANY CODE 1000.
We can create analytic privileges in the content area, i.e., on views, not on tables.
Enter zaddress3 in Tables for selection and click on the black triangle to filter the table list.
GO TO ECC
LTRC
CLICK ON EXECUTE
CLICK ON EXECUTE
Exercise :Data Acquisition using SAP Data Services
Log in to SAP Data Services Designer.
Start SAP Data Services Designer by using the following path:
Start Menu → Programs → SAP Business Objects Data Services 4.1 Patch→ Data Services
Designer.
Create new project here project used to group the related objects.
CREATE DATA STORE FOR ECC
At the bottom left hand side, you will see your local object library. From the tabs at the bottom,
select the blue Datastore icon.
From the context menu select New to create a Data store
Enter the following details to create an ECC Data store and click OK
Click next->finish
Import the metadata from the ECC extractor into SAP Data Services Repository
Right click on the new ECC Data store and from the context menu choose Import by Name.
Then select Extractors from the type and import the extractor called SFLIGHT. In the pop-up
add the Name of consumer and Name of Project as shown below
CREATE A HANADATASTORE:
Right click and from the context menu select New and create a Project with
the name, SREEHANA
In the Project Area located on the left side, right-click on your new project and click on New
Batch Job and create a job called HANA_JOB.
On the right hand side, you will now see a toolbar. Click on the Dataflow icon and click again in
the workspace to create a Dataflow called HANA_DF.
Double click on your new HANA_DF, and add an ABAP Dataflow by clicking on the ABAP
Dataflow icon from the right toolbar.
Double click on the new ABAP Dataflow and drag-drop the SFLIGHT Extractor from the ECC
Data store.
Add the Query transform from the right toolbar which has the Query icon (orange and yellow
triangle).
Add the Data Transport from the right toolbar which has the icon.
Connect all of them together.
Double-click on the query transform drag all fields from the SCHEMA IN to the SCHEMA OUT for
the mappings.
Double-click on the Data Transport and add the following name in the filename section,
HANA.dat
On the top of the Data Services Designer click on the back button which will take you back
outside of the ABAP dataflow and show the Data Flow. Click on the green arrow to the left icon.
Now add the Query Transform from the right toolbar and drag-drop the template tables folder
under the HANA Data store. Call the template table, SFLIGHT.
Double-click on the query transform drag all fields from the SCHEMA IN to the SCHEMA OUT for
the mappings.
You have now set up the Batch Job to load ECC data into SAP HANA using an extractor.
Now right click on the Job and Execute. It will prompt you to save and then accept all the
defaults on Execution Properties window.
Task 1:
Create a DXC Data Source and load ECC data directly into HANA
Check the parameters in table RSADMIN
For this go to ecc
Se11
Rsadmin
PSA_TO_HDB → “Global” (All activated Data Sources will create an SAP HANA Optimized DSO)
Use transaction SE16 and open table RSADMIN. Select all entries which start with “PSA*” and
check them.
Create your own Data Source based on credentials of existing Data Source
Go to ecc
Se11
Then go to RSO2.
Provide the table name under transaction data and click on Create.
In the popup to create a new Object entry choose Local ($TMP) and Save.
In the Data Source: Customer version Edit screen save again without any changes to the
defaults.
Expand node Generic Data Source and click on Maintain Generic Data Sources
click on save
Then go to RSDS.
Replicate your Data Source based on application component
Right-click on node DXC and choose Replicate Metadata. All Data Sources for application
component DXC will be replicated.
For new Data Source choose type as Data Source (RSDS) and confirm.
Then right-click on the data source and click on Create Info Package.
Click on save
Go to Schedule and click on Start.
Go to the data source, right-click on it, and click on Manage.
Then go to HANA Studio.
Go to catalog
Go to the DXC schema and here you can find your tables.
Check the new SAP HANA Optimized DSO and identify the tables in the SAP HANA Studio.
Open the table folder and identify the tables of your new SAP HANA Optimised DSO.