Project Manager:
IT AS Manager:
TABLE OF CONTENTS
1 Overview
2 Scope
3 Solution Design
3.1 Process Flow Diagram
3.2 Process Flow Description
3.3 Traceability
4 Design Considerations
4.1 Assumptions
4.2 Dependencies
4.3 Constraints
4.4 Risks
5 Data Mapping
6 Data Sharing Agreements
6.1 Input File Layouts
6.2 Output File Layouts
6.3 Intra File Layouts
7 Logical View of the Design
7.1 Design Components
7.1.1 DataStage Sequences
7.1.2 DataStage Jobs
7.1.3 Shell Scripts
7.1.4 Metadata Objects
7.1.5 Database Objects
7.2 Error Handling Design Flow
7.2.1 Exception Conditions
7.2.2 Error Codes and Messages
7.2.3 Restartability
8 Data and Information
8.1 Logical Data Model Reference
9 Environment Details
9.1.1 File System
9.1.2 Enterprise Staging Area Files
10 Logging/Auditing
11 Performance Considerations
12 Reporting
13 Security
14 Deployment Details
14.1.1 Clear Case Reference
14.1.2 Deployment Inventory
14.1.3 Package List
14.1.4 Special Considerations
15 Scheduling
16 SLA
17 Special Considerations
18 Other Project Specific Requirements
19 Appendix
19.1 Architectural Design Decisions Reference
19.2 Definitions, Acronyms, and Abbreviations
19.3 References
20 Approvals
1 Overview
EODS will acquire, update, and store Personal Insurance Legal Entity data for policies received from PI Client. PI Client will provide the data in a file.
This document describes the process to read the files sent by PI Client, convert them to canonical format, store them in the Enterprise Staging Area, and finally load them into the EODS database. This detail design covers loading Legal Entity information from PI Client only. Payor Role data (Billing Account) will be filtered out and written to a file which will be used by the Master Billing Account Setup process.
The design presented here holds for the requirements attached in the appendix. Any change in the requirements might lead to changes in this document.
2 Scope
3 Solution Design
The process flow diagram and a description of the legal entity data processing are presented in the subsections below.
3.1 Process Flow Diagram
[Figure: process flow diagram – PI Scrubbed Data, Customer Data, EODS Database]
3.2 Process Flow Description
1. Customer data files from PI Client will be available in the source directory every day by <TBD>. The Master Legal Entity Data Load process reads the Customer data files from the source directory, copies them to the Work folder, and moves them to the Archive folder.
2. Read the customer data files from the Work folder and convert each of them to canonical format. Store these files in the Enterprise Staging Area.
3. Read the CDM-converted files from the Enterprise Staging Area.
4. Check whether a record has LEBA_TYP = 'PO'. If so, save those records into a temporary file which will be used by Master Billing Account processing. Otherwise, pass the records on to be processed in the following steps.
5. Join the filtered records with the global reference dataset generated during Master Policy data processing, to get the database IDs generated while creating policies in EODS.
6. Split the file into different streams based on the transaction type. The streams are:
New and renewal policies – processed together
Endorsements – legal entity changes
7. For new and renewal policies, legal entities are added to the existing legal entities in the EODS database. The Business Action entry created during policy creation is used here.
8. For all endorsements, update the EODS database with the changes. Add an entry to the BUSINESS_ACTION table to track the endorsements applied to the EODS database. If there are multiple endorsements for the same policy on the same date, use a different BA_SEQ_NBR for each record.
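The split in step 6 can be sketched in illustrative Python (the actual implementation is a DataStage job; the UBA_TYP/UBA_RSN_CD codes follow those used in Job #3 later in this design):

```python
def split_by_transaction_type(records):
    """Route each record to the new/renewal stream or the endorsement stream.

    Illustrative sketch only -- the real split is done by a DataStage
    filter/switch stage. Field codes follow Job #3 in this design.
    """
    new_renewal, endorsements = [], []
    for rec in records:
        if rec["UBA_TYP"] == "ISS" and rec["UBA_RSN_CD"] in ("NEW", "RENEWAL"):
            new_renewal.append(rec)      # new and renewal policies, processed together
        elif rec["UBA_TYP"] == "END":
            endorsements.append(rec)     # endorsements: legal entity changes
    return new_renewal, endorsements
```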
3.3 Traceability
Traceability.xls
4 Design Considerations
4.1 Assumptions
# | Assumption | Comments
1 | Policy information would be present in EODS before this process starts. | We will be running Master Policy data processing preceding Legal Entity data processing.
2 | PI Client will send all roles on a Policy without dropping any roles for New and Renewal transactions. | All roles on a Policy would be sent to EODS for New and Renewal transactions.
3 | EODS will NOT perform any data validation. | EODS will redirect any data issues that are not associated with transformations to the Billing and Legacy systems to coordinate a fix.
4 | EODS receives a file from PI Client every day to load Customer data. | EODS will check for the file at a specified time <TBD>.
4.2 Dependencies
# | Dependency | Comments
1 | Master Legal Entity data inserts into the EODS database would be performed after the corresponding Master Policy data is available in EODS. | Master Policy data processing would insert data into policy tables for new policies, which would be used in this process.
4.3 Constraints
# Constraint or Issues Impact
N/A
4.4 Risks
# | Risk | Mitigation Strategy
1 | Data discrepancies in PI Client will exist in EODS. | All data inconsistencies should be fixed at the PI Client end.
5 Data Mapping
EODS_Master_PI_PROCESSING_MAPPING_Src_to_MDR_V1.0.xls
EODS_Master_PROCESSING_MAPPING_MDR_to_EODS_V1.0.xls
Input_file_format_PI.xls
MDR_Fmt_PI.xls
Sequence_LE.xls
Job # 1 – Xfm_EODS_MASTER_ADT_LOG_00
Purpose:
This job is to log an audit message into EODS processing Audit table.
EODS process will insert an entry into the Audit_trail_log table for auditing and validation purpose.
This is a common job that is used to set the value of CRTE_TS, LAST_UPDT_TS and PROCESS_STATUS fields
in the table.
Job Parameters:
Step Description
1 Insert a log record into the Audit_trail_log table indicating that "EODS – MASTER file" processing has started. Populate the Audit_trail_log as follows:
CRTE_TS = Current System Timestamp
I_rec_actv = 'Y'
SEQ_FLOW_NM = 'Master_Legal_Entity_Data_Load'
PROCESS_STATUS = 'S' (Started)
LAST_UPDT_TS = Current System Timestamp
For more details about the job refer to the common flow section.
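As a sketch of the audit entry described above (illustrative Python; the real job populates these columns through DataStage stages, and the function name is hypothetical):

```python
from datetime import datetime

def build_audit_record(now=None):
    """Assemble the Audit_trail_log row written when processing starts.

    Column names are taken from the step table above; the function is a
    hypothetical sketch, not the DataStage job itself.
    """
    ts = now or datetime.now()
    return {
        "CRTE_TS": ts,                                   # current system timestamp
        "I_rec_actv": "Y",                               # record is active
        "SEQ_FLOW_NM": "Master_Legal_Entity_Data_Load",  # owning sequence name
        "PROCESS_STATUS": "S",                           # 'S' = Started
        "LAST_UPDT_TS": ts,
    }
```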
Job # 2 - Xfm_PI_MDR_Transform_00
Purpose:
This job transforms the input PI customer data into the Canonical Data Model format by applying the transformation rules specified in the mapping document.
SNO | Parameter Name | Type | Description | Value
1 | G_S_Wrk_Dir | String | Work Directory | /Staging/eods/work/
2 | G_S_Cdm_Master_Dir | String | Master Staging Area | /esa/eods/master/canonical
Step Description
1 Read the flat file received from PI client.
Job # 3 - Xfm_MDR_Merge_00
Purpose:
This job merges the Legal Entity canonical data model format with the dataset from Master Policy data processing, which contains the database sequences such as POL_SYS_ID, BA_ID, and AGMT_SYS_ID.
Job Parameters:
Step Description
1 Read all the records from DS_PI_MDR.ds.
2 Join the records from step 1 with the dataset "DS_AgreementGlobalRef.ds", generated during Master Policy data processing, using POL_ID.
3 Filter all records which have LEBA_TYP = 'PO' and save them in a temporary dataset (DS_MasterBillingAccountRec.ds), to be used by Master Billing Account data processing. Records which do not have LEBA_TYP = 'PO' are used in the next step. Create a temporary dataset for all records having LEAR_TYP roles.
4 Save all matching records which have UBA_TYP = 'ISS' and UBA_RSN_CD = 'NEW' or 'RENEWAL' into DS_CustomerNew.ds. Save all matching records which have UBA_TYP = 'END' and UBA_RSN_CD = 'CHGNM', 'CHGAD', or 'CHGBN' into DS_CustomerEndorsement.ds.
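The join and payor split performed by this job can be sketched as follows (illustrative Python; the real job uses DataStage join and filter stages, and DS_AgreementGlobalRef.ds supplies the database IDs):

```python
def merge_and_split(mdr_records, global_ref):
    """Join CDM records with the agreement global-reference dataset on
    POL_ID, then separate Payor Role records (LEBA_TYP = 'PO') from the
    rest. Sketch only; field names follow the step table above."""
    ref = {r["POL_ID"]: r for r in global_ref}
    joined = [{**rec, **ref[rec["POL_ID"]]}
              for rec in mdr_records if rec["POL_ID"] in ref]
    payor = [r for r in joined if r.get("LEBA_TYP") == "PO"]    # -> Master Billing Account
    others = [r for r in joined if r.get("LEBA_TYP") != "PO"]   # -> next steps
    return payor, others
```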
Job Parameters:
Step Description
1 Read the records in the dataset DS_CustomerNew.ds.
2 Filter all the records which have LEAD_TYP = 'PRIR' (LEAD_TYP is derived in the CDM conversion process).
3 For all the filtered records from step 2, create an entry in the ADDRESS table. Create a system-generated sequence number and assign it to ADDR_SYS_ID. Write all the contents into a temporary dataset "DS_PI_InsertAddress" according to the data mapping document; this dataset will be used to insert into the ADDRESS table. To populate CANP_CD, perform the following transformation: if CTRY_CD = 'CAN' then populate ST_CD, else NULL.
4 For all the filtered records from step 2, create an entry in the PHYSICAL_LOCATION_SAR_ASSN table. Create a system-generated sequence number and assign it to PLOC_SYS_ID. Use the ADDR_SYS_ID from step 3, look up the input data using SAR_TYP as the lookup key to get the SAR_SYS_ID, and save all the contents into a temporary dataset "DS_PI_InsertPLOC" according to the data mapping document. This dataset will be used to insert data into the PHYSICAL_LOCATION_SAR_ASSN table.
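The CANP_CD rule in step 3 amounts to the following (illustrative Python sketch of the transformer logic):

```python
def derive_canp_cd(rec):
    """Populate CANP_CD: take the province code from ST_CD only for
    Canadian addresses (CTRY_CD = 'CAN'), otherwise NULL. Sketch of the
    transformation rule in step 3."""
    return rec["ST_CD"] if rec.get("CTRY_CD") == "CAN" else None
```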
Job #5 – Put_ADDRESS_EODS_LOAD_RA_00
Purpose:
The purpose of this job is to read the dataset “DS_PI_InsertAddress” and populate the ADDRESS table. Bulk
Load will be used to load the data into the database.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_PI_InsertAddress file.
2 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_PI_InsertSAR file.
2 Use DB2 stage and load all the contents from step 1 into SUBJECT_AT_RISK table.
Job #7 – Put_PHY_LOC_SAR_ASSN_EODS_LOAD_RA_00
Purpose:
The purpose of this job is to read the dataset “DS_PI_InsertPLOC” and populate the
PHYSICAL_LOCATION_SAR_ASSN table. Bulk Load will be used to load the data into the database.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_PI_InsertPLOC file.
2 Use DB2 stage and load all the contents from step 1 into PHYSICAL_LOCATION_SAR_ASSN table.
Purpose:
The purpose of this job is to retrieve data related to columns LGLE_ID, LGLE_SYS_ID and LGLE_EFF_DT from
LEGAL_ENTITY table.
Job Parameters:
Step Description
Execute the following query in DB2
SELECT LGLE_SYS_ID,
LGLE_ID,
LGLE_EFF_DT
FROM LEGAL_ENTITY
1 WHERE LGLE_EXP_DT >= #Current_Processing_Date#
2 Use Dataset stage and load all the contents from step 1 into dataset DS_LGLEDB.ds
Purpose:
The purpose of this job is to create a new system-generated sequence number in EODS for each LGLE_ID from the input dataset that is not present in EODS; if the LGLE_ID is already present in EODS, the LGLE_SYS_ID is taken from the dataset. Additionally, this job creates a temporary dataset containing the legal entities which are not present in EODS; this dataset will be used to insert into the LEGAL_ENTITY table.
Job Parameters:
Step Description
1 Read all the records in the dataset DS_CustomerNew.ds.
2 Filter all the records which have LEAR_TYP in ('NIN', 'MT', 'SMT', 'LEN', 'OPER', 'CI') AND LEAD_TYP <> 'PRIR'.
3 Join the records from step 2 and the dataset DS_LGLEDB.ds (target dataset of job Xfm_EODS_Legal_Entities_PI_AR_01) using LGLE_ID as the join key. Data will be partitioned and sorted on LGLE_ID in the DataStage environment.
4 All the non-matching records should be inserted into the LEGAL_ENTITY table. Create a system-generated sequence number, assign it to LGLE_SYS_ID, and save the data into a temporary dataset "DS_InsertLGLE.ds".
5 For all the matching records, get the LGLE_SYS_ID from step 3.
6 Merge these details into a temporary dataset "DS_PI_Customer_LGLE.ds". This dataset will be used as a reference to get LGLE_SYS_ID.
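Steps 3 through 6 can be sketched as follows (illustrative Python; the real job partitions and sorts on LGLE_ID in DataStage, and sequence numbers come from the database's generator, not a Python counter):

```python
from itertools import count

def resolve_lgle_sys_ids(input_recs, eods_recs, seq=None):
    """Join input legal entities to EODS on LGLE_ID: matching records take
    the existing LGLE_SYS_ID, non-matching records get a new surrogate
    LGLE_SYS_ID and go to the insert dataset. Hypothetical sketch."""
    seq = seq or count(1)
    existing = {r["LGLE_ID"]: r["LGLE_SYS_ID"] for r in eods_recs}
    matched, to_insert = [], []
    for rec in input_recs:
        if rec["LGLE_ID"] in existing:
            matched.append({**rec, "LGLE_SYS_ID": existing[rec["LGLE_ID"]]})
        else:
            to_insert.append({**rec, "LGLE_SYS_ID": next(seq)})  # -> DS_InsertLGLE.ds
    return matched, to_insert
```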
Job #9 – Xfm_EODS_LE_SA_PI_AR_Transform_00
Purpose:
This job would be used to transform the input PI data and create temporary datasets which would be used to load the EODS database.
Job Parameters:
Step Description
1 Read all the records from the dataset "DS_PI_Customer_LGLE.ds".
2 For all the records from step 1, create an entry in the LEGAL_ENTITY_ROLE table. Create a system-generated sequence number, assign it to LERL_SYS_ID, and write the input data into a temporary dataset "DS_LGRLInsert.ds" according to the data mapping document.
If the input LEAR_TYP is in ('MT', 'SMT', 'TMT') and LOAN_ID exists, create an entry in the CLIENT_BANK_ASSN table. Create a system-generated sequence number, assign it to CB_SYS_ID, and write the input data into a temporary dataset "DS_CBInsert.ds" according to the data mapping document.
If the input LEAD_TYP = 'INHA' or 'BSM', create an entry in the ADDRESS table. Create a system-generated sequence number, assign it to ADDR_SYS_ID, and write the input data into a temporary dataset "DS_PI_AddrInsert.ds" according to the data mapping document.
For all the records inserted into the ADDRESS table, an entry should be made in the LEGAL_ENTITY_ADDRESS_ASSN table. Create a system-generated sequence number, assign it to LEAD_SYS_ID, and write the input data into a temporary dataset "DS_LEADInsert.ds" according to the data mapping document.
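The conditional routing in step 2 can be summarized as follows (illustrative sketch; the real job writes one temporary dataset per target table, and the function name is hypothetical):

```python
def route_customer_record(rec):
    """Return the list of tables a record feeds, per step 2 above:
    LEGAL_ENTITY_ROLE always; CLIENT_BANK_ASSN for 'MT'/'SMT'/'TMT'
    roles with a LOAN_ID; ADDRESS plus LEGAL_ENTITY_ADDRESS_ASSN for
    LEAD_TYP 'INHA' or 'BSM'. Hypothetical sketch."""
    targets = ["LEGAL_ENTITY_ROLE"]
    if rec.get("LEAR_TYP") in ("MT", "SMT", "TMT") and rec.get("LOAN_ID"):
        targets.append("CLIENT_BANK_ASSN")
    if rec.get("LEAD_TYP") in ("INHA", "BSM"):
        targets += ["ADDRESS", "LEGAL_ENTITY_ADDRESS_ASSN"]
    return targets
```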
The following jobs would be used for loading all the Legal entity tables for all the agreement role data.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_PI_Customer_LGLE.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_LGRLInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ROLE table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_BENInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ENTITY table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_IPNInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into INDIVIDUAL_PERSON_NAME table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_LEARInsert.ds file.
Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_AGREEMENT_ROLE
2 table.
Job Parameters:
5 | G_S_Wrk_DataSet_Dir | String | Work Directory | /Staging/eods/dst/
Step Description
1 Use dataset stage to read the DS_CBInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into CLIENT_BANK_ASSN table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_PI_AddrInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_LEADInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ADDRESS_ASSN table.
Step Description
1 Read the input dataset "DS_CustomerEndorsement.ds" and filter all the records which have UBA_TYP = 'END', UBA_RSN_CD = 'CHGADDR', 'DELGARG', 'CHGGARG', or 'ADDGARG', and LEAD_TYP = 'PRIR'.
2 Execute the following query in the database:
SELECT BA_ID,
MAX(BA_SEQ_NBR),
BA_EFF_DT
FROM BUSINESS_ACTION
GROUP BY BA_ID, BA_EFF_DT
3 Look up all the records from step 1 against the data in dataset DS_BusinessAction_Agg.ds using BA_ID, BA_EFF_DT as the join key and get the BA_SEQ_NBR.
4 For all the records from step 3:
Insert data into the BUSINESS_ACTION table, incrementing the BA_SEQ_NBR. Please refer to the data mapping sheet for detailed mapping. Save all the contents in a temporary dataset "DS_RA_BAInsert.ds".
Insert data into the ADDRESS table. Create a system-generated number and assign the value to ADDR_SYS_ID. Save the contents from the input file to a temporary dataset "DS_RA_ADDRInsert.ds" according to the data mapping document. Use BA_ID, BA_EFF_DT, and BA_SEQ_NBR from the previous sub-step.
For each unique occurrence of SAR_TYP, insert into the SUBJECT_AT_RISK table. Create a system-generated sequence number and assign it to SAR_SYS_ID. Save the contents from the input file to a temporary dataset "DS_RA_SARInsert.ds" according to the data mapping document. Use BA_ID, BA_EFF_DT, and BA_SEQ_NBR from the previous sub-step.
Insert data into the PRIMARY_SAR_LOCATION_ASSN table. Create a system-generated number and assign the value to PLOC_SYS_ID. Save the contents from the input file to a temporary dataset "DS_RA_PLOSInsert.ds" according to the data mapping document. Use the Business Action information and Subject At Risk information from the previous sub-steps.
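The BA_SEQ_NBR increment in step 4 can be sketched as follows (illustrative Python; ba_agg mirrors the content of DS_BusinessAction_Agg.ds):

```python
def next_ba_seq_nbr(rec, ba_agg):
    """Return the BA_SEQ_NBR for a new endorsement row: the current
    MAX(BA_SEQ_NBR) for (BA_ID, BA_EFF_DT) plus one, or 1 when no prior
    business action exists for the key. Hypothetical sketch."""
    key = (rec["BA_ID"], rec["BA_EFF_DT"])
    return ba_agg.get(key, 0) + 1
```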
Job #19 – Put_EODS_PI_END_RA_update (RA = Risk Address): this job performs the Physical Location SAR Association update.
Put_EODS_PI_END_RA_update_01 (New Job): this job performs the Address update.
Purpose:
This job would expire the old records in tables PRIMARY_SAR_LOCATION_ASSN and ADDRESS for all the
records which have changes in risk address.
Step Description
1 Read the input dataset "DS_CustomerEndorsement_NewBa.ds" and filter all the records which have UBA_TYP = 'END', UBA_RSN_CD = 'CHGADDR', 'DELGARG', 'CHGGARG', or 'ADDGARG', and LEAD_TYP = 'PRIR'.
2 For all the filtered records,
Update PRIMARY_SAR_LOCATION_ASSN table using AGMT_SYS_ID and AGMT_EFF_DT
from the input dataset.
Following columns would be updated:
PLOC_EXP_DT
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Please refer to the data mapping sheet for detailed mapping.
Update ADDRESS table using AGMT_SYS_ID and AGMT_EFF_DT from the
PHYSICAL_LOCATION_SAR_ASSN Table
Following columns would be updated:
ADDR_EXP_DT
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Step Description
1 Read the input dataset "DS_RA_ADDRInsert.ds".
2 For all the records, insert into the ADDRESS table.
Step Description
1 Read the input dataset "DS_RA_SARInsert.ds".
2 For all the records, insert into the SUBJECT_AT_RISK table.
Step Description
1 Read the input dataset "DS_RA_PLOSInsert.ds".
2 For all the records, insert into the PHYSICAL_LOCATION_SAR_ASSN table.
Step Description
1 Use dataset stage to read the DS_CustomerEndorsement.ds dataset.
2 If UBA_TYP = 'END' and UBA_RSN_CD IN ('ADDLEN', 'ADDMTGI', 'ADDOPR'), the legal entity should be added; write this data into a temporary dataset DS_Cust_End_ADD_LE.ds.
If UBA_TYP = 'END' and UBA_RSN_CD IN ('CHGNM', 'CHGAD', 'CHGBA'), there is a change on the legal entity; write this data into a temporary dataset DS_Cust_End_Chng_LE.ds.
If UBA_TYP = 'END' and UBA_RSN_CD IN ('DELLEN', 'DELMTGI'), the legal entity should be expired in EODS; write this data into a temporary dataset DS_Cust_End_Del_LE.ds.
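The three filters above can be sketched as a single classifier (illustrative Python; reason codes are those listed in the step table, and the function name is hypothetical):

```python
def classify_endorsement(rec):
    """Map an endorsement record to the add/change/delete legal-entity
    stream based on UBA_RSN_CD. Returns None for reason codes this job
    does not route. Sketch of the DataStage filter logic."""
    if rec["UBA_TYP"] != "END":
        return None
    rsn = rec["UBA_RSN_CD"]
    if rsn in ("ADDLEN", "ADDMTGI", "ADDOPR"):
        return "ADD"       # -> DS_Cust_End_ADD_LE.ds
    if rsn in ("CHGNM", "CHGAD", "CHGBA"):
        return "CHANGE"    # -> DS_Cust_End_Chng_LE.ds
    if rsn in ("DELLEN", "DELMTGI"):
        return "DELETE"    # -> DS_Cust_End_Del_LE.ds
    return None
```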
Job Parameters:
Step Description
1 Use the following query to read the BUSINESS_ACTION table:
SELECT
BA_ID,
BA_EFF_DT,
BA_SEQ_NBR
FROM BUSINESS_ACTION
WHERE BA_EXP_DT > #Scheduled job run date#
2 Use the aggregator stage to get the maximum BA_SEQ_NBR grouped by BA_ID, BA_EFF_DT.
Group by: BA_ID, BA_EFF_DT
Aggregate function: MAX(BA_SEQ_NBR)
3 Save the above data in a dataset "DS_BusinessAction_Agg.ds".
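The aggregator stage in step 2 behaves like the following (illustrative Python equivalent of GROUP BY with MAX):

```python
def aggregate_max_ba_seq(rows):
    """Compute MAX(BA_SEQ_NBR) grouped by (BA_ID, BA_EFF_DT), mirroring
    the aggregator stage that produces DS_BusinessAction_Agg.ds. Sketch."""
    agg = {}
    for r in rows:
        key = (r["BA_ID"], r["BA_EFF_DT"])
        agg[key] = max(agg.get(key, 0), r["BA_SEQ_NBR"])
    return agg
```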
Job Parameters:
Step Description
1 Use the following query to read the LEGAL_ENTITY table:
SELECT
LGLE_ID,
LGLE_EFF_DT,
LGLE_SYS_ID
FROM LEGAL_ENTITY
2 Save the above data in a dataset "DS_LegalEntity.ds".
Purpose:
The purpose of this job is to transform the input dataset containing the legal entities that are to be added and save
the data into temporary datasets which would be further used for loading into tables.
Job Parameters:
Step Description
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_BAInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ACTION table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_LGLEInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_LERLInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ROLE table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_LEARInsert.ds file.
2 Join this data with the Billing Account data (LEBA_TYP = 'PO') on LGLE_ID; if a match is found, set LEAR_PAYOR_IND = 'Y', else 'N'.
Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_AGREEMENT_ROLE
3 table.
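The payor-indicator derivation in step 2 can be sketched as follows (illustrative Python; the real job uses a DataStage join/lookup stage):

```python
def set_payor_indicator(lear_recs, billing_recs):
    """Flag LEAR records whose LGLE_ID matches a Billing Account record
    (LEBA_TYP = 'PO'): LEAR_PAYOR_IND = 'Y' on a match, else 'N'. Sketch."""
    payor_ids = {r["LGLE_ID"] for r in billing_recs if r.get("LEBA_TYP") == "PO"}
    return [{**r, "LEAR_PAYOR_IND": "Y" if r["LGLE_ID"] in payor_ids else "N"}
            for r in lear_recs]
```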
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_IPNInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into INDIVIDUAL_PERSON_NAME table.
Job Parameters:
5 | G_s_Work_DataSet_Dir | String | Work Directory | /Staging/eods/dst/
Step Description
1 Use dataset stage to read the DS_Cust_BENInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ENTITY table.
Job #33 – Put_LEGAL_CLIENT_BANK_ASSN_LOAD_LE_END_33
Purpose:
The purpose of this job is to read the dataset “DS_CustCBInsert.ds” and populate the CLIENT_BANK_ASSN
table. Bulk Load will be used to load the data into the database.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_CustCBInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into CLIENT_BANK_ASSN table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_CustADDRInsert.ds file.
2 Derive the CANP_CD column as follows: for LEAR_TYP = 'NIN', join the data with the LEAR_TYP = 'CI' stream on LGLE_ID; if a match is found (NIN LGLE_ID = CI LGLE_ID), take BEN_NAME from the CI record (stream LEAR_TYP = 'CI').
3 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_CustLEADInsert.ds file.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ADDRESS_ASSN table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from the DS_Cust_BAInsert.ds and DS_BusinessAction_Agg.ds files.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_End_Del_LE.ds file.
2 Join this data with the Legal Entity Add data on BA_ID; the new records are loaded into temporary datasets.
Purpose:
The purpose of this job is to load BUSINESS_ACTION table for all the legal entities which have been deleted.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cust_End_Del_BAInst.ds
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ACTION table.
Step Description
1 Read the data from the dataset DS_Cust_End_Del_BAInst.ds
Job Parameters:
Step Description
Execute the following query in DB2
SELECT
AGMT_SYS_ID,
AGMT_EFF_DT,
LERL_SYS_ID,
LERL_EFF_DT,
LGLE_SYS_ID,
LGLE_EFF_DT,
LGLE_ID
FROM LEGAL_ENTITY_AGREEMENT_ROLE
1 WHERE LEAR_EXP_DT > #Scheduled run date#
2 Get the data from step 1 and write it into a temporary dataset DS_CustLEARReference.ds.
Purpose:
The purpose of this job is to update the legal-entity tables for all the legal entities which have been deleted. Records would be updated in the following tables: LEGAL_ENTITY_AGREEMENT_ROLE, LEGAL_ENTITY_ROLE, LEGAL_ENTITY_ADDRESS_ASSN.
Job Parameters:
Step Description
1 Read the data from the dataset DS_Cust_End_Del_LE_with_BA.ds
Join the data from step 1 with data from dataset DS_CustLEARReference.ds on AGMT_SYS_ID,
2 AGMT_EFF_DT
Use BUSINESS_ACTION table keys inserted in Put_EODS_CUST_LE_Del_BA_Load_36_02 Job
3 for all the tables below.
3 For the records in step 2, update (expire) the LEGAL_ENTITY_AGREEMENT_ROLE table. The following columns would be updated:
LEAR_EXP_DT
BA_ID
BA_EFF_DT
BA_SEQ_NBR
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
This data should be loaded into a temporary dataset DS_LEAR_Upd.ds. Detailed mapping is found in the mapping document.
4 Update (expire) the LEGAL_ENTITY_ROLE table. The following columns would be updated:
LERL_EXP_DT
BA_ID
BA_EFF_DT
BA_SEQ_NBR
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
This data should be loaded into a temporary dataset DS_LERL_Upd.ds. Detailed mapping is found in the mapping document.
5 Update (expire) ADDRESS table. Following columns would be updated:
ADDR_EXP_DT
BA_ID
BA_EFF_DT
BA_SEQ_NBR
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_LEAR_Upd.ds
Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_AGREEMENT_ROLE
2 table.
Job # – Put_EODS_CUST_LE_Del_LERL_Update_38_03 (New Job)
Purpose:
The purpose of this job is to update the legal-entity tables for all the legal entities which have been deleted. Records would be updated in the following table: LEGAL_ENTITY_ROLE.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_LERL_Upd.ds
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ROLE table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_LEAD_Upd.ds
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ADDRESS_ASSN table.
Job # – Put_EODS_LE_IPN_Del_Update_38_05 (New Job)
Purpose:
The purpose of this job is to update the legal-entity tables for all the legal entities which have been deleted. Records would be updated in the following table: INDIVIDUAL_PERSON_NAME.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_IPN_Upd.ds
2 Use DB2 stage and load all the contents from step 1 into INDIVIDUAL_PERSON_NAME table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_BE_Upd.ds
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ENTITY_NAME table.
Job # – Put_EODS_CUST_LE_ADDR_Del_Update_38_07 (New Job)
Purpose:
The purpose of this job is to update the legal-entity tables for all the legal entities which have been deleted. Records would be updated in the following table: ADDRESS.
Job Parameters:
Step Description
1 Use DB2 stage to read the data from LEGAL_ENTITY_ADDRESS_ASSN
2 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from the DS_Cust_End_Del_BAInst.ds and DS_Cust_End_ADD_LE_With_Ba_Data.ds files.
Step Description
1 Use the dataset stage to read the DS_Cust_End_Chng_LE.ds file.
2 Join this data with DS_CustLEARReference.ds on AGMT_SYS_ID and AGMT_EFF_DT; if new records
are found, filter them into a temporary dataset.
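The join-and-filter step above can be sketched as follows (illustrative Python only; the DataStage join stage operates on the same keys, but the record layouts shown here are assumed):

```python
# Records from the change file that find no match in the reference dataset on
# (AGMT_SYS_ID, AGMT_EFF_DT) are treated as "new" and routed to a temporary
# dataset; matched records go to a second output.
def split_new_records(change_rows, reference_rows):
    ref_keys = {(r["AGMT_SYS_ID"], r["AGMT_EFF_DT"]) for r in reference_rows}
    new, existing = [], []
    for row in change_rows:
        key = (row["AGMT_SYS_ID"], row["AGMT_EFF_DT"])
        (new if key not in ref_keys else existing).append(row)
    return new, existing

changes = [
    {"AGMT_SYS_ID": 10, "AGMT_EFF_DT": "2010-01-01"},
    {"AGMT_SYS_ID": 11, "AGMT_EFF_DT": "2010-02-01"},
]
reference = [{"AGMT_SYS_ID": 10, "AGMT_EFF_DT": "2010-01-01"}]
new_rows, matched_rows = split_new_records(changes, reference)
```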
Step Description
1 Use the dataset stage to read the DS_Cust_End_Chng_LE_Addr_Chng.ds file.
2 Join this data with DS_Cust_End_Del_LE_With_Ba_data on BA_ID; if new records are found, load
them into a temporary dataset.
Step Description
1 Use dataset stage to read the DS_Cust_End_Chng_LE_Addr_Chng_Ba_Inst.ds
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ACTION table.
Step Description
1 Read all the records from the datasets DS_Cust_End_Del_LE_With_Ba_data.ds and
DS_Cust_End_Chg_LE_with_BA.ds.
Step Description
1 Use dataset stage to read the DS_Cust_End_Chng_LE_Addr_Chng_NewBa.ds
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ADDRESS_ASSN table.
Step Description
1 Use the dataset stage to read DS_Cust_End_Chng_LE_Addr_Chng_NewBa.ds and join it with the
updated LEGAL_ENTITY_ADDRESS_ASSN table.
2 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_Addr_Chng_NewBa.ds.
Purpose:
The purpose of this job is to load BUSINESS_ACTION table for all the legal entities which have been changed.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chg_LE_with_BA.ds.
2 Join the data from step 1 with the data from the dataset DS_LGLESysIds.ds on AGMT_SYS_ID and
AGMT_EFF_DT.
3 If the UBA_RSN_CD = ‘CHGAD’, then use ADDR_SYS_ID and ADDR_EFF_DT and update the
ADDRESS table (expire the old records). The following columns would be updated:
ADDR_EXP_DT
BA_ID
BA_EFF_DT
BA_SEQ_NBR
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
If the UBA_RSN_CD = ‘CHGAD’, then use LEAD_SYS_ID and LEAD_EFF_DT and update the
LEGAL_ENTITY_ADDRESS_ASSN table (expire the old records). The following columns would
be updated:
LEAD_EXP_DT
BA_ID
BA_EFF_DT
BA_SEQ_NBR
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Detailed mapping is present in the data mapping document.
4 If the UBA_RSN_CD = ‘CHGNM’ and LGLE_TYP = ‘BS’, then join the input dataset with the
dataset DS_LGLEBenSysIDs.ds on AGMT_SYS_ID and AGMT_EFF_DT and update the
BUSINESS_ENTITY_NAME table (expire the old records). The following columns would be
updated:
BEN_EXP_DT
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
If the UBA_RSN_CD = ‘CHGNM’ and LGLE_TYP = ‘IN’, then join the input dataset with the
dataset DS_LGLEIPNSysIDs.ds on AGMT_SYS_ID and AGMT_EFF_DT and update the
INDIVIDUAL_PERSON_NAME table (expire the old records). The following columns would be
updated:
IPN_EXP_DT
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Detailed mapping is present in the data mapping document.
5 If the UBA_RSN_CD = ‘CHGBA’, then join the input dataset with the dataset DS_LGLESysIds.ds
on AGMT_SYS_ID and AGMT_EFF_DT and update the CLIENT_BANK_ACCOUNT_ASSN table
(expire the old records). The following columns would be updated:
CB_EXP_DT
CB_LOAN_ID
UPDATED_DT
UPDATED_BY
ACT_INSUPD_TS
REC_STUS_CD
Detailed mapping is present in the data mapping document.
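The "expire the old records" pattern used throughout the steps above can be sketched as follows (illustrative Python; the column list follows the step table, while the REC_STUS_CD value and the use of the processing date as the expiry date are assumptions, detailed values being in the mapping document):

```python
from datetime import date, datetime

# Rather than deleting the current row, the update closes it out by setting the
# expiry date, the business-action columns and the audit columns.
def expire_record(row, ba_id, ba_eff_dt, ba_seq_nbr, user):
    row.update(
        ADDR_EXP_DT=date.today().isoformat(),   # assumed: expiry = processing date
        BA_ID=ba_id,
        BA_EFF_DT=ba_eff_dt,
        BA_SEQ_NBR=ba_seq_nbr,
        UPDATED_DT=date.today().isoformat(),
        UPDATED_BY=user,
        ACT_INSUPD_TS=datetime.now().isoformat(),
        REC_STUS_CD="I",                        # assumed: inactive after expiry
    )
    return row

old = {"ADDR_SYS_ID": 7, "ADDR_EXP_DT": "9999-12-31", "REC_STUS_CD": "A"}
expired = expire_record(old, ba_id=100, ba_eff_dt="2010-03-01",
                        ba_seq_nbr=1, user="EODS_BATCH")
```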
Job # – Put_EODS_LE_CHG_ADDR_Load_40_05
Purpose:
The purpose of this job is to load the ADDRESS table for all the legal entities which have an address change.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_Addr_Chng_AddrInst.ds.
2 Use DB2 stage and load all the contents from step 1 into ADDRESS table.
Note: The following Steps 1, 2 & 3 have been deleted.
Step Description
1 Read all the records from the dataset DS_Cust_End_Chg_LE_with_BA.ds.
2 Join the data from step 1 with the data from the dataset DS_LGLESysIds.ds on AGMT_SYS_ID and
AGMT_EFF_DT.
3 If the UBA_RSN_CD = ‘CHGAD’, then we have to insert into the ADDRESS table. Create a system-
generated sequence number, assign it to the ADDR_SYS_ID column, and insert into the ADDRESS table.
Job # – Put_EODS_LE_CHG_ADDR_ASSN_Load_40_06
Purpose:
The purpose of this job is to load the LEGAL_ENTITY_ADDRESS_ASSN table for all the legal entities which have
an address change.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_Addr_Chng_LeadInst.ds.
2 Use DB2 stage and load all the contents from step 1 into LEGAL_ENTITY_ADDRESS_ASSN table.
Job # – Xfm_EODS_Cust_END_LE_Addr_Chg_BA_Merge_40_07
Purpose:
The purpose of this job is to merge the Business Action details from the datasets
DS_Cust_End_Del_LE_With_Ba_data.ds and DS_Cust_End_Chng_LE_Addr_Chng_Ba_Inst.ds and populate the data
into the dataset DS_Cust_End_Chng_LE_Addr_Chng_Ba_Data.ds, because this dataset is used by the succeeding
jobs in a Legal Entity change.
Job Parameters:
Step Description
1 Use the dataset stage to read the data from the DS_Cust_End_Del_LE_With_Ba_data.ds and
DS_Cust_End_Chng_LE_Addr_Chng_Ba_Inst.ds files.
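The merge described in this job's purpose can be sketched as follows (illustrative Python; the design only says "merge", so the de-duplication on BA_ID shown here is an assumption):

```python
# Business-action rows from the delete flow and the address-change insert flow
# are funnelled into one combined dataset, keeping the first row seen per BA_ID.
def merge_ba_datasets(*datasets):
    seen, merged = set(), []
    for ds in datasets:
        for row in ds:
            if row["BA_ID"] not in seen:
                seen.add(row["BA_ID"])
                merged.append(row)
    return merged

del_ba = [{"BA_ID": 1}, {"BA_ID": 2}]          # stand-in for ..._Del_LE_With_Ba_data.ds
addr_chng_ba = [{"BA_ID": 2}, {"BA_ID": 3}]    # stand-in for ..._Addr_Chng_Ba_Inst.ds
ba_data = merge_ba_datasets(del_ba, addr_chng_ba)
```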
Step Description
1 Use the dataset stage to read the DS_Cust_End_Chng_LE_Nm_Chng.ds file.
2 Join this data with DS_Cust_End_Chng_LE_Addr_Chng_Ba_Data.ds on BA_ID; if new records are
found, load them into a temporary dataset.
Step Description
1 Use dataset stage to read the DS_Cust_End_Chng_LE_Name_Chng_Ba_Inst.ds
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ACTION table.
Purpose:
The purpose of this job is to read the data from the DS_Cust_End_Chng_LE_Nm_Chng_NewBa.ds dataset, filter on
an LGLE_TYP of BS or IN, generate the sequences for the INDIVIDUAL_PERSON_NAME and
BUSINESS_ENTITY_NAME tables, and load the data into temporary datasets.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_Nm_Chng_NewBa.ds.
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_Nm_Chng_NewBa.ds.
2 From step 1, filter records where LGLE_TYP is IN, update the INDIVIDUAL_PERSON_NAME table,
and load the data into the dataset DS_CUST_CHNG_IPNUpdt.ds.
3 From step 1, filter records where LGLE_TYP is BS, update the BUSINESS_ENTITY_NAME table,
and load the data into the dataset DS_CUST_CHNG_BENUpdt.ds.
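The routing-by-entity-type step above, together with the surrogate-key generation mentioned in the job purpose, can be sketched as follows (illustrative Python; apart from LGLE_TYP and the dataset roles, the key column names are hypothetical):

```python
import itertools

# Simple in-memory sequence generators standing in for the system-generated
# sequences; IPN_SYS_ID and BEN_SYS_ID are hypothetical key names.
ipn_seq = itertools.count(start=1)
ben_seq = itertools.count(start=1)

def route_by_entity_type(rows):
    """Route 'IN' (individual) rows to the IPN dataset and 'BS' (business)
    rows to the BEN dataset, assigning a surrogate key to each."""
    ds_ipn_updt, ds_ben_updt = [], []
    for row in rows:
        if row["LGLE_TYP"] == "IN":
            row["IPN_SYS_ID"] = next(ipn_seq)
            ds_ipn_updt.append(row)
        elif row["LGLE_TYP"] == "BS":
            row["BEN_SYS_ID"] = next(ben_seq)
            ds_ben_updt.append(row)
    return ds_ipn_updt, ds_ben_updt

rows = [{"LGLE_TYP": "IN"}, {"LGLE_TYP": "BS"}, {"LGLE_TYP": "IN"}]
ipn_rows, ben_rows = route_by_entity_type(rows)
```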
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_CUST_CHNG_BENUpdt.ds
2 Use DB2 stage and update all the contents from step 1 into BUSINESS_ENTITY_NAME table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_CUST_CHNG_IPNUpdt.ds
2 Use DB2 stage and load all the contents from step 1 into INDIVIDUAL_PERSON_NAME table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_CUST_CHNG_BENInsert.ds
2 Use DB2 stage and update all the contents from step 1 into BUSINESS_ENTITY_NAME table.
Job Parameters:
Step Description
1 Use dataset stage to read the data from DS_CUST_CHNG_IPNInsert.ds
2 Use DB2 stage and load all the contents from step 1 into INDIVIDUAL_PERSON_NAME table.
Job Parameters:
Step Description
1 Use the dataset stage to read the data from the DS_Cust_End_Chng_LE_Nm_Chng_Ba_Inst.ds and
DS_Cust_End_Chng_LE_Addr_Chng_Ba_Data.ds files.
Step Description
1 Use the dataset stage to read the DS_Cust_End_Chng_LE_CB_Chng.ds file.
2 Join this data with DS_Cust_End_Chng_LE_Nm_Chng_Ba_Data.ds on BA_ID; if new records are
found, load them into a temporary dataset.
Step Description
1 Read all the records from the dataset DS_Cust_End_Chng_LE_CB_Chng_NewBa.ds.
Step Description
1 Use dataset stage to read the DS_Cust_End_Chng_LE_CB_Chng_Ba_Inst.ds
2 Use DB2 stage and load all the contents from step 1 into BUSINESS_ACTION table.
Job # – Put_EODS_LE_CHG_CB_Update_42_03
Purpose:
The purpose of this job is to update the CLIENT_BANK_ACCOUNT_ASSN table (expire the old records) for all
the legal entities which have been changed.
Job Parameters:
Step Description
1 Use dataset stage to read the DS_Cb_Updt.ds
2 Use DB2 stage and load all the contents from step 1 into CLIENT_BANK_ACCOUNT_ASSN table.
Step Description
1 Use dataset stage to read the DS_CLIENT_BANK_Insert.ds
2 Use DB2 stage and load all the contents from step 1 into CLIENT_BANK_ACCOUNT_ASSN table.
Purpose:
The purpose of this job is to load BUSINESS_ENTITY_NAME or INDIVIDUAL_PERSON_NAME table for all the
legal entities which have a name change.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chg_LE_with_BA.ds.
Purpose:
The purpose of this job is to load CLIENT_BANK_ACCOUNT_ASSN table for all the legal entities which have a
bank account change.
Job Parameters:
Step Description
1 Read all the records from the dataset DS_Cust_End_Chg_LE_with_BA.ds.
Job # 44 - Xfm_EODS_MASTER_ADT_LOG_00
Purpose:
This job logs an audit message into the EODS processing audit table.
The EODS process will insert an entry into the Audit_trail_log table for auditing and validation purposes.
This is a common job that is used to set the value of the CRTE_TS, LAST_UPDT_TS and PROCESS_STATUS fields
in the table.
Job Parameters:
Step Description
1 Insert a log record into the Audit_trail_log table saying “EODS – MASTER file” processing has started.
Populate the Audit_trail_log as follows:
CRTE_TS to “Current System Timestamp”
I_REC_ACTV to ‘Y’
SEQ_FLOW_NM to “Master_Legal_Entity_Data_Load”
PROCESS_STATUS to ‘E’ (Finished)
LAST_UPD_TS to “Current System Timestamp”
For more details about the job refer to the common flow section.
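The audit insert above can be sketched as follows (illustrative Python, with SQLite standing in for the audit table; the column names follow the step description, everything else is an assumption):

```python
import sqlite3
from datetime import datetime

# Insert one audit row with creation/update timestamps, the active-record flag,
# the sequence flow name and the process status.
def log_audit_entry(conn, flow_name, status):
    now = datetime.now().isoformat()
    conn.execute(
        "INSERT INTO Audit_trail_log "
        "(CRTE_TS, LAST_UPD_TS, I_REC_ACTV, SEQ_FLOW_NM, PROCESS_STATUS) "
        "VALUES (?, ?, 'Y', ?, ?)",
        (now, now, flow_name, status),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Audit_trail_log (CRTE_TS TEXT, LAST_UPD_TS TEXT,"
             " I_REC_ACTV TEXT, SEQ_FLOW_NM TEXT, PROCESS_STATUS TEXT)")
log_audit_entry(conn, "Master_Legal_Entity_Data_Load", "E")
row = conn.execute("SELECT SEQ_FLOW_NM, I_REC_ACTV FROM Audit_trail_log").fetchone()
```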
Description:
1. Copy <Src_Loc>\<FILE_NM> to <Wrk_folder_Loc>\<FILE_NM>;
2. Move <Src_Loc>\<FILE_NM> to <Archv_folder_Loc>\<FILE_NM>;
3. Remove <Src_Loc>\<FILE_NM>;
Description:
1. Process all the files in the work folder, ordered by created date.
2. Call the DataStage sequence which processes the Master Customer data.
3. If any error occurs during this process, send a mail to the application support team mentioning the reason
for the job abort.
6 K_REC Number of records Integer No No
7.2.3 Restartability
Master Customer processing will not support automated restartability, but checkpoints would be
implemented at every job. If any job fails for any reason, manual intervention is needed to resolve the
issue and restart the sequence, which would then skip all the jobs that executed successfully in the
previous run and start from the job which aborted. If a job has failed due to a data problem, the
corrected files should be loaded into the source directory and the whole sequence should be reset so that
the new data is processed from the beginning.
9 Environment Details
Please refer “EODS Batch Common Framework Tasks – Section – Environment Topology” detail design
document.
10 Logging/Auditing
EODS logs a complete trace of the master file legal entity process execution in audit tables.
All input, output and master files are archived for a period of 45 days, as per the NFRs, for audit
purposes.
11 Performance Considerations
The projected volume of customer data is less than 20,000 records a day.
Since the volumes are small, the Master Customer jobs for EODS will follow the default configurations.
12 Reporting
N/A
13 Security
File Transfer
Master customer data files are transferred from the PI Client to the DataStage AIX servers using the SFTP
process.
SFTP typically relies upon SSH, a protocol that provides secure communications using a key-based
encryption scheme. SSH is a proven secure method to transmit files: it fully encrypts the file transfer process
from start to finish, with limited threat exposure for the user. Because keys are used, it is easy to set up a
scripted SFTP session that transmits files as part of an automated process.
14 Deployment Details
Please refer “EODS Batch Common Framework Tasks – Section - Version Control” detail design document.
Worksheet in EODS
Master Customer Data Processing ETL Design Specification v1.1.xls
15 Scheduling
The ESPX scheduling tool would be used for scheduling DataStage jobs, sequences and UNIX scripts so as to
control and monitor the execution of the flow. ESPX is a mainframe-based scheduling tool.
For more details please refer “EODS Batch Job Scheduling R1A”.
16 SLA
For details please refer “EODS Batch Job Scheduling R1A”.
17 Special Considerations
N/A
19 Appendix
Acronym/Abbreviation Description
POLICY ADMIN SYSTEMS Legacy (HOACT, AUTOACT, FLASH, …) and EPAS applications that handle
policy transactions in FFIC
HOACT Home Owners Automated Computer Transactions
AUTOACT Automobile Automated Computer Transactions
PVR Premium Verification and Recording System
FLASH Dwelling Fire System
EPAS Enterprise Policy Admin System
BCWS Billing and Collections Workstation / System. The current functional billing
system for Direct Billing
SAP - FSCD SAP for Insurance Collections/Disbursements (FS-CD) has the task of
collecting premiums for running policies and of disbursing benefits.
Currently used for billing operations in FFIC
NWPA Nationwide Premium Accounting. Currently used for Agency Billing.
LEID Legal Entity ID
19.3 References
20 Approvals