----------
Source:
-----------
ID NAME
1 100
2 Ramesh
3 India
1 101
2 Rakesh
3 India
1 102
2 John
3 USA
----------
Target:
---------
EID ENAME COUNTRY
100 Ramesh INDIA
101 Rakesh INDIA
102 John USA
Can you please explain this in detail? I have been trying but haven't had any successful results. Kindly help me out.
An Expression and a Filter transformation are sufficient to solve this problem. In the Expression, keep a variable port called OLD_NAME_V that stores the value of the NAME field from the previous row:
OLD_NAME_V = CURR_NAME_V
CURR_NAME_V = NAME
Have another variable port for the concatenated name:
OLD_CONC_NAME_V = CONC_NAME_V
CONC_NAME_V = OLD_CONC_NAME_V || NAME
Also have an IS_CHANGE flag which is normally 0 and is set to 1 when the value of ID is 1. Pass all rows to a Filter transformation whose condition is IS_CHANGE = 1. This should solve your question.
You would have to take care of a couple of corner cases, such as the first record.
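The variable-port logic above can be sketched outside Informatica as ordinary code (a minimal Python simulation, not Informatica expression syntax; it assumes ID 1 carries the employee id, 2 the name, and 3 the country):

```python
# Simulating the row-to-column pivot: flush the accumulated group
# whenever a new ID 1 row arrives (the IS_CHANGE condition), and
# flush once more at the end for the last group (the corner case).
rows = [
    (1, "100"), (2, "Ramesh"), (3, "India"),
    (1, "101"), (2, "Rakesh"), (3, "India"),
    (1, "102"), (2, "John"),   (3, "USA"),
]

target = []
current = {}
for field_id, value in rows:
    if field_id == 1 and current:      # IS_CHANGE: a new group starts
        target.append((current[1], current[2], current[3]))
        current = {}
    current[field_id] = value
if current:                            # corner case: flush the last group
    target.append((current[1], current[2], current[3]))

print(target)
# [('100', 'Ramesh', 'India'), ('101', 'Rakesh', 'India'), ('102', 'John', 'USA')]
```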
Hi, why is the Union transformation active? Please explain.
An active transformation can do any of the following:
Change the number of rows that pass through the transformation. For example, the Filter transformation is active because it removes rows that do not meet the filter condition. All multi-group transformations are active because they might change the number of rows that pass through them; the Union transformation is a multi-group transformation, hence it is active.
Change the transaction boundary. For example, the Transaction Control transformation is active because it defines a commit or roll back of the transaction based on an expression evaluated for each row.
Change the row type. For example, the Update Strategy transformation is active because it flags rows for insert, delete, update, or reject.
------
The Union transformation is used to merge data from different sources into a target. Since the number of rows passing through can change in the ETL, it is an active transformation.
Error Handling
Can anyone explain error handling in Informatica?
Hi Informaticam,
For error handling, you can either build your own solution or use the error handling provided by Informatica.
In the latter case, you must set these properties in the Config Object tab of the session:
** Error Log Type: select whether to capture errors in a database or in a file. Depending on your choice, set the related properties (Error Log DB Connection, Error Log Table Name Prefix, Error Log File Directory, Error Log File Name).
Best Regards,
Chris
Hi Chris,
Regarding the option you mentioned of building your own solution, can you please explain it with a scenario?
Thanks,
Informaticam
Just read the Informatica help guide topics "Row Error Logging" and "Error Handling Settings"; they will help you.
Difference between Informatica 8 and 9
Informatica 9:
1) Has the Data Masking transformation
2) Has the Unstructured Data transformation
3) Can access two tables with the same name (e.g. the same table name, one in uppercase and one in lowercase)
4) Can process tables with a "." in their table names
The major difference is the Lookup transformation, which is active (returns all the matched rows) in Informatica 9.0.
Thanks in advance
Priya
First of all, the two flat files should reside on the local machine where the Informatica server resides. In the Source Analyzer, import the two source definitions as flat files. Then, with the help of the flat file wizard, configure the flat file settings and properties to match the required data types and other configurations.
After you have imported the flat file source definitions, go to the source analyzer and import
the oracle target table definition.
Use a joiner transformation to join the two flat files and load them into the oracle target table.
When you create a session to run this mapping, make sure that you specify the correct name
and directory where the source flat files are residing.
If you have 6 records in File1 and 5 records in File2, you will only get 11 records in the target table if you perform a full outer join and none of the records in the two flat files match each other. With an equi-join, the maximum number of records you may get is 6.
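The record counts above can be checked with a small sketch (plain Python standing in for the Joiner; the key values are made up and assumed not to overlap):

```python
# Row counts from different join types on two small files.
file1 = ["k1", "k2", "k3", "k4", "k5", "k6"]   # 6 records
file2 = ["k7", "k8", "k9", "k10", "k11"]       # 5 records, no keys in common

inner = [k for k in file1 if k in file2]       # equi-join: only matching keys
full_outer = set(file1) | set(file2)           # full outer: unmatched rows kept

print(len(inner))       # 0 -> the equi-join returns nothing here
print(len(full_outer))  # 11 -> every unmatched row from both sides appears
```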
GOOD LUCK,
If File1 and File2 have the same layout, you can use a filelist comprised of these two file names. This will allow your mapping to read File1 and then File2 and write both to the Oracle table. The filelist name can be kept in a parameter file.
You can use the indirect flat file method. Create a flat file containing the path and file name of each file you need to combine.
Import any one flat file in the Source Analyzer. Then, in the session's files and directories settings, change the source file type to Indirect and give the name of the indirect flat file. Start your session; it will give the result you wanted.
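Conceptually, the indirect file is just a text file listing one source-file path per line; the reader then processes each listed file in turn. A rough Python simulation (the file names below are made-up examples):

```python
# Build two small source files and a filelist pointing at them,
# then read every row of every file named in the filelist.
from pathlib import Path

Path("file1.csv").write_text("a,1\nb,2\n")
Path("file2.csv").write_text("c,3\n")
Path("filelist.txt").write_text("file1.csv\nfile2.csv\n")

rows = []
for name in Path("filelist.txt").read_text().splitlines():
    rows.extend(Path(name).read_text().splitlines())

print(rows)  # ['a,1', 'b,2', 'c,3']
```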
Nandhini, I was able to do the scenario you described. Now I have a new requirement on top of it: each file has a date tagged to its filename, and when I load multiple files into the target I want to capture the date from the filename for all the records in that file.
Here is an example. Flat file names in my Informatica source directory:
FF_07_11_2008.csv
FF_07_12_2008.csv
Content of FF_07_11_2008.csv:
Name Age
User1 26
User2 27
Content of FF_07_12_2008.csv:
Name Age
User3 28
User4 30
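One way to parse the date out of names like FF_07_11_2008.csv is sketched below in Python. Inside Informatica you would apply equivalent string logic in an Expression transformation, fed by the currently-processed-file-name port if your version supports enabling it on the flat file source (verify the exact option name for your release). The FF_MM_DD_YYYY pattern is assumed from the example filenames:

```python
# Extract the tagged date from a filename of the assumed form
# FF_MM_DD_YYYY.csv and return it as a date object.
import re
from datetime import date

def file_date(filename: str) -> date:
    m = re.search(r"FF_(\d{2})_(\d{2})_(\d{4})\.csv$", filename)
    mm, dd, yyyy = m.groups()
    return date(int(yyyy), int(mm), int(dd))

print(file_date("FF_07_11_2008.csv"))  # 2008-07-11
```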
Thanks
If a flat file is given as the source to a mapping and the target data has to be distinct rows, how do we achieve this in Informatica mappings?
There is no way to do a SELECT DISTINCT directly from a flat file; however you can use the
"Distinct" option in a Sorter transformation.
Another option is to create a mapping that loads the flat file data into a staging table. The next mapping can then do a SELECT DISTINCT from this relational staging table.
Or use an Aggregator transformation with a group-by on all the ports.
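What the Sorter's Distinct option (or an Aggregator grouping on all ports) effectively does can be sketched as: sort the rows, then keep only the first of each run of duplicates (a Python illustration with made-up rows):

```python
# Sort, then drop any row equal to the previous one.
rows = [("Ram", 6), ("Ram", 6), ("John", 2)]

distinct = []
for row in sorted(rows):
    if not distinct or row != distinct[-1]:
        distinct.append(row)

print(distinct)  # [('John', 2), ('Ram', 6)]
```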
When loading data directly from DB tables into a target designed without keys or constraints, perform a bulk load; you will get results faster.
Likewise, when both source and target are flat files, keeping them on the local server directory instead of loading over FTP will also bring results faster.
How do I load data from one flat file into another flat file using Informatica? Please help me out with this as soon as possible.
@kiran: It all depends on the scenario. If you have a well-tuned DB and a partitioned table structure, then you can load from one table to another much quicker than from a flat file sitting on the network; you can use pushdown optimization in that scenario, which helps speed up the load. On the contrary, if the flat file is sitting on the Informatica box, it would load faster than the table; you can use BULK loading and finish the task quicker. So in the end you have to look at the scenario and use trial and error. Hope it helps.
@shiva: It is simple: you define both the source and the target as flat files.
There are two tables from two different SQL Server databases in my source, and I need to populate the updated records into the target. My target contains two flat files (customer and product): updated customer records should go to the customer flat file and product records to the product flat file. How do I start my mapping? I don't know whether there is a matching column in the source or not; if there is no matching column, how do I join the source tables? Could anyone advise me on this?
If you don't have a matching column in your source tables, you can create dummy ports and join on those.
Otherwise, you can join them in a single Source Qualifier by specifying the schema of the tables.
Note: I cannot truncate or delete all the records in the target; it has some child tables, and I have to delete only the records that are not in the source.
This is going to be a cascading delete, so deleting a parent record will delete all the corresponding records in the child tables.
Sree,
You may have to develop a separate pipeline in your mapping. Once your insert and delete operations are over, in the second pipeline use a Source Qualifier for the target and, using the business key, create a SQL override with the MINUS operator, which will give you the keys not present in the source. With those keys you can identify the target keys via a lookup, and then the child keys; using an Update Strategy, delete the corresponding rows in the child tables and in the parent.
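What the MINUS step computes can be illustrated with sets (a Python sketch; the key values are invented):

```python
# SELECT key FROM target MINUS SELECT key FROM source:
# the keys present in the target but missing from the source,
# i.e. the rows that should be deleted.
target_keys = {100, 101, 102, 103}
source_keys = {100, 101}

keys_to_delete = target_keys - source_keys
print(sorted(keys_to_delete))  # [102, 103]
```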
Question:
Hi,
I need to delete records in the target table:
Source:
name source_value (not a key)
Ram 6
Ram 6
Target:
source_value (not a key)
6
6
I need to delete from the target table, but my target does not have any primary keys. Could you please help me with how to delete the records (a physical delete) from the target table?
Thx,
Ramesh.
Can you be more specific about how the SQL MINUS operation is performed?
Thanks,
AA - Lansing, MI.