
How to perform Row Data Concatenation?

----------
Source:
-----------
ID NAME
1 100
2 Ramesh
3 India
1 101
2 Rakesh
3 India
1 102
2 John
3 USA
----------
Target:
---------
EID ENAME COUNTRY
100 Ramesh INDIA
101 Rakesh INDIA
102 John USA

How do I solve the above scenario?

Kindly help me out.


Thanks in Advance.

Use an Expression transformation to concatenate the values, then use an Aggregator.

Basically, we use the Aggregator here to collapse multiple rows into a single row (rows into columns).

Can you please explain in detail? I am trying, but I haven't had any success. Kindly help me out.

Thanks & Regards,


Manohar Pattem.

An Expression and a Filter transformation are sufficient to solve this problem. You could have a variable port called OLD_NAME_V which stores the value of the NAME field from the previous row:
ex - OLD_NAME_V = CURR_NAME_V
CURR_NAME_V = NAME
Have another pair of variable ports for the concatenated name:
OLD_CONC_NAME_V = CONC_NAME_V
CONC_NAME_V = IIF(ID = 1, NAME, OLD_CONC_NAME_V || NAME)
(the IIF resets the concatenation whenever a new group starts)

Also have an IS_CHANGE flag which is normally 0 and is set to 1 when the value of ID is 1. Pass all rows to a Filter transformation with the condition IS_CHANGE = 1 and output OLD_CONC_NAME_V, which at that point holds the previous group's complete concatenation. This should solve your question. You would have to take care of a couple of corner cases, like the first record and flushing the last group.
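To make the evaluation order of those variable ports concrete, here is a minimal sketch of the same logic in plain Python, outside Informatica. The variable names mirror the ports above; a pipe delimiter is added (an assumption) so the concatenated value can later be split into EID, ENAME, and COUNTRY, and ID = 1 is assumed to mark the start of each group, as in the source example.

# Sketch of the Expression + Filter logic; each input row is (ID, NAME).
rows = [(1, "100"), (2, "Ramesh"), (3, "India"),
        (1, "101"), (2, "Rakesh"), (3, "India"),
        (1, "102"), (2, "John"), (3, "USA")]

conc_name_v = ""   # CONC_NAME_V: running concatenation for the current group
results = []       # rows that pass the Filter (IS_CHANGE = 1)

for id_, name in rows:
    if id_ == 1:                # IS_CHANGE = 1: a new group starts here
        if conc_name_v:         # corner case: nothing to emit before the first record
            results.append(conc_name_v)   # emit the previous group's concatenation
        conc_name_v = name      # reset, like the IIF above
    else:
        conc_name_v = conc_name_v + "|" + name   # OLD_CONC_NAME_V || NAME

results.append(conc_name_v)    # corner case: flush the final group

print(results)   # ['100|Ramesh|India', '101|Rakesh|India', '102|John|USA']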

Union active or passive

Hi,
why is the Union transformation active?
Please explain.

Read the following, taken from the Informatica help guide:


Active Transformations
An active transformation can perform any of the following actions:

Change the number of rows that pass through the transformation. For example, the Filter
transformation is active because it removes rows that do not meet the filter condition.
All multi-group transformations are active because they might change the number of
rows that pass through the transformation; the Union transformation is a multi-group
transformation, and hence it is active.

The other conditions for an active transformation are:

Change the transaction boundary. For example, the Transaction Control transformation is active
because it defines a commit or roll back transaction based on an expression evaluated for each
row.

Change the row type. For example, the Update Strategy transformation is active because it flags
rows for insert, delete, update, or reject.
------
This transformation is used to merge data from different sources into a target. As the number of
rows changes in the ETL flow, it is an active transformation.

It is the same as UNION ALL in SQL: when you union two pipelines, the row count increases.
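As a quick analogy outside Informatica (sample rows are made up; this is an illustration of the row-count point, not Informatica code). Note that the Union transformation does not remove duplicates, so it corresponds to UNION ALL rather than plain UNION:

# Union transformation ~ SQL UNION ALL: pipelines are concatenated,
# duplicates are kept, so the row count coming out changes.
pipeline_a = [("101", "Rakesh"), ("102", "John")]
pipeline_b = [("102", "John"), ("103", "Priya")]

union_all = pipeline_a + pipeline_b              # 4 rows out of 2 + 2 in
print(len(union_all))                            # 4

# A plain SQL UNION would also remove duplicates:
print(len(set(pipeline_a) | set(pipeline_b)))    # 3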

performance tuning

What is performance tuning in Informatica?

Optimizing the mapping (transformations, etc.).

Tuning

Fine-tuning the mappings you have created is performance tuning.


Check the sources, targets, and mappings for any performance-related issues.
Fixing those issues is performance tuning.

Error Handling
Can anyone explain error handling in Informatica?

Hi Informaticam,

For error handling, you can either build your own solution or use the error handling
provided by Informatica.

In the latter case, you must set three properties on the Config Object tab of the session:

** Error Log Type: select whether to capture errors in a database or in a file. According
to your choice, you then set the related properties (Error Log DB Connection, Error Log Table
Name Prefix, Error Log File Directory, Error Log File Name).

** Log Row Data

** Log Source Row Data.

Best Regards,

Chris

Hi Chris,

Regarding the first option you mentioned (building your own solution), can you please explain it
with a scenario?

That would be helpful.

thanks
Informaticam

Just read the Informatica help guide topics "Row Error Logging" and "Error Handling
Settings"; they will help you.

Difference between Informatica 8 and 9

Can anyone share the difference between Informatica 8 and 9?

Informatica 9:
1) Has a Data Masking transformation
2) Has an Unstructured Data transformation
3) Can access two tables with the same name (e.g., the same table name, but one in uppercase
and one in lowercase)
4) Can process tables with a period (.) in their names

The major difference is the Lookup transformation, which can be configured as active (returning
all matching rows) in Informatica 9.0.

Combining data of two similar flat files


Can anybody tell me how to combine two flat files so that the data from both of them is loaded
into an Oracle target? All the sources and the target have the same format.
For example, I have 6 records in File1 and 5 records in File2,
so I want 11 records in my target.

Thanks in advance
Priya

First of all, the two flat files should reside on the machine where the Informatica server
resides. In the Source Analyzer, import the two source definitions as flat files. Then, with the
help of the Flat File Wizard, configure the flat file settings and properties to match the
required data types and other configurations.
After you have imported the flat file source definitions, import the Oracle target table
definition in the Target Designer.
Use a Joiner transformation to join the two flat files and load them into the Oracle target table.
When you create a session to run this mapping, make sure that you specify the correct names
and directory where the source flat files reside.

Note that if you have 6 records in File1 and 5 records in File2, you will not get 11 records in the
target table unless you perform a full outer join and none of the records in the two flat files
match each other. If you perform an equi-join, you will get only the rows that match, which is
not what you want here.

GOOD LUCK,
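To see why the Joiner row counts work out that way, here is a small Python sketch with hypothetical key values, comparing an equi-join with a full outer join where nothing matches:

# File1 has 6 records, File2 has 5; assume no join keys overlap.
file1_keys = [1, 2, 3, 4, 5, 6]
file2_keys = [7, 8, 9, 10, 11]

equi_join = [(k, k) for k in file1_keys if k in file2_keys]
print(len(equi_join))      # 0 -- an equi-join returns only matching rows

full_outer = (equi_join
              + [(k, None) for k in file1_keys if k not in file2_keys]
              + [(None, k) for k in file2_keys if k not in file1_keys])
print(len(full_outer))     # 11 -- every unmatched row survives, padded with NULLs

In other words, a Joiner yields 11 rows only by accident; appending the files with a Union transformation or a filelist, as suggested below, is the natural fit.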

If File1 and File2 have the same layout, you can use a "filelist" made up of these two file
names. This will allow your mapping to read File1 and then File2 and write them to the Oracle
table. The filelist name can be supplied through a parameter file.

Use a Union transform

You can use the indirect flat file method. Create a flat file that lists the path and file name of
each of the files you need to combine.

Import any one of the flat files in the Source Analyzer. Then, in the session's files and directory
settings, change the source file type to Indirect and give the name of the indirect file. Then run
your session; it gives the result you wanted.
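Conceptually, the Integration Service just walks through the file names listed in the indirect file and reads each one in turn. A minimal Python sketch of that behavior (the list file name and paths are assumptions):

# filelist.txt is the indirect file: one source file per line, e.g.
#   /infa/SrcFiles/File1.dat
#   /infa/SrcFiles/File2.dat
with open("filelist.txt") as filelist:
    source_files = [line.strip() for line in filelist if line.strip()]

target_rows = []
for path in source_files:          # File1 is read first, then File2
    with open(path) as src:
        target_rows.extend(src.readlines())

print(len(target_rows))            # 6 rows + 5 rows = 11 rows for the target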

Nandhini, I was able to implement the scenario you described. I have a new requirement on top
of this: each file has a date tagged onto its file name. When I load multiple files to the target, I
also want to capture the date tagged onto the file name for all the records in each file.
Here is an example. These are the flat file names I have in my Informatica source directory:
FF_07_11_2008.csv
FF_07_12_2008.csv

The file format of both flat files is the same.

Content in FF_07_11_2008.csv
Name Age
User1 26
User2 27

Content in FF_07_12_2008.csv
Name Age
User3 28
User4 30

When I load these, I would need to get this in the target:


Name Age Date
User1 26 07_11_2008
User2 27 07_11_2008
User3 28 07_12_2008
User4 30 07_12_2008

Thanks

Use a file parser, e.g., a C# string reader and writer.


Tanvtech.com
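Two hedged options for capturing the file-name date. Inside Informatica, the flat file source definition has an "Add Currently Processed Flat File Name Port" option, and an Expression transformation can then cut the date out of that port with SUBSTR. Outside Informatica, the pre-parsing idea suggested above looks roughly like this in Python instead of C# (the FF_MM_DD_YYYY.csv pattern and file names come from the example; the staged output file name is an assumption, and the .csv files are assumed to be comma-separated):

import csv
import glob
import os

# Tag every record with the date embedded in its file name,
# e.g. FF_07_11_2008.csv -> 07_11_2008
with open("target_with_date.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Name", "Age", "Date"])
    for path in sorted(glob.glob("FF_??_??_????.csv")):
        file_date = os.path.basename(path)[3:-4]   # strip "FF_" and ".csv"
        with open(path, newline="") as src:
            reader = csv.reader(src)
            next(reader)                           # skip the "Name,Age" header row
            for name, age in reader:
                writer.writerow([name, age, file_date])

The staged target_with_date.csv can then be loaded with an ordinary flat file source that already carries the Date column.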

If a flat file is given as the source to a mapping and the target data has to contain distinct rows,
how do we achieve this in an Informatica mapping?

There is no way to do a SELECT DISTINCT directly from a flat file; however, you can use the
"Distinct" option in a Sorter transformation.

Another alternate option is to create a mapping that loads the flat file data to a staging table.
The next mapping would be able to do a SELECT DISTINCT from this relational staging table.

Or use an Aggregator and group by all ports.
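All three suggestions boil down to the same dedup logic; here is a small Python sketch of it (sample rows are made up):

# Rows as read from the flat file, duplicates included.
rows = [("Ramesh", 100), ("Ramesh", 100), ("Rakesh", 101)]

# Sorter with "Distinct", or Aggregator grouping by all ports:
# keep exactly one copy of each complete row.
seen = set()
distinct_rows = []
for row in rows:
    if row not in seen:        # compare on every column, like grouping by every port
        seen.add(row)
        distinct_rows.append(row)

print(distinct_rows)           # [('Ramesh', 100), ('Rakesh', 101)]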

Which is faster in Informatica: flat file to database, or database to database?

Which load is faster: loading data from a flat file into an Oracle schema,
or from one Oracle table into another Oracle table?

Flat file to Oracle would be faster.

In addition to the above answer, a couple of scenarios support this:

When loading data directly from database tables into a target designed without keys or
constraints, perform a bulk load and you will get results faster.

Likewise, when both the source and target flat files sit in a local server directory and are read
from there instead of over FTP, you will also get results faster.
How do I load data from one flat file into another flat file using Informatica?
Please help me out with this as soon as possible.

@kiran: It all depends on the scenario. If you have a well-tuned DB and a partitioned table
structure, then you can load from one table to another much quicker than from a flat file
sitting on the network; you can use pushdown optimization in that scenario, which helps
speed up the load. On the other hand, if the flat file is sitting on the Informatica box, then it
would load faster than the table; you can use BULK loading and finish the task quicker. So in
the end you have to look at the scenario and use trial and error. Hope it helps.

@shiva: It is simple: you define both the source and the target as flat files!

I have two different tables from two different SQL Server databases in my source, and I
need to populate the updated records into the target. My target consists of two flat files
(customer and product): I need to write updated customer records to the customer flat file
and updated product records to the product flat file. How do I start my mapping? I don't know
whether there is a matching column in the source or not. If there is no matching column, how
do I join the source tables? Could anyone advise me on this?

If you don't have a matching column in your source tables, you can create dummy ports and
join on those.

Or else you can join them in a single Source Qualifier by specifying the schema of those tables.

Insert, update, and delete on Target


I have a source and a target, and I need to insert, update, and delete in the target based on the
source.
For this I made a mapping with a dynamic Lookup and two Update Strategy transformations,
one for insert and the other for update.
This is working fine for inserts and updates; now I need to delete the records in the target
which are not in the source.

Note: I cannot truncate or delete all the records in the target, because it has child tables; I
have to delete only the records that are not in the source.
This is going to be a cascade delete, so deleting a parent record will also delete all the
corresponding records in the child tables.

This is my current mapping, which is working for insert and update:

Source ---> Source Qualifier ---> Dynamic Lookup ---> Router ---> Update Strategy (insert) ---> Target
                                                             ---> Update Strategy (update) ---> Target

So now I need the delete logic; how can I implement it?

Sree,

You may have to develop a separate pipeline in your mapping. Once your insert and update
operations are over, in the second pipeline use a Source Qualifier against the target and, using
the business key, create a SQL override with the MINUS operator, which will give you the keys
present in the target but not in the source. With those keys you can identify the target rows via
a Lookup, and then the child keys; using an Update Strategy, delete the corresponding rows in
the child tables and then in the parent.
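To make the MINUS step concrete: the SQL override in the target-side Source Qualifier returns the business keys that exist in the target but not in the source, which is a plain set difference. A hedged sketch (the table and column names are assumptions):

# SQL override in the second pipeline's Source Qualifier (Oracle syntax):
#   SELECT business_key FROM target_table
#   MINUS
#   SELECT business_key FROM source_table
# MINUS is a set difference; the same logic in Python:
source_keys = {100, 101, 102}
target_keys = {100, 101, 102, 103, 104}

keys_to_delete = target_keys - source_keys   # in the target, missing from the source
print(sorted(keys_to_delete))                # [103, 104]

# Each of these keys is then flagged DD_DELETE in the Update Strategy,
# deleting the child rows first and the parent row afterwards.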

Question:
Hi,
I need to delete records in a target table:
Source:
name  source_value (not a key)
Ram   6
Ram   6
Target:
source_value (not a key)
6
6
I need to delete rows from the target table, but my target does not have any primary keys,
so could you please help me with how I can delete the records (a physical delete) in the target
table?

Thx,
Ramesh.

Yes, the suggestions above are useful.

Can you be specific about how the SQL MINUS operation is performed?

Thanks,
AA - Lansing, MI.
