1. What is the difference between a static cache and a dynamic cache?
By default, the lookup cache is static and does not change during the session: when you insert a new row, the server checks the cache and writes to the target, but not to the cache. With a dynamic cache, the Informatica Server inserts or updates rows in the cache during the session: when you insert a new row, it checks the lookup cache to see whether the row exists, and if not, inserts it into both the target and the cache. When you cache the target table as the lookup, you can look up values in the target and insert them if they do not exist, or update them if they do.
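The insert-if-absent behavior of a dynamic cache can be sketched in Python (a toy model, not Informatica's implementation): the cache is keyed on the lookup condition, rows missing from it go into both the cache and the target, and rows already present update the cache only.

```python
# Toy model of a dynamic lookup cache: insert-if-absent, else update in place.
def load_with_dynamic_cache(rows, cache, target):
    for key, value in rows:
        if key not in cache:             # not in cache -> new row
            cache[key] = value           # insert into the cache...
            target.append((key, value))  # ...and into the target
        else:
            cache[key] = value           # already cached -> update, no re-insert

cache, target = {}, []
load_with_dynamic_cache([(1, "a"), (2, "b"), (1, "c")], cache, target)
print(target)  # [(1, 'a'), (2, 'b')] -- the duplicate key 1 is not re-inserted
print(cache)   # {1: 'c', 2: 'b'}
```

Note how the duplicate key reaches the target only once, which is exactly why a dynamic lookup is used to de-duplicate loads.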
2. Which transformation should we use to normalize COBOL and relational sources?
The Normalizer transformation. When we drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
3. What are the join types in the Joiner transformation?
Normal, Master Outer, Detail Outer, Full Outer
4. In which conditions can we not use the Joiner transformation (limitations of the Joiner transformation)?
We cannot use a Joiner transformation when any of the following is true:
1. Both input pipelines originate from the same Source Qualifier transformation.
2. Both input pipelines originate from the same Normalizer transformation.
3. Both input pipelines originate from the same Joiner transformation.
4. Either input pipeline contains an Update Strategy transformation.
5. We connect a Sequence Generator transformation directly before the Joiner transformation.
When using sorted input, perform these tasks before configuring the Joiner: configure the transformation to use sorted data, and define the join condition to receive sorted data in the same order as the sort origin.
5. What is the Lookup transformation?
Lookup is a passive transformation used to look up data in a flat file or a relational table.
6. What are the differences between the Joiner transformation and the Source Qualifier transformation?
1. A Source Qualifier join operates only on relational sources within the same schema; a Joiner can join heterogeneous sources, or relational sources in different schemas.
2. A Source Qualifier requires at least one matching column to perform a join; a Joiner joins based on matching ports.
3. Additionally, a Joiner requires two separate input pipelines and should not have an Update Strategy or Sequence Generator directly upstream (this is no longer true from Informatica 7.2).
7. Why use the Lookup transformation?
It is used to look up data in a relational table or view; from Informatica 7.1, a flat file can be used as well. Generally, a lookup is used to perform one of the following tasks:
1. Get a related value.
2. Perform a calculation.
3. Update a slowly changing dimension table.
4. Check whether a record already exists in the table.
8. How can you improve session performance in the Aggregator transformation?
Use the Sorted Input option: the Aggregator normally reduces performance because it uses caches, and sorted input lets it process each group as it arrives instead of caching all of them. Incremental Aggregation also improves performance.
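Why sorted input helps can be illustrated with a small Python model (a sketch, not Informatica internals): with input sorted on the group key, an aggregator can emit each group as soon as the key changes, holding only one group in memory at a time.

```python
# With input sorted on the group key, only the current group is held in memory.
def aggregate_sorted(rows):
    results, current_key, total = [], None, 0
    for key, amount in rows:
        if current_key is not None and key != current_key:
            results.append((current_key, total))  # key changed: group complete
            total = 0
        current_key = key
        total += amount
    if current_key is not None:
        results.append((current_key, total))      # emit the final group
    return results

print(aggregate_sorted([("a", 1), ("a", 2), ("b", 5)]))  # [('a', 3), ('b', 5)]
```

An unsorted aggregator, by contrast, must cache every group until the last input row has been read.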
9. What is meant by lookup caches?
The session reads the rows of the reference table or file into a local buffer, and each row received from the upstream transformation is matched against that buffer. The Informatica server builds the cache in memory when it processes the first row of data in a cached Lookup transformation, allocating memory based on the amount you configure in the transformation or session properties, and then queries the cache for each row that enters the transformation. The server creates index and data cache files in the lookup cache directory, using the server code page: the index cache stores condition values and the data cache stores output values.
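The split between condition values (index cache) and output values (data cache) can be pictured as a dictionary keyed on the lookup condition (a toy sketch with made-up employee data, not the actual file-based caches):

```python
# Toy lookup cache: condition values act as the index, output values as the data.
reference = [(101, "Smith"), (102, "Jones")]  # (emp_id, emp_name) reference table

# Built when the first row reaches the Lookup transformation.
cache = {emp_id: name for emp_id, name in reference}

def lookup(emp_id, default=None):
    # Probe the condition value; return the cached output value on a match.
    return cache.get(emp_id, default)

print(lookup(101))  # Smith
print(lookup(999))  # None -> no match, the default value is returned
```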
10. What is the Source Qualifier transformation?
The Source Qualifier is an active transformation. It represents the records that the Informatica server reads when it runs a session; when we add a relational or flat file source definition to a mapping, we need to connect it to a Source Qualifier transformation. It can perform the following tasks: join data from the same source database, filter rows as PowerCenter reads the source data, perform an outer join, and select only distinct values from the source. In a Source Qualifier, a user can define join conditions, filter the data, and eliminate duplicates; overriding the default source qualifier query with these options is known as a SQL Override.
11. How does the Informatica server increase session performance through partitioning the source?
Partitioning the session improves performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
12. What are the settings that you use to configure the Joiner transformation?
1) Master source 2) Detail source 3) Type of join 4) Join condition.
Take the table with fewer rows as the master and the table with more rows as the detail: the Joiner puts all rows from the master table into the cache and checks the join condition against the detail table rows.
13. What are the rank caches?
When the server runs a session with a Rank transformation, the Informatica server stores group information in an index cache and row data in a data cache. During the session, it compares each input row with the rows in the data cache; if the input row out-ranks a stored row, the Informatica server replaces the stored row with the input row.
32. Under what circumstances can a target definition be edited from the Mapping Designer, within the mapping where that target definition is being used?
We can't edit the target definition in the Mapping Designer; we can edit the target only in the Warehouse Designer. In our projects, we haven't edited any of the targets: if any change is required to the target definition, we inform the DBA to make the change and then import it again. We don't have permission to edit the source and target tables.
33. Can a Source Qualifier be used to perform an outer join when joining two databases?
No, we can't join two different databases in a SQL Override.
34. If your source is a flat file with a delimiter and you later want to change that delimiter, where do you make the change?
In the session properties, go to Mappings, click on the flat file instance, click Set File Properties, and change the delimiter option.
35. If the index cache file capacity is 2 MB and the data cache is 1 MB, and you enter a capacity of 3 MB for the index and 2 MB for the data, what will happen?
Nothing will happen; based on the buffer size that exists on the server, we can change the cache sizes. The maximum cache size is 2 GB.
36. What is the difference between the NEXTVAL and CURRVAL ports in the Sequence Generator? Assume they are both connected to the input of another transformation.
NEXTVAL generates the next sequence value for each row; CURRVAL returns NEXTVAL plus the Increment By value.
37. How does the dynamic cache handle duplicate rows?
The dynamic cache flags each row through the NewLookupRow port: 0 if the row made no change to the cache, 1 if the row was inserted into the cache, and 2 if the row in the cache was updated. Duplicate rows therefore arrive flagged as no-change or update rather than as inserts.
38. How will you find whether your mapping is correct or not without connecting a session?
Through the debugging option (the Debugger).
39. If you are using an Aggregator transformation in your mapping, should your source be a dimension or a fact?
We can use the Aggregator transformation according to the requirements; there is no limitation for the Aggregator. The source can be either a dimension or a fact.
40. My input is Oracle and my target is a flat file. Can I load it? How?
Yes. Create a flat file target in the Warehouse Designer with a structure matching the Oracle table, develop the mapping according to the requirement, and map it to that flat file target. The target file is created in the TgtFiles directory on the server system.
41. For one session, can I use 3 mappings?
No, one session can have only one mapping. We have to create a separate session for each mapping.
42. What types of loading procedures are there?
At the Informatica level there are two load types: 1) normal load and 2) bulk load. At the project level, load procedures depend on the project requirement, for example daily loads or weekly loads.
43. Are you involved in high-level or low-level design? What is meant by high-level design and low-level design?
Low-level design: the requirements, in spreadsheet format, describing the field-to-field validations and business logic that need to be present. Mostly the onsite team does the low-level design.
High-level design: describes the Informatica flow from source qualifier to target; simply put, the flow chart of the Informatica mapping. The developer produces this design document.
44. What are the dimension load methods?
Daily or weekly loads, based on the project requirement.
45. Where do we use the Lookup transformation: source to stage, or stage to target?
It depends on the requirement; there is no rule that it must be used in one particular stage.
46. How will you do SQL tuning?
We can do SQL tuning using the Oracle optimizer and tools such as TOAD.
47. Did you use any tools for scheduling other than the Workflow Manager or pmcmd?
Yes, third-party tools such as Control-M.
48. What is SQL mass updating?
Updating many rows in one statement by updating an inline join view, for example (Oracle syntax; this requires the join to be key-preserved, i.e. hs2.sno must be unique):
UPDATE (SELECT hs1.col1 AS hs1_col1,
               hs1.col2 AS hs1_col2,
               hs1.col3 AS hs1_col3,
               hs2.col1 AS hs2_col1,
               hs2.col2 AS hs2_col2,
               hs2.col3 AS hs2_col3
          FROM hs1, hs2
         WHERE hs1.sno = hs2.sno)
   SET hs1_col1 = hs2_col1,
       hs1_col2 = hs2_col2,
       hs1_col3 = hs2_col3;
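The same mass-update effect can be sketched portably with a correlated subquery, shown here with Python and sqlite3 (table and column names mirror the example above but the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hs1 (sno INTEGER PRIMARY KEY, col1 TEXT)")
cur.execute("CREATE TABLE hs2 (sno INTEGER PRIMARY KEY, col1 TEXT)")
cur.executemany("INSERT INTO hs1 VALUES (?, ?)", [(1, "old"), (2, "old")])
cur.executemany("INSERT INTO hs2 VALUES (?, ?)", [(1, "new")])

# Mass update: copy hs2.col1 into hs1.col1 for every matching sno in one statement.
cur.execute("""
    UPDATE hs1
       SET col1 = (SELECT hs2.col1 FROM hs2 WHERE hs2.sno = hs1.sno)
     WHERE EXISTS (SELECT 1 FROM hs2 WHERE hs2.sno = hs1.sno)
""")
conn.commit()
print(cur.execute("SELECT sno, col1 FROM hs1 ORDER BY sno").fetchall())
# [(1, 'new'), (2, 'old')]
```

The WHERE EXISTS clause keeps non-matching rows untouched; without it, every unmatched row's col1 would be set to NULL.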
49. What is the unbound exception in the Source Qualifier?
"TE_7020 Unbound field in Source Qualifier" when running a session.
Problem description: when running a session, the session fails with the error TE_7020 Unbound field in Source Qualifier.
Solution: this error occurs when there is an inconsistency between the Source Qualifier and the source table. Either there is a field in the Source Qualifier that is not in the physical table, or there is a column of the source object that has no link to the corresponding port in the Source Qualifier. To resolve this, re-import the source definition into the Source Analyzer in Designer, bring the new source definition into the mapping (this also re-creates the Source Qualifier), and connect the new Source Qualifier to the rest of the mapping as before.
50. Using an unconnected lookup, how would you remove nulls and duplicates?
We can't handle nulls and duplicates in an unconnected lookup; we can handle them in a dynamic connected lookup.
51. I have 20 lookups, 10 joiners, and 1 normalizer. How will you improve the session performance?
We have to calculate and tune the lookup and joiner cache sizes.
52. What is version controlling?
It is the method of differentiating the old build from the new build after changes are made to the existing code: the old code is v001, and the next time you increase the version number to v002, and so on. In my last company we didn't use any version control inside Informatica; we just deleted the old build and replaced it with the new code. We maintained the code in VSS (Visual SourceSafe), software that maintains code with versioning. Whenever the client raised a change request after production started, we created another build.
53. How is the Sequence Generator transformation different from other transformations?
The Sequence Generator is unique among all transformations because we cannot add, edit, or delete its default ports (NEXTVAL and CURRVAL). Unlike other transformations, we cannot override the Sequence Generator transformation properties at the session level. This protects the integrity of the sequence values generated.
54. What are the advantages of the Sequence Generator? Is it necessary, and if so, why?
We can make a Sequence Generator reusable and use it in multiple mappings. We might reuse a Sequence Generator when we perform multiple loads to a single target. For example, if we have a large input file that we separate into three sessions running in parallel, we can use a Sequence Generator to generate primary key values. If we used different Sequence Generators, the Informatica Server might accidentally generate duplicate key values; instead, we can use the same reusable Sequence Generator for all three sessions to provide a unique value for each target row.
55. What are the uses of a Sequence Generator transformation?
We can perform the following tasks with a Sequence Generator transformation:
1. Create keys.
2. Replace missing values.
3. Cycle through a sequential range of numbers.
56. What is the difference between a connected lookup and an unconnected lookup?
Connected Lookup: receives input values directly from the pipeline; can use a dynamic or static cache; supports user-defined default values.
Unconnected Lookup: receives input values from the result of a :LKP expression in another transformation; can use only a static cache; does not support user-defined default values.
57. What is a Look up transformation and what are its uses?
We use a Look up transformation in our mapping to look up data in a relational t
able, view or synonym. We can use the Lookup transformation for the following pu
rposes:
1. Get a related value. For example, if our source table includes employee ID, b
ut we want to include the employee name in our target table to make our summary
data easier to read.
2. Perform a calculation. Many normalized tables include values used in a calcul
ation, such as gross sales per invoice or sales tax, but not the calculated valu
e (such as net sales).
3. Update slowly changing dimension tables. We can use a Lookup transformation t
o determine whether records already exist in the target.
58.What is a lookup table?
The lookup table can be a single table, or we can join multiple tables in the sa
me database using a lookup query override. The Informatica Server queries the lo
okup table or an in-memory cache of the table for all incoming rows into the Loo
kup transformation.
If your mapping includes heterogeneous joins, we can use any of the mapping sour
ces or mapping targets as the lookup table.
59. Where do you define update strategy?
We can set the Update strategy at two different levels:
1. Within a session: When you configure a session, you can instruct the Informat
ica Server to either treat all records in the same way (for example, treat all r
ecords as inserts), or use instructions coded into the session mapping to flag r
ecords for different database operations.
2. Within a mapping: Within a mapping, you use the Update Strategy transformatio
n to flag records for insert, delete, update, or reject.
60. What is Update Strategy?
When we design our data warehouse, we need to decide what type of information to
store in targets. As part of our target table design, we need to determine whet
her to maintain all the historic data or just the most recent changes.
The model we choose constitutes our update strategy, how to handle changes to ex
isting records.
Update strategy flags a record for update, insert, delete, or reject. We use thi
s transformation when we want to exert fine control over updates to a target, ba
sed on some condition we apply. For example, we might use the Update Strategy tr
ansformation to flag all customer records for update when the mailing address ha
s changed, or flag all employee records for reject for people no longer working
for the company
61. What are the tools provided by the Designer?
The Designer provides the following tools:
1. Source Analyzer. Use to import or create source definitions for flat file, XML, COBOL, ERP, and relational sources.
2. Warehouse Designer. Use to import or create target definitions.
3. Transformation Developer. Use to create reusable transformations.
4. Mapplet Designer. Use to create mapplets.
5. Mapping Designer. Use to create mappings.
62. What are the different types of Commit intervals?
The different commit intervals are:
1)Target-based commit. The Informatica Server commits data based on the number o
f target rows and the key constraints on the target table. The commit point also
depends on the buffer block size and the commit interval.
2) Source-based commit. The Informatica Server commits data based on the number
of source rows. The commit point is the commit interval you configure in the ses
sion properties.
63. What is Event-Based Scheduling?
When you use event-based scheduling, the Informatica Server starts a session whe
n it locates the specified indicator file. To use event-based scheduling, you ne
ed a shell command, script, or batch file to create an indicator file when all s
ources are available. The file must be created or sent to a directory local to t
he Informatica Server. The file can be of any format recognized by the Informati
ca Server operating system. The Informatica Server deletes the indicator file on
ce the session starts.
Use the following syntax to ping the Informatica Server on a UNIX system:
pmcmd ping [{user_name | %user_env_var} {password | %password_env_var}] [hostname:]portno
Use the following syntax to start a session or batch on a UNIX system:
pmcmd start {user_name | %user_env_var} {password | %password_env_var} [hostname:]portno [folder_name:]{session_name | batch_name} [:pf=param_file] session_flag wait_flag
Use the following syntax to stop a session or batch on a UNIX system:
pmcmd stop {user_name | %user_env_var} {password | %password_env_var} [hostname:]portno [folder_name:]{session_name | batch_name} session_flag
Use the following syntax to stop the Informatica Server on a UNIX system:
pmcmd stopserver {user_name | %user_env_var} {password | %password_env_var} [hostname:]portno
64.I have the Administer Repository Privilege, but I cannot access a repository
using the Repository Manager?
To perform administration tasks in the Repository Manager with the Administer Re
pository privilege, you must also have the default privilege Browse Repository.
You can assign Browse Repository directly to a user login, or you can inherit Br
owse Repository from a group.
65. My privileges indicate I should be able to edit objects in the repository, b
ut I cannot edit any metadata?
You may be working in a folder with restrictive permissions. Check the folder pe
rmissions to see if you belong to a group whose privileges are restricted by the
folder owner.
66. How does read permission affect the use of the command line program, pmcmd?
To use pmcmd, you do not need to view a folder before starting a session or batc
h within the folder. Therefore, you do not need read permission to start session
s or batches with pmcmd. You must, however, know the exact name of the session o
r batch and the folder in which it exists.
With pmcmd, you can start any session or batch in the repository if you have the
Session Operator privilege or execute permission on the folder.
67. I do not want a user group to create or edit sessions and batches, but I nee
d them to access the Server Manager to stop the Informatica Server?
To permit a user to access the Server Manager to stop the Informatica Server, yo
u must grant them both the Create Sessions and Batches, and Administer Server pr
ivileges. To restrict the user from creating or editing sessions and batches, yo
u must restrict the user's write permissions on a folder level.
Alternatively, the user can use pmcmd to stop the Informatica Server with the Ad
minister Server privilege alone.
68. I created a new group and removed the Browse Repository privilege from the g
roup. Why does every user in the group still have that privilege?
Privileges granted to individual users take precedence over any group restrictio
ns. Browse Repository is a default privilege granted to all new users and groups
. Therefore, to remove the privilege from users in a group, you must remove the
privilege from the group, and every user in the group.
69. After creating users and user groups, and granting different sets of privile
ges, I find that none of the repository users can perform certain tasks, even th
e Administrator?
Repository privileges are limited by the database privileges granted to the data
base user who created the repository. If the database user (one of the default u
sers created in the Administrators group) does not have full database privileges
in the repository database, you need to edit the database user to allow all pri
vileges in the database.
70. What are the different types of locks?
There are five kinds of locks on repository objects:
1. Read lock. Created when you open a repository object in a folder for which yo
u do not have write permission. Also created when you open an object with an exi
sting write lock.
2. Write lock. Created when you create or edit a repository object in a folder for which you have write permission.
3. Execute lock. Created when you start a session or batch.
4. Fetch lock. Created when the repository reads information about repository objects from the database.
5. Save lock. Created while you save information to the repository.
74. What are Shortcuts?
We can create shortcuts to objects in shared folders. Shortcuts provide the easi
est way to reuse objects. We use a shortcut as if it were the actual object, and
when we make a change to the original object, all shortcuts inherit the change.
Shortcuts to folders in the same repository are known as local shortcuts.
Shortcuts to the global repository are called global shortcuts.
We use the Designer to create shortcuts.
75. What are Sessions and Batches?
Sessions and batches store information about how and when the Informatica Server
moves data through mappings. You create a session for each mapping you want to
run. You can group several sessions together in a batch. Use the Server Manager
to create sessions and batches.
76. Why do we need SQL overrides in Lookup transformations?
To modify the default lookup query, for example to join more than one table, filter the rows that are cached, or look up more than one value from one table.
85. What are the different types of data modeling?
The different types of data modeling are: 1. Dimensional modeling 2. ER modeling.
86. What is the need of building a data warehouse?
A data warehouse acts as a storage facility for a large amount of data. It also provides end users access to a wide variety of data, helps in analyzing data more effectively, and helps in generating reports. It acts as a huge repository of integrated information.
87. What is drill-down and drill-up?
Both drill-down and drill-up are used to explore different levels of dimensionally modeled data. Drill-down allows users to view a lower (more detailed) level of data, and drill-up allows users to view a higher (more summarized) level of data.
88. What is a cube?
A cube is a multidimensional representation of data, used for analysis purposes. A cube gives multiple views of the data.
89. Where did you use Unix shell scripts in Informatica projects?
To concatenate two or more flat files, for workflow scheduling, and for file-watcher scripts.
90. How do you generate surrogate keys for tables with more than 2 billion records (where the surrogate key is a primary key field)?
Do not use the Sequence Generator (its maximum value is about 2.1 billion); instead, use an Expression variable increment together with a Lookup transformation to get the last value used from the target.
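The lookup-plus-expression-variable approach can be sketched in Python (a toy model; the seed value would come from looking up MAX(key) on the target, and the row values are illustrative):

```python
# Seed the counter from the target's current max key, then increment per row,
# mimicking an expression-variable increment in a mapping.
def assign_surrogate_keys(rows, last_key_in_target):
    keyed = []
    next_key = last_key_in_target
    for row in rows:
        next_key += 1                  # expression-variable style increment
        keyed.append((next_key, row))  # attach the new surrogate key
    return keyed

print(assign_surrogate_keys(["r1", "r2"], last_key_in_target=5000))
# [(5001, 'r1'), (5002, 'r2')]
```

Because the counter is just an integer variable, it is not bound by a Sequence Generator's maximum value.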
91. How do you propagate a date column to a flat file if you need the format to be DD-MON-YYYY?
Use the TO_CHAR function with the 'DD-MON-YYYY' format mask to convert the date to a string before loading it to the flat file.
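A quick Python sketch of producing the DD-MON-YYYY format (strftime plays the role of the date-to-string conversion; the date value is illustrative):

```python
from datetime import date

d = date(2024, 3, 7)
# %d -> day, %b -> abbreviated month ('Mar'), %Y -> 4-digit year;
# upper() turns 'Mar' into 'MAR' to match DD-MON-YYYY.
formatted = d.strftime("%d-%b-%Y").upper()
print(formatted)  # 07-MAR-2024
```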
92. If you use the Sorted Input option in the Aggregator but give it unsorted input, what will happen?
The session fails with the error message 'Expecting keys to be ascending'.
93. If I have 100 rows given as input to the Aggregator and want 100 rows as output, how can I achieve that (none of the columns is a primary key)?
The Aggregator is an active transformation, so you can't expect it to return exactly one output row for every input row you give it.
94. If I have 100 rows given as input to the Aggregator and want just the 100th row as output, how can I achieve that?
If you don't select any group-by port in the Ports tab of the Aggregator transformation, Informatica returns only the last row as output for all the records given to it as input.
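The "last row wins" behavior with no group-by port can be modeled in Python (an illustrative sketch, not Informatica internals):

```python
# No group-by port: every row falls into one implicit group, and the
# non-aggregated port values keep whatever the last row carried.
def aggregate_no_groupby(rows):
    last = None
    for row in rows:
        last = row  # each row overwrites the previous one
    return [last] if last is not None else []

print(aggregate_no_groupby([(1, "a"), (2, "b"), (3, "c")]))  # [(3, 'c')]
```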
95. What conditions (=, not between, etc.) do you have in the Lookup and in the Joiner?
Lookup: =, !=, >=, <=, >, <. Joiner: = only.
96. If I have a flat file, can I override SQL in the Source Qualifier or Lookup?
No. You can never override the SQL query when you are working with flat files.
97. If I have a flat file target, what properties do I get when I click on it in the Workflow Manager?
File writer, merge partitioned files, merge file name, merge file directory, output file name, output file directory, reject file name, and reject file directory.
98. What is the use of the return port and output port in the Lookup transformation?
By default, all ports in a Lookup transformation are lookup and output ports. A return port is used only in an unconnected lookup, where it designates the value returned by the :LKP expression; a connected lookup needs at least one output port.
99. If I have used two Update Strategies in my mapping, one for insert and the other for delete, and I then change the target option in the session properties from Data Driven to Delete, how will my mapping perform (all deletes, or insert and delete)?
The workflow succeeds, but you get an error message in the logs saying the target did not permit inserts, and all the records marked for insert are loaded into the bad file.
100. What is a scenario for using a dynamic lookup?
Loading data from a flat file to a table where the data in the file violates the primary key constraint of the table due to duplicates.
101. Why do you need a surrogate key instead of the OLTP primary key? Give a scenario where it is mandatory to use a surrogate key.
If the production key was numeric and the OLTP people decided to go alphanumeric, we would have to change all the data types in the warehouse wherever that key is involved. So it is best to keep the warehouse key away from business-defined values. Surrogate keys are also needed to track slowly changing dimensions (SCDs).
102. What are the default and maximum sizes of the data cache?
Default 20 MB, maximum 2 GB.
103. What is a materialized view, and how does it improve performance?
Unlike a normal view, it creates a physical table that stores the query results, so queries can read the precomputed data instead of re-executing the underlying joins and aggregations.
104. When do you use bitmap indexes?
When cardinality is low, i.e. the number of distinct values in the column is small relative to the number of rows (for example, gender or status columns).
105. What are implicit cursors and explicit cursors in Oracle?
An implicit cursor is defined automatically for every query executed; its attributes can be retrieved after executing the query using the SQL prefix, e.g. SQL%ROWCOUNT. Explicit cursors are the cursors we define manually.
106. What are the cursor attributes?
%FOUND, %NOTFOUND, %ROWCOUNT, and %ISOPEN.
107. How would you stop a session row if the value of a particular column matches a given value?
Use the ABORT function of Informatica in an Expression transformation.
108. What about the ERROR function in Informatica?
It logs the error message you define in the session log if a particular condition is satisfied.
109. What levels of logs can you maintain in Informatica sessions?
The tracing levels are Terse, Normal, Verbose Initialization, and Verbose Data.
110. How do you run sequential sessions only if the previous session loads at least one row into the target?
Define a link condition between the sessions with TgtSuccessRows > 0.
111. What is the difference between the MAX and GREATEST functions in Oracle?
GREATEST returns the greatest of the values listed within a single row; MAX is an aggregate function that returns the maximum value across all rows.
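The distinction can be demonstrated with Python and sqlite3, where the scalar max(a, b) plays the role of Oracle's GREATEST and the aggregate max(col) is the usual MAX (a sketch with illustrative data; Oracle syntax differs slightly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, 9), (4, 2), (7, 3)])

# Per-row "greatest of the listed values" (GREATEST in Oracle): one result per row.
print(cur.execute("SELECT max(a, b) FROM t").fetchall())  # [(9,), (4,), (7,)]
# Aggregate maximum over all rows of one column (MAX in Oracle): one result total.
print(cur.execute("SELECT max(a) FROM t").fetchone())     # (7,)
```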
112. How do you get the number of rows returned by a lookup when a condition matches?
Define a new column and use a SQL override with COUNT(*).
113. How can you configure an Informatica workflow to run only on the first working day of the month?
Define a calendar and trigger the workflow with a session or script.
114. When do you go for SQL overrides?
When you want to share some processing load with the database, and when it is more efficient to do so.
115. What are the types of triggers, and what is the difference between a row-level and a statement-level trigger?
Triggers can be AFTER or BEFORE, statement-level or row-level, on INSERT, DELETE, or UPDATE. A row-level trigger fires once for each row affected by the statement; a statement-level trigger fires once per statement, regardless of how many rows are affected.
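A row-level AFTER INSERT trigger can be sketched with Python and sqlite3 (SQLite supports row-level triggers only; statement-level triggers are a feature of databases such as Oracle, and the table names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE audit (msg TEXT)")

# FOR EACH ROW: fires once per inserted row, after the insert happens.
cur.execute("""
    CREATE TRIGGER emp_audit AFTER INSERT ON emp
    FOR EACH ROW
    BEGIN
        INSERT INTO audit VALUES ('inserted id ' || NEW.id);
    END
""")

cur.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "a"), (2, "b")])
print(cur.execute("SELECT msg FROM audit").fetchall())
# [('inserted id 1',), ('inserted id 2',)] -- one audit row per inserted row
```

A statement-level trigger over the same two-row insert would have fired only once.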
116. Can you join 2 tables using SQL override in Lookup transformation
Ans: Yes
117. If a session fails, what steps do you follow to resolve it?
Check the session logs with various tracing levels.
118. How do you filter records in TOAD?
Use the filter icon and add a rule on columns.
119. What is a persistent cache in the Lookup transformation?
It remains on disk even after the session runs, so subsequent sessions can reuse it instead of rebuilding the cache; this is useful for incremental loads.
120. How did you implement incremental extraction?
Mostly using SETVARIABLE, by truncating stage tables, or with parameter files.
121. Tell me about the SETVARIABLE function in Informatica.
It sets a value to a mapping variable, depending on the last row processed.
122. If you change properties in the session, which takes precedence: the mapping level or the session level?
The session level.
123. What is the syntax for defining a parameter in a parameter file?
A heading identifies the scope (folder, workflow, session, or mapplet), followed by name=value pairs.
124. How would you identify why a record is rejected?
By the row indicator in the reject (bad) file: D (valid data), O (overflow), N (null), or T (truncated).