
What is the relation between EME, GDE and Co-operating system?

ans. EME stands for Enterprise Metadata Environment, GDE for Graphical Development Environment, and the Co-operating system can be thought of as the Ab Initio server.
The relation between the Co-operating system, EME and GDE is as follows:
The Co-operating system is the Ab Initio server. It is installed on a particular OS platform, which is called the native OS. Coming to the EME, it is just like the repository in Informatica; it holds the metadata, transformations, DB config files, and source and target information. Coming to the GDE, it is the end-user environment where we develop the graphs (a graph is just like a mapping in Informatica).
The designer uses the GDE to design graphs and saves them to the EME or to a sandbox on the user side, whereas the EME is at the server side.

What is the use of Aggregate when we have Rollup?
As we know, the Rollup component in Ab Initio is used to summarize groups of data records. Then where will we use Aggregate?
ans: Aggregate and Rollup can both summarize data, but Rollup is much more convenient to use. Rollup is much more explanatory about how a particular summarization is performed than Aggregate, and it can do some other things as well, such as input and output filtering of records.
Aggregate and Rollup perform the same action; Rollup exposes the intermediate result in main memory, whereas Aggregate does not support intermediate results.
What kinds of layouts does Ab Initio support?

Basically there are serial and parallel layouts supported by Ab Initio. A graph can have both at the same time. The parallel one depends on the degree of data parallelism. If the multifile system is 4-way parallel, then a component in a graph can run 4-way parallel if its layout is defined to match the degree of parallelism.

How can you run a graph infinitely?
To run a graph infinitely, the end script of the graph should call the .ksh file of the graph. Thus, if the name of the graph is abc.mp, then in the end script of the graph there should be a call to abc.ksh. In this way the graph will run indefinitely.
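A minimal sketch of such an end script (abc.ksh comes from the graph name above; using $AI_RUN as the sandbox run directory is an assumption):

# End Script of graph abc.mp -- re-submit the graph's own deployed script
# so that a new run starts as soon as the current one finishes
nohup $AI_RUN/abc.ksh &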

How do you add default rules in transformer?
Double-click the transform parameter on the Parameters tab of the component properties; this opens the Transform Editor. In the Transform Editor, click the Edit menu and then select Add Default Rules from the dropdown. It will show two options - 1) Match Names 2) Wildcard.

Do you know what a local lookup is?
If your lookup file is a multifile and partitioned/sorted on a particular key, then the lookup_local function can be used in place of the plain lookup function call. The lookup is then local to a particular partition, depending on the key.
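A minimal sketch of such a call inside a transform (the file label, key and field names are hypothetical):

out.cust_name :: lookup_local("CustomerLookup", in.cust_id).cust_name;

Because only the current partition of the lookup multifile is searched, the data flow and the lookup file must be partitioned on the same key.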

A lookup file consists of data records that can be held in main memory. This lets the transform function retrieve the records much faster than retrieving them from disk, and allows the transform component to process the data records of multiple files quickly.

What is the difference between look-up file and look-up, with a relevant example?
Generally a lookup file represents one or more serial files (flat files). The amount of data is small enough to be held in memory. This allows transform functions to retrieve records much more quickly than they could be retrieved from disk.
A lookup is a component of an Ab Initio graph where we can store data and retrieve it by using a key parameter.
A lookup file is the physical file where the data for the lookup is stored.
How many components are in your most complicated graph?

It depends on the types of components you use; usually, avoid using too many complicated transform functions in a graph.

Explain what is lookup?
Lookup is basically a specific dataset which is keyed. It can be used for mapping values as per the data present in a particular file (serial or multifile). The dataset can be static as well as dynamic (for example, when the lookup file is generated in a previous phase and used as a lookup file in the current phase). Sometimes hash joins can be replaced by a Reformat using a lookup, if one of the inputs to the join contains a small number of records with a slim record length.
Ab Initio has built-in functions to retrieve values using the key for the lookup.
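For example, a Reformat transform might fetch a field from a keyed lookup file like this (the file label, key and field names are hypothetical):

out.cust_name :: lookup("CustomerLookup", in.cust_id).cust_name;

A related built-in, lookup_count, returns the number of records in the lookup file that match the key.
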
What is a ramp limit?
The limit parameter contains an integer that represents a number of reject events.

The ramp parameter contains a real number that represents a rate of reject events in the number of records processed.

number of bad records allowed = limit + number of records * ramp

ramp is basically a fractional value (from 0 to 1). These two together provide the threshold value of bad records.
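For example, with limit = 10 and ramp = 0.005, after 10,000 records the component tolerates up to 10 + 10,000 * 0.005 = 60 bad records before the graph fails.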

Have you worked with packages?
Multistage transform components use packages by default. However, a user can create his own set of functions as a package and include it in other transform functions.

Have you used rollup component? Describe how.
If the user wants to group records on particular field values, then Rollup is the best way to do that. Rollup is a multistage transform function and it contains the following mandatory functions:
1. initialize
2. rollup
3. finalize
You also need to declare a temporary variable if you want to get counts for a particular group.

For each group, Rollup first calls the initialize function once, followed by rollup function calls for each of the records in the group, and finally calls the finalize function once after the last rollup call.
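A minimal sketch of such an expanded rollup transform, counting the records in each group (the key and field names are illustrative):

/* temporary variable that accumulates the count for the current group */
type temporary_type =
record
  decimal(10) count;
end;

/* called once at the start of each group */
temp :: initialize(in) =
begin
  temp.count :: 0;
end;

/* called once per record in the group */
temp :: rollup(temp, in) =
begin
  temp.count :: temp.count + 1;
end;

/* called once after the last rollup call, producing the output record */
out :: finalize(temp, in) =
begin
  out.key :: in.key;
  out.count :: temp.count;
end;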

How do you add default rules in transformer?
Add Default Rules opens the Add Default Rules dialog. Select one of the following:
Match Names: generates a set of rules that copies input fields to output fields with the same name.
Use Wildcard (.*) Rule: generates one rule that copies input fields to output fields with the same name.

1) If it is not already displayed, display the Transform Editor grid.
2) Click the Business Rules tab if it is not already displayed.
3) Select Edit > Add Default Rules.

In the case of Reformat, if the destination field names are the same as, or a subset of, the source fields, then there is no need to write anything in the reformat xfr, as long as you do not want any real transform other than reducing the set of fields or splitting the flow into a number of flows to achieve the functionality.

What is the difference between partitioning with key and round robin?
Partition by Key (hash partition) -> This is a partitioning technique which is used to partition data when the key values are diverse. If one key value is present in large volume then there can be a large data skew. Still, this method is used most often for parallel data processing.

Round-robin partition is another partitioning technique, used to distribute the data uniformly across the destination data partitions. The skew is zero in this case when the number of records is divisible by the number of partitions. A real-life example is how a pack of 52 cards is distributed among 4 players in a round-robin manner.

How do you improve the performance of a graph?
There are many ways the performance of the graph can be improved.
1) Use a limited number of components in a particular phase
2) Use optimum value of max core values for sort and join components
3) Minimise the number of sort components
4) Minimise sorted join component and if possible replace them by in-memory join/hash join
5) Use only required fields in the sort, reformat, join components
6) Use phasing/flow buffers in case of merge, sorted joins
7) If the two inputs are huge then use sorted join, otherwise use hash join with proper driving port
8) For large dataset don't use broadcast as partitioner
9) Minimise the use of regular expression functions like re_index in the transform functions
10) Avoid repartitioning of data unnecessarily

Try to run the graph as long as possible in MFS. For this, the input files should be partitioned and, if possible, the output file should also be partitioned.
How do you truncate a table?

1) From Ab Initio, run the Run SQL component using the DDL "truncate table <table_name>".
2) Use the Truncate Table component in Ab Initio.

Have you ever encountered an error called "depth not equal"?
When two components are linked together, if their layouts do not match then this problem can occur during the compilation of the graph. A solution to this problem is to use a partitioning component in between wherever the layout changes.

What is the function you would use to convert a string into a decimal?
No specific function is required if the size of the string and the decimal are the same; a decimal cast with the size in the transform function will suffice. For example, if the source field is defined as string(8) and the destination as decimal(8) (say the field name is field):

out.field :: (decimal(8)) in.field

If the destination field size is smaller than the input, then the string_substring function can be used, like the following. Say the destination field is decimal(5):

out.field :: (decimal(5)) string_lrtrim(string_substring(in.field, 1, 5)) /* string_lrtrim trims leading and trailing spaces */
What are primary keys and foreign keys?

In an RDBMS, the relationship between two tables is represented as a primary key and foreign key relationship. The primary key table is the parent table and the foreign key table is the child table. The criterion for both tables is that there should be a matching column.

What is the difference between clustered and non-clustered indices? ...and why do you use a
clustered index?

What is an outer join?

An outer join is used when one wants to select all the records from a port, whether or not they satisfy the join criteria.
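For example, in SQL a left outer join keeps every customer even when no order matches (the tables are hypothetical):

select c.cust_id, o.order_id
from customer c
left outer join orders o on o.cust_id = c.cust_id;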

What are Cartesian joins?

A Cartesian join joins two tables without a join key; in an Ab Initio Join component, the key should be {}.
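In SQL the same effect is a cross join, pairing every row of one table with every row of the other (hypothetical tables):

select a.*, b.*
from table_a a
cross join table_b b;
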
What is the purpose of having stored procedures in a database?
The main purpose of stored procedures is to reduce network traffic: all the SQL statements execute inside the database server, so the speed is much higher.
Why might you create a stored procedure with the 'with recompile' option?

Recompile is useful when the tables referenced by the stored procedure undergo a lot of modification/deletion/addition of data. Due to the heavy modification activity, the execution plan becomes outdated and hence the stored procedure's performance goes down. If we create the stored procedure with the recompile option, SQL Server won't cache a plan for this stored procedure and it will be recompiled every time it is run.
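A minimal SQL Server sketch (the procedure, table and column names are hypothetical):

create procedure usp_daily_report @run_date date
with recompile
as
  select status, count(*) as cnt
  from orders
  where order_date = @run_date
  group by status;

Because of WITH RECOMPILE, a fresh execution plan is built on every call instead of being cached.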

What is a cursor? Within a cursor, how would you update fields on the row just fetched
The work areas the Oracle engine uses for the internal processing of SQL statements are called cursors. There are two types of cursors: implicit and explicit. Implicit cursors are used for internal processing, while explicit cursors are opened by the user when the processing requires it. To update the fields on the row just fetched, declare the cursor FOR UPDATE and use WHERE CURRENT OF, as in the sketch below.
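A minimal PL/SQL sketch (the emp table and the 10% raise are illustrative):

declare
  cursor c_emp is
    select empno, sal from emp for update of sal;
begin
  for r in c_emp loop
    -- update the row just fetched by the cursor
    update emp set sal = r.sal * 1.10 where current of c_emp;
  end loop;
  commit;
end;
/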

How would you find out whether a SQL query is using the indices you expect?
The EXPLAIN PLAN output can be reviewed to check the execution plan of the query. This shows whether the expected indexes are used or not.
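For example, in Oracle (the orders table and predicate are hypothetical):

explain plan for
select * from orders where cust_id = 100;

select * from table(dbms_xplan.display);

The displayed plan shows whether an index scan or a full table scan was chosen.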

How can you force the optimizer to use a particular index?
Use hints (/*+ <hint> */); these act as directives to the optimizer:

select /*+ index(a index_name) full(b) */ *
from table1 a, table2 b
where b.col1 = a.col1
  and b.col2 = 'sid'
  and b.col3 = 1;

When using multiple DML statements to perform a single unit of work, is it preferable to use implicit or explicit transactions, and why?

Explicit transactions are preferable, because implicit transactions are used for internal processing, whereas an explicit transaction is opened by the user, which gives control over when the unit of work is committed.
Describe the elements you would review to ensure multiple scheduled "batch" jobs do not "collide" with
each other.

Review the dependencies between the jobs: every job may depend upon another job. For example, the second job should execute only if the first job's result is successful; otherwise it should not run.

Describe the process steps you would perform when defragmenting a data table.
This table contains mission critical data.

There are several ways to do this:
1) Move the table within the same tablespace or to another tablespace and rebuild all the indexes on the table:
alter table <table_name> move <tablespace_name>; -- reclaims the fragmented space in the table
analyze table <table_name> compute statistics; -- captures the updated statistics
2) A reorg can be done by taking a dump (export) of the table, truncating the table, and importing the dump back into the table.

Explain the difference between the truncate and "delete" commands.
The difference between the TRUNCATE and DELETE statements is that TRUNCATE is a DDL command whereas DELETE is a DML command. A rollback cannot be performed for a TRUNCATE statement, whereas a rollback can be performed for a DELETE statement. A WHERE clause cannot be used with TRUNCATE, whereas a WHERE clause can be used with a DELETE statement.

What is the difference between a DB config and a CFG file?
A .dbc file has the information required for Ab Initio to connect to the database to extract or load tables or views, while a .cfg file is the table configuration file created by db_config when using components like Load DB Table.

Describe the Grant/Revoke DDL facility and how it is implemented.
Basically, this is part of the DBA's responsibilities. GRANT gives permissions, for example GRANT CREATE TABLE, CREATE VIEW, and many more. REVOKE cancels a granted permission. Both the GRANT and REVOKE commands are administered by the DBA.
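For example (the user and table names are hypothetical):

grant select, insert on orders to report_user;
revoke insert on orders from report_user;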
