
1. Explain what is SQL override for a source table in a mapping.
Ans.: The Source Qualifier transformation provides the SQL Query option to override the default query. You can enter any SQL statement supported by your source database: your own SELECT statement, a query that has the database perform aggregate calculations, or a call to a stored procedure or stored function that reads the data and performs additional tasks. (A minimal sketch of an override query appears after question 4.)

2. What is lookup override?
Ans.: This feature is similar to entering a custom query in a Source Qualifier transformation. When entering a Lookup SQL Override, you can either write the entire override yourself or generate and edit the default SQL statement. The lookup query override can include a WHERE clause.

3. Informatica settings are available in which file?
Ans.: Informatica settings are available in the file pmdesign.ini in the Windows folder.

4. What is the difference between OLTP & OLAP?
Ans.: OLTP stands for Online Transaction Processing. This is the standard, normalized database structure. OLTP is designed for transactions, which means that inserts, updates, and deletes must be fast. Imagine a call center that takes orders: call takers are continually taking calls and entering orders that may contain numerous items, and each order and each item must be inserted into a database. Since database performance is critical, we want to maximize the speed of inserts (and updates and deletes). To maximize performance, we typically try to hold as few records in the database as possible.

OLAP stands for Online Analytical Processing. OLAP is a term that means many things to many people. Here, we will use the terms OLAP and star schema pretty much interchangeably and assume that a star schema database is an OLAP system. (This is not the same thing that Microsoft calls OLAP; they extend OLAP to mean the cube structures built using their product, OLAP Services.) Here, we will assume that any system of read-only, historical, aggregated data is an OLAP system.

A data warehouse (or mart) is a way of storing data for later retrieval. This retrieval is almost always used to support decision-making in the organization, which is why many data warehouses are considered to be DSS (Decision Support Systems). Both a data warehouse and a data mart are storage mechanisms for read-only, historical, aggregated data. By read-only, we mean that the person looking at the data won't be changing it: if a user looks at yesterday's sales for a certain product, they should not have the ability to change that number. The historical part may be only a few minutes old, but usually it is at least a day old. A data warehouse usually holds data that goes back a certain period in time, such as five years. In contrast, standard OLTP systems usually only hold data as long as it is current or active; an order table, for example, may move orders to an archive table once they have been completed, shipped, and received by the customer. When we say that data warehouses and data marts hold aggregated data, we need to stress that there are many levels of aggregation in a typical data warehouse.
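As an illustration of question 1, below is a minimal sketch of what a Source Qualifier SQL override might look like. The CUSTOMERS and ORDERS tables and their columns are hypothetical; the point is that the join, filter, and aggregation are pushed down to the source database rather than done in the mapping. As far as overrides go, the columns returned are generally expected to match the Source Qualifier ports in number, order, and datatype.

-- Hypothetical Source Qualifier SQL override (illustrative table and
-- column names only). The join, filter, and aggregation run in the
-- source database instead of the Informatica mapping.
SELECT c.customer_id,
       c.customer_name,
       SUM(o.order_amount) AS total_amount          -- aggregate at the source
FROM   customers c
JOIN   orders    o ON o.customer_id = c.customer_id
WHERE  o.order_date >= TO_DATE('2023-01-01', 'YYYY-MM-DD')   -- filter at the source
GROUP  BY c.customer_id, c.customer_name

A Lookup SQL Override (question 2) follows the same idea: the generated lookup SELECT is edited and a WHERE clause added to restrict the rows brought into the lookup cache.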

5. Why did you choose STAR SCHEMA only? What are the benefits of STAR SCHEMA?
Ans.: Because of its denormalized structure, i.e., the dimension tables are denormalized. Why denormalize? The first (and often only) answer is speed. The OLTP structure is designed for data inserts, updates, and deletes, not for data retrieval, so we can often squeeze some speed out of it by denormalizing some of the tables and having queries go against fewer tables. These queries are faster because they perform fewer joins to retrieve the same recordset. Joins are also confusing to many end users; by denormalizing, we can present the user with a view of the data that is far easier for them to understand.
Benefits of STAR SCHEMA: far fewer tables; designed for analysis across time; simplifies joins; less database space; supports drilling in reports; flexibility to meet business and technical needs.

6. What is the difference between a view and a materialized view?
Ans.: A view contains only a query; whenever you execute the view, it reads from the base tables. A materialized view, by contrast, is loaded or replicated only once and then refreshed, which gives better query performance. Materialized views can be refreshed (1) on commit or (2) on demand, with the refresh methods complete, never, fast, and force. (A refresh example is sketched after question 7.)

7. Difference between OLTP and DWH?
Ans.: An OLTP system is basically application oriented (e.g., a purchase order is the functionality of an application), whereas a DWH is subject oriented (subject in the sense of customer, product, item, time).

OLTP                                        DWH
Application oriented                        Subject oriented
Used to run the business                    Used to analyze the business
Detailed data                               Summarized and refined data
Current, up-to-date data                    Snapshot data
Isolated data                               Integrated data
Repetitive access                           Ad-hoc access
Clerical user                               Knowledge user
Performance sensitive                       Performance relaxed
Few records accessed at a time (tens)       Large volumes accessed at a time (millions)
Read/update access                          Mostly read (batch update)
No data redundancy                          Redundancy present
Database size 100 MB - 100 GB               Database size 100 GB - a few terabytes
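To make the refresh options in question 6 concrete, here is a minimal Oracle sketch. The SALES table and column names are hypothetical; the view is built with REFRESH COMPLETE ON DEMAND and refreshed explicitly through DBMS_MVIEW.REFRESH. ON COMMIT refresh is also possible, subject to restrictions such as materialized view logs for FAST refresh.

-- Hypothetical materialized view over a SALES table (illustrative names).
-- COMPLETE ON DEMAND means the view is rebuilt only when explicitly asked.
CREATE MATERIALIZED VIEW sales_summary_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT product_id,
       SUM(sale_amount) AS total_sales,
       COUNT(*)         AS sale_count
FROM   sales
GROUP  BY product_id;

-- On-demand refresh, e.g. from a scheduler job:
-- EXEC DBMS_MVIEW.REFRESH('SALES_SUMMARY_MV', 'C');   -- 'C' = complete, 'F' = fast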

8. How will you move mappings from development to production database?
Ans.: Copy all the mappings from the development repository and paste them into the production repository. While pasting, you are prompted whether you want to replace or rename; if you choose replace, Informatica replaces the existing mappings and their source/target definitions in the production repository.

9. What are Stored Procedure transformations? What is the purpose of the SP transformation, and how did you use it in your project?
Ans.: Stored Procedure transformations can be connected or unconnected. An unconnected stored procedure is used for database-level activities such as pre- and post-load tasks; a connected stored procedure is used at the Informatica level, for example passing one parameter as input and capturing the return value from the stored procedure. The stored procedure types are: Normal (row-wise check); Pre-Load Source (capture source incremental data for incremental aggregation); Post-Load Source (delete temporary tables); Pre-Load Target (check available disk space); Post-Load Target (drop and recreate indexes).

10. How do you handle a session if some of the records fail? How do you stop the session in case of errors? Can it be achieved at mapping level or session level?
Ans.: It can be achieved at session level only. In the session property sheet, on the log files tab, there is an error-handling option "Stop on ___ errors". Based on the error count we set, the Informatica server stops the session.

11. What is an external table in Oracle? How does Oracle read the flat file?
Ans.: External tables are used to read flat files. Oracle internally writes a SQL*Loader-style script with a control file. (A minimal sketch appears after question 13.)

12. Variable v1 has values set as 5 in the Designer (default), 10 in the parameter file, and 15 in the repository. While running the session, which value will Informatica read?
Ans.: Informatica reads the value 15 from the repository.

13. A Joiner transformation is joining two tables S1 and S2. S1 has 10,000 rows and S2 has 1,000 rows. Which table will you set as the master for better performance of the Joiner transformation? Why?
Ans.: Set table S2 as the master table, because the Informatica server has to keep the master table in the cache; caching 1,000 rows instead of 10,000 gives better performance.
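For question 11, a minimal Oracle external table sketch is shown below. The directory path, file name, and columns are hypothetical; the ACCESS PARAMETERS clause is essentially SQL*Loader control-file syntax, which is how Oracle describes and reads the flat file.

-- Hypothetical external table over a CSV flat file (illustrative names).
CREATE OR REPLACE DIRECTORY ext_data_dir AS '/data/flatfiles';

CREATE TABLE customers_ext (
  customer_id   NUMBER,
  customer_name VARCHAR2(100),
  city          VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER                 -- flat-file driver, SQL*Loader-style
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('customers.csv')
)
REJECT LIMIT UNLIMITED;

Once created, the flat file can be queried like an ordinary table, e.g. SELECT * FROM customers_ext.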

14. Suppose a session is configured with a commit interval of 10,000 rows and the source has 50,000 rows. Explain the commit points for source-based commit and target-based commit. Assume appropriate values wherever required.
Ans.: Target-based commit (assume the buffer first fills at 7,500 rows and then every 15,000): commits at 15,000, 22,500, 30,000, 40,000, and 50,000 rows. Source-based commit (rows held in the buffer do not affect the commit point): commits at 10,000, 20,000, 30,000, 40,000, and 50,000 rows.

15. What is the use of PowerPlug?
Ans.: It provides third-party connectors to systems such as SAP, mainframes, and PeopleSoft.

16. What kind of test plan? What kind of validation do you do?
Ans.: In Informatica we create test SQL to compare record counts, and validation scripts to check that the data in the warehouse is loaded according to the incorporated logic.

17. How to use a sequence created in Oracle in Informatica?
Ans.: By using a Stored Procedure transformation. (A minimal sketch appears after question 19.)

18. What is a Code Page used for?
Ans.: A code page is used to identify characters that might be in different languages. If you are importing Japanese data into a mapping, you must select the Japanese code page for the source data.

19. How do you do error handling in Informatica?
Ans.: Error handling is very primitive. Log files can be generated that contain error details and codes; the error code can be checked in the troubleshooting guide and corrective action taken. The detail in the log file can be increased by setting an appropriate tracing level in the session properties. We can also configure a session to stop after 1, 2, or n errors.
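For question 17, one common approach is to wrap the Oracle sequence in a stored function and import that function into a Stored Procedure transformation. This is a minimal sketch; the sequence and function names are hypothetical.

-- Hypothetical sequence plus wrapper function (illustrative names).
-- The function returns the next sequence value so it can be called
-- from an Informatica Stored Procedure transformation for each row.
CREATE SEQUENCE emp_id_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE FUNCTION get_emp_id RETURN NUMBER
IS
  v_id NUMBER;
BEGIN
  SELECT emp_id_seq.NEXTVAL INTO v_id FROM dual;
  RETURN v_id;
END get_emp_id;
/

In the mapping, the transformation imported from get_emp_id would be called for each row (connected, or unconnected via an expression call) and its return value linked to the surrogate key port of the target.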
