
Informatica CCP:

1) Can we import VSAM sources in Informatica? Yes.
2) Applied rows and Affected rows are always the same in Informatica: False.
3) If you have a mapping with version 4 in folder1, export it, and import it into folder2, which contains no mappings, what is the version number of that mapping in folder2? Version 1.
4) Data maps and Controller.
5) Which nesting is possible? A Worklet within a Worklet.
6) Dependent/Independent repository objects relate to Reusable/Non-Reusable transformations.
7) How is error logging done in Informatica? With the PMERR tables.
8) Is row-level testing possible in Informatica?
9) Purging of objects: purging means permanently deleting a version from the repository. It is done from either the History window or the Query Results window.
10) Default permissions when a folder is created:
11) External Loader

12) Active and Passive Mapplets: an Active mapplet contains one or more active transformations; a Passive mapplet contains only passive transformations.
13) Master Gateway: it handles authentication, authorization, and licensing.
14) Status of a user logged in to the repository for the first time: Enabled.
15) When does the PowerCenter Repository Service run in exclusive mode? During its downtime.
16) When do you need to validate a mapping? Changes might invalidate the mappings that use Mapplets.
17) Which partition type is used for flat files? Pass Through.
18) Types of partitions: Pass Through, Round Robin, Hash Auto Keys, Hash User Keys, and Database Partitioning (a sketch contrasting round-robin and hash distribution appears after item 28).
1) Pass Through: rows pass through at the defined partition points without being redistributed, so data can be distributed unevenly.
2) Round Robin: data is distributed evenly across the partitions.
3) Hash: used where data is grouped. Two types:

Hash Auto Keys: the key is generated automatically from the grouped and sorted ports. Used for Rank, Sorter, and Unsorted Aggregator transformations.
Hash User Keys: the user defines the partition key.
4) Database Partitioning: only for source and target tables (DB2 targets only).
19) For flat files, only pass-through partitioning is available.
20) The following cannot be used in a Mapplet: Normalizer, COBOL sources, XML sources, another Mapplet.
21) Suitable partition types per transformation for better performance: Filter: Round Robin to balance the load. Source Qualifier: N partitions for N flat files. Sorter: partition the Sorter to eliminate overlapping groups, and the default partition point at the Aggregator can then be deleted. Target: Key Range to optimize writing to the target.
22) Partition points cannot be created for: Source definitions, Sequence Generator, XML Parser, XML targets, and unconnected transformations.
23) For Rank and Unsorted Aggregator, the default partition type is Hash Auto Keys.
24) For the Transaction Control transformation, and upstream and downstream of it, the partition type is Pass Through.
25) Workflow Recovery: recovering the workflow from the last failed task.

26) If workflow recovery is enabled and there is no Sorter or Rank transformation, pass-through partitioning is set by default.
27) Deployment Groups can be static or dynamic.
28) Pushdown optimization works with all partition types except Database Partitioning.
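The difference between round-robin and hash-key distribution in items 18 and 21 is easiest to see outside Informatica. The following is a minimal Python sketch (not PowerCenter code) with invented rows and an invented group key, showing that round robin balances row counts across partitions while hash partitioning keeps every row of a group in the same partition, which is what grouped transformations such as Sorter, Rank, and Aggregator need.

```python
from collections import defaultdict
from itertools import cycle

# Invented sample rows; the item name plays the role of the group key.
rows = [("apple", 3), ("pear", 1), ("apple", 5), ("plum", 2), ("pear", 4), ("plum", 7)]

def round_robin(rows, n_partitions):
    """Distribute rows one at a time across partitions: counts stay balanced."""
    partitions = defaultdict(list)
    for part, row in zip(cycle(range(n_partitions)), rows):
        partitions[part].append(row)
    return dict(partitions)

def hash_partition(rows, n_partitions):
    """Route every row with the same key to the same partition (hash-key style)."""
    partitions = defaultdict(list)
    for item, qty in rows:
        partitions[hash(item) % n_partitions].append((item, qty))
    return dict(partitions)

print(round_robin(rows, 2))     # even row counts, but groups are split across partitions
print(hash_partition(rows, 2))  # all "apple" rows land in one partition, etc.
```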

[Table: valid partition types (Round Robin, Hash Auto Keys, Hash User Keys, Key Range, Pass Through, Database Partitioning (DB2 only)) by object: Expression, Sorted Aggregator, Unsorted Aggregator, Joiner, Source (Relational), Target (Relational), Lookup, Normalizer, Rank, Target (Flat File).]


29) You must run the Repository Service in exclusive mode to enable version control for the repository.

If you have the team-based development option, you can enable version control for a new or existing repository. A versioned repository can store multiple versions of objects. If you enable version control, you can maintain multiple versions of an object, control development of the object, and track changes. You can also use labels and deployment groups to associate groups of objects and copy them from one repository to another. After you enable version control for a repository, you cannot disable it. When you enable version control for a repository, the repository assigns all versioned objects version number 1, and each object has an active status.

30) For CDC sessions, PWXPC controls the timing of commit processing and uses source-based commit processing in PowerCenter.

31) The following properties can affect outbound IDoc session performance: pipeline partitioning, outbound IDoc validation, and row-level processing.

32) Terminating Conditions. Terminating conditions determine when the Integration Service stops reading from the source and ends the session. You can define the following terminating conditions: Idle Time, Packet Count, and Reader Time Limit.

33) You can configure cache partitioning for a Lookup transformation, and you can create multiple partitions for both static and dynamic lookup caches. The cache for a pipeline Lookup transformation is built in an independent pipeline from the pipeline that contains the Lookup transformation; you can create multiple partitions in both pipelines.

34) Cache Partitioning Lookup Transformations. Use cache partitioning for static and dynamic caches, and for named and unnamed caches. When you create a partition point at a connected Lookup transformation, use cache partitioning under the following conditions: the partition type at the Lookup transformation is hash auto-keys; the lookup condition contains only equality operators; and the database is configured for case-sensitive comparison. For example, if the lookup condition contains a string port and the database is not configured for case-sensitive comparison, the Integration Service does not perform cache partitioning and writes the following message to the session log:

CMN_1799 Cache partitioning requires case sensitive string comparisons. Lookup will not use partitioned cache as the database is configured for case insensitive string comparisons.
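To see why the hash auto-keys requirement matters, note what cache partitioning buys: each partition only has to build and probe the slice of the lookup cache whose keys hash to it. The Python sketch below is purely illustrative (the lookup table, key names, and partition count are invented); it models per-partition lookup caches under hash routing with an equality lookup condition, not actual PowerCenter behavior.

```python
# Illustrative model of cache partitioning at a connected Lookup with hash auto-keys:
# each partition builds and probes only the slice of the cache whose keys hash to it,
# which is why the lookup condition must be a pure equality match.
lookup_source = {  # invented lookup table: customer id -> tier
    "CUST01": "Gold", "CUST02": "Silver", "CUST03": "Gold", "CUST04": "Bronze",
}
N_PARTITIONS = 2

def partition_of(key):
    return hash(key) % N_PARTITIONS

# One cache per partition instead of one full-size cache per partition.
caches = [{} for _ in range(N_PARTITIONS)]
for key, value in lookup_source.items():
    caches[partition_of(key)][key] = value

def lookup(key):
    # A row routed to partition p only ever probes cache p.
    return caches[partition_of(key)].get(key)

assert lookup("CUST03") == "Gold"
print([len(c) for c in caches])  # the cache-building work is split across partitions
```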

The Integration Service uses cache partitioning when you create a hash auto-keys partition point at the Lookup transformation. When the Integration Service creates cache partitions, it begins creating caches for the Lookup transformation when the first row of any partition reaches the Lookup transformation. If you configure the Lookup transformation for concurrent caches, the Integration Service builds all caches for the partitions concurrently.

35) If you configure multiple partitions in a session on a grid that uses an uncached Sequence Generator transformation, the sequence numbers the Integration Service generates for each partition are not consecutive.
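One way to picture item 35: if each partition draws its sequence values independently, for example in blocks, the values stay unique but are not globally consecutive. The Python sketch below is an invented model for illustration only (block size and partition count are assumptions), not how the Integration Service actually allocates sequence values.

```python
# Invented model: each partition reserves blocks of values from a shared counter,
# so every generated number is unique, but the numbers one partition hands out are
# not consecutive with the numbers other partitions hand out.
BLOCK_SIZE = 10
_next_block_start = 1

def _reserve_block():
    global _next_block_start
    start = _next_block_start
    _next_block_start += BLOCK_SIZE
    return list(range(start, start + BLOCK_SIZE))

class PartitionSequence:
    def __init__(self):
        self._values = _reserve_block()
    def next_value(self):
        if not self._values:
            self._values = _reserve_block()
        return self._values.pop(0)

p0, p1 = PartitionSequence(), PartitionSequence()
print([p0.next_value() for _ in range(3)])  # 1, 2, 3
print([p1.next_value() for _ in range(3)])  # 11, 12, 13 -- not consecutive with p0
```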

36) Types of Caches. Use the following types of caches to increase performance:

Shared cache. You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping, and a named cache between transformations in the same or different mappings.

Persistent cache. To save and reuse the cache files, you can configure the transformation to use a persistent cache. Use this feature when you know the lookup table does not change between session runs. Using a persistent cache can improve performance because the Integration Service builds the memory cache from the cache files instead of from the database.

37) Indexing the Lookup Table. The Integration Service needs to query, sort, and compare values in the lookup condition columns, so the index needs to include every column used in a lookup condition. You can improve performance for the following types of lookups:

Cached lookups. To improve performance, index the columns in the lookup ORDER BY statement. The session log contains the ORDER BY statement.

Uncached lookups. To improve performance, index the columns in the lookup condition. The Integration Service issues a SELECT statement for each row that passes into the Lookup transformation.
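A persistent cache trades a one-time file write for skipping the database read on later runs. The sketch below is a minimal Python illustration of that trade-off, assuming an invented load_from_database function and cache file path; it is not the PowerCenter cache file format.

```python
import json
from pathlib import Path

CACHE_FILE = Path("lookup_cache.json")  # hypothetical cache file location

def load_from_database():
    # Stand-in for the expensive lookup query against the source database.
    return {"CUST01": "Gold", "CUST02": "Silver"}

def build_lookup_cache(persistent=True):
    """Reuse the saved cache file when it exists; otherwise query and save it."""
    if persistent and CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())   # fast path: no database hit
    cache = load_from_database()
    if persistent:
        CACHE_FILE.write_text(json.dumps(cache))     # reused on the next session run
    return cache

print(build_lookup_cache().get("CUST01"))
```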

38) If you create multiple partitions in the session, the Integration Service uses cache partitioning: it creates one disk cache for the Sorter transformation and one memory cache for each partition, and it sorts each partition separately.

Cache partitioning is used for the following transformations:

Aggregator transformation: you create multiple partitions in a session with an Aggregator transformation; you do not have to set a partition point at the Aggregator transformation. For more information, see Aggregator Caches.

Joiner transformation: you create a partition point at the Joiner transformation. For more information, see Joiner Caches.

Lookup transformation: you create a hash auto-keys partition point at the Lookup transformation. For more information, see Lookup Caches.

Rank transformation: you create multiple partitions in a session with a Rank transformation; you do not have to set a partition point at the Rank transformation. For more information, see Rank Caches.

Sorter transformation: you create multiple partitions in a session with a Sorter transformation; you do not have to set a partition point at the Sorter transformation. For more information, see Sorter Caches.

39) When you delete an object in a versioned repository, the repository removes the object from view in the Navigator and the workspace but does not remove it from the repository database. Instead, the repository creates a new version of the object and changes the object status to Deleted. You can recover a deleted object by changing its status to Active.

Recovering a Deleted Object


You can recover a deleted object by changing the object status to Active. This makes the object visible in the Navigator and workspace. Use a query to search for deleted objects. You use the Repository Manager to recover deleted objects. Complete the following steps to recover a deleted object:

1. Create and run a query to search for deleted objects in the repository. You can search for all objects marked as deleted, or add conditions to narrow the search. Include the following condition when you query the repository for deleted objects: Version Status Is Equal To Deleted.
2. Change the status of the object you want to recover from Deleted to Active.
3. If the recovered object has the same name as another object that you created after you deleted the recovered object, you must rename the object.
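As a toy model of those three steps, the Python sketch below keeps versioned objects with a status flag, queries for Deleted ones, reactivates one, and renames it if an active object has already taken its name. All class and field names are invented for illustration; this is not the Repository Manager API.

```python
from dataclasses import dataclass

@dataclass
class RepoObject:          # invented stand-in for a versioned repository object
    name: str
    version: int
    status: str            # "Active" or "Deleted"

repo = [
    RepoObject("m_load_sales", 3, "Deleted"),
    RepoObject("m_load_sales", 1, "Active"),   # re-created after the delete
]

# Step 1: query for deleted objects (Version Status Is Equal To Deleted).
deleted = [o for o in repo if o.status == "Deleted"]

# Step 2: change the status of the object to recover from Deleted to Active.
target = deleted[0]
target.status = "Active"

# Step 3: rename if an active object with the same name already exists.
if any(o is not target and o.name == target.name and o.status == "Active" for o in repo):
    target.name += "_recovered"

print(target)
```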

The Repository Manager can purge versions at the object, folder, or repository level, for a single object version or multiple object versions. Purging By Object Version is available from the View History window and the Query Results window; purging Based on Criteria is available from the Navigator, the View History window, and the Query Results window.



The following table describes each recoverable task status:

Aborted: You abort the workflow or task in the Workflow Monitor or through pmcmd. You can also choose to abort all running workflows when you disable the service or service process in the Administration Console, or configure a session to abort based on mapping conditions. You can recover the workflow in the Workflow Monitor to recover the task, or recover the workflow using pmcmd.

Stopped: You stop the workflow or task in the Workflow Monitor or through pmcmd. You can also choose to stop all running workflows when you disable the service or service process in the Administration Console. You can recover the workflow in the Workflow Monitor to recover the task, or recover the workflow using pmcmd.

Failed: The Integration Service failed the task due to errors. You can recover a failed task using workflow recovery when the workflow is configured to suspend on task failure. When the workflow is not suspended, you can recover a failed task by recovering just the session or recovering the workflow from the session. You can fix the error and recover the workflow in the Workflow Monitor, or recover the workflow using pmcmd.

Terminated: The Integration Service stops unexpectedly or loses its network connection to the master service process. You can recover the workflow in the Workflow Monitor, or recover the workflow using pmcmd after the Integration Service restarts.

Task Recovery Strategies


Each task in a workflow has a recovery strategy. When the Integration Service recovers a workflow, it recovers tasks based on the recovery strategy:

Restart task. When the Integration Service recovers a workflow, it restarts each recoverable task that is configured with a restart strategy. You can configure Session and Command tasks with a restart recovery strategy. All other tasks have a restart recovery strategy by default.

Fail task and continue workflow. When the Integration Service recovers a workflow, it does not recover the task. The task status becomes failed, and the Integration Service continues running the workflow. Configure a fail recovery strategy if you want to complete the workflow but do not want to recover the task. You can configure Session and Command tasks with the fail task and continue workflow recovery strategy.

Resume from the last checkpoint. The Integration Service recovers a stopped, aborted, or terminated session from the last checkpoint. You can configure a Session task with a resume strategy.

The recovery strategy for each task type:

Assignment: Restart task.
Command: Restart task, or Fail task and continue workflow. Default is fail task and continue workflow.
Control: Restart task.
Decision: Restart task.
Email: Restart task. The Integration Service might send duplicate email.
Event-Raise: Restart task.
Event-Wait: Restart task.
Session: Resume from the last checkpoint, Restart task, or Fail task and continue workflow. Default is fail task and continue workflow.
Timer: Restart task. If you use a relative time from the start time of a task or workflow, set the timer with the original value less the passed time.
Worklet: n/a. The Integration Service does not recover a worklet. You can recover the session in the worklet by expanding the worklet in the Workflow Monitor and choosing Recover Task.

Session Task Strategies


When you configure a session for recovery, you can recover the session when you recover a workflow, or you can recover the session without running the rest of the workflow. When you configure a session, you can choose a recovery strategy of fail, restart, or resume:

Resume from the last checkpoint. The Integration Service saves the session state of operation and maintains target recovery tables. If the session aborts, stops, or terminates, the Integration Service uses the saved recovery information to resume the session from the point of interruption.

Restart task. The Integration Service runs the session again when it recovers the workflow. When you recover with restart task, you might need to remove the partially loaded data in the target or design a mapping to skip the duplicate rows.

Fail task and continue workflow. When the Integration Service recovers a workflow, it does not recover the session. The session status becomes failed, and the Integration Service continues running the workflow.
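The value of a resume strategy over a plain restart is easiest to see in a toy loop: with a saved checkpoint, a recovered run skips the rows already committed instead of reloading them, which is exactly what forces the cleanup step under a restart strategy. The sketch below is an invented Python model, not the Integration Service's recovery-table mechanism; the checkpoint file and row source are assumptions.

```python
import json
from pathlib import Path

CHECKPOINT = Path("session_checkpoint.json")      # stand-in for saved recovery state
SOURCE_ROWS = ["row-%d" % i for i in range(1, 101)]

def load_checkpoint():
    """Index of the last committed row, or 0 on a fresh run."""
    return json.loads(CHECKPOINT.read_text())["committed"] if CHECKPOINT.exists() else 0

def run_session(fail_after=None):
    committed = load_checkpoint()
    for i in range(committed + 1, len(SOURCE_ROWS) + 1):
        if fail_after is not None and i > fail_after:
            raise RuntimeError("session terminated after %d rows" % fail_after)
        # "Load" row i, then persist the checkpoint so a recovery run can resume here.
        CHECKPOINT.write_text(json.dumps({"committed": i}))
    print("session complete; resumed from row %d" % (committed + 1))

try:
    run_session(fail_after=40)   # first run terminates mid-load after 40 rows
except RuntimeError as err:
    print(err)
run_session()                    # recovery resumes at row 41 with no duplicate loads
CHECKPOINT.unlink()              # clean up the demo checkpoint file
```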

Standalone Command tasks. You can use service, service process, workflow, and worklet variables in standalone Command tasks. You cannot use session parameters, mapping parameters, or mapping variables in standalone Command tasks; the Integration Service does not expand these types of parameters and variables there.

Pre- and post-session shell commands. You can use any parameter or variable type that you can define in the parameter file.

Executing Commands in the Command Task


The Integration Service runs shell commands in the order you specify them. If the Load Balancer has more Command tasks to dispatch than the Integration Service can run at the time, the Load Balancer places the tasks it cannot run in a queue. When the Integration Service becomes available, the Load Balancer dispatches tasks from the queue in the order determined by the workflow service level. For more information about how the Load Balancer uses service levels, see the PowerCenter Administrator Guide.

You can choose to run a command only if the previous command completed successfully, or run all commands in the Command task regardless of the result of the previous command. If you configure multiple commands in a Command task to run on UNIX, each command runs in a separate shell.

If you choose to run a command only if the previous command completes successfully, the Integration Service stops running the rest of the commands and fails the task when one of the commands in the Command task fails. If you do not choose this option, the Integration Service runs all the commands in the Command task and treats the task as completed even if a command fails. If you want the Integration Service to run the next command only if the previous command completes successfully, select Fail Task if Any Command Fails on the Properties tab of the Command task.
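A minimal Python sketch of that behavior, assuming an invented list of shell commands: with fail_on_error set (the analogue of Fail Task if Any Command Fails), the first failing command fails the task and the remaining commands are skipped; without it, every command runs and the task is treated as completed even if a command fails.

```python
import subprocess

commands = ["echo extract", "exit 1", "echo load"]  # hypothetical shell commands

def run_command_task(commands, fail_on_error):
    """Run shell commands in order, mimicking the two Command task modes."""
    failed = False
    for cmd in commands:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            failed = True
            if fail_on_error:
                return "Failed"          # stop at the first failing command
    # Without the option, all commands run and the task completes either way.
    return "Succeeded" if not failed else "Completed (with command failures)"

print(run_command_task(commands, fail_on_error=True))   # Failed; 'echo load' is skipped
print(run_command_task(commands, fail_on_error=False))  # all commands run
```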

Control Task:
Use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link condition. A parent workflow or worklet is the workflow or worklet that contains the Control task.

Control options and their descriptions:

Fail Me: Marks the Control task as Failed. The Integration Service fails the Control task if you choose this option. If you choose Fail Me on the Properties tab and also choose Fail Parent If This Task Fails on the General tab, the Integration Service fails the parent workflow.
Fail Parent: Marks the status of the workflow or worklet that contains the Control task as failed after the workflow or worklet completes.
Stop Parent: Stops the workflow or worklet that contains the Control task.
Abort Parent: Aborts the workflow or worklet that contains the Control task.
Fail Top-Level Workflow: Fails the workflow that is running.
Stop Top-Level Workflow: Stops the workflow that is running.
Abort Top-Level Workflow: Aborts the workflow that is running.

To increase performance, specify partition types at the following partition points in the pipeline:

Source Qualifier transformation. To read data from multiple flat files concurrently, specify one partition for each flat file in the Source Qualifier transformation. Accept the default partition type, pass-through.

Filter transformation. Since the source files vary in size, each partition processes a different amount of data. Set a partition point at the Filter transformation and choose round-robin partitioning to balance the load going into the Filter transformation.

Sorter transformation. To eliminate overlapping groups in the Sorter and Aggregator transformations, use hash auto-keys partitioning at the Sorter transformation. This causes the Integration Service to group all items with the same description into the same partition before the Sorter and Aggregator transformations process the rows. You can delete the default partition point at the Aggregator transformation.

Target. Since the target tables are partitioned by key range, specify key range partitioning at the target to optimize writing data to the target.
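Key range partitioning at the target assigns each row to a partition based on which configured range its key falls into, so writes to each partitioned target stay local. The Python sketch below is an illustration with invented ranges and order IDs, not the Integration Service's implementation.

```python
import bisect

# Invented key ranges: partition 0 gets ids below 1000, partition 1 gets 1000-4999,
# partition 2 gets 5000 and above (mirrors a key range configuration at the target).
RANGE_UPPER_BOUNDS = [1000, 5000]   # boundaries between target partitions

def target_partition(order_id):
    """Pick the target partition whose key range contains order_id."""
    return bisect.bisect_right(RANGE_UPPER_BOUNDS, order_id)

for order_id in [42, 1500, 999, 7200, 5000]:
    print(order_id, "-> target partition", target_partition(order_id))
```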
