
Configuration Manager

Administrative UI to manage system configuration for your BI Applications


Global parameters:
- Language setup
- Currency setup
- Extract dates
Helps you manage setup data, and load and extend the warehouse
Monitor and manage the Load Plans that perform the ETL
Enables migration of configuration information from file
BIACM helps maintain and monitor Domain Mappings and Domain Member Mappings, Source Domains, Warehouse Domains, and Warehouse Domain Hierarchies
You can edit current Domains or extend them with new Domain Mappings
The Functional Setup Manager tasks screen links to the specific BIACM screen for a domain so you can edit that mapping
BIACM can be used to maintain Global and Data Load Parameters such as INITIAL_EXTRACT_DATE, IS_SDS_DEPLOYED, etc.

Ability to export and import configuration from file.


BI Applications Configuration Manager (BIACM) provides prebuilt metadata for configuring, running, and monitoring the E-LT processes for source-specific containers.
Load Plan Generator (LPG) is a plug-in to BIACM that generates Load Plans to load a desired subset of fact tables into the BI Applications Data Warehouse.
The Load Plan life cycle consists of the following phases:
Define Load Plan
- Define Load Plan properties
- Choose the appropriate data source
- Select the required fact groups
Generate Load Plan
- LPG generates the plan based on design-time and run-time dependencies
- LPG trims out unnecessary tasks
- Creates steps across the three phases SDE, SIL, and PLP
- Adds additional steps for table maintenance
Execute Load Plan
- Executes the steps of the Load Plan
- Creates Load Plan instances
- Stores each run's information under the same Load Plan instance if restarted
Monitor Load Plans
- Monitor Load Plan runs from BIACM
- Monitor Load Plan runs from ODI Operator
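
Below is a minimal sketch, in Python, of the kind of dependency resolution and trimming the generate phase performs. The task names, phases, and dependency graph are invented for illustration; the real LPG works from much richer metadata in the ODI repository.

```python
# Toy task graph: names, phases, and dependencies are invented.
TASKS = {
    "SDE_Fact_Sales":     {"phase": "SDE", "deps": []},
    "SDE_Dim_Customer":   {"phase": "SDE", "deps": []},
    "SIL_Dim_Customer":   {"phase": "SIL", "deps": ["SDE_Dim_Customer"]},
    "SIL_Fact_Sales":     {"phase": "SIL",
                           "deps": ["SDE_Fact_Sales", "SIL_Dim_Customer"]},
    "PLP_Agg_Sales":      {"phase": "PLP", "deps": ["SIL_Fact_Sales"]},
    "SIL_Fact_Inventory": {"phase": "SIL", "deps": []},  # unrelated; trimmed
}

def generate_plan(selected):
    """Keep only tasks the selection depends on, ordered SDE -> SIL -> PLP."""
    needed, order = set(), []

    def visit(task):
        if task in needed:
            return
        needed.add(task)
        for dep in TASKS[task]["deps"]:
            visit(dep)
        order.append(task)  # post-order: dependencies come first

    for task in selected:
        visit(task)

    phase_rank = {"SDE": 0, "SIL": 1, "PLP": 2}
    # A stable sort keeps dependency order within each phase.
    return sorted(order, key=lambda t: phase_rank[TASKS[t]["phase"]])

print(generate_plan(["PLP_Agg_Sales"]))
# ['SDE_Fact_Sales', 'SDE_Dim_Customer', 'SIL_Dim_Customer',
#  'SIL_Fact_Sales', 'PLP_Agg_Sales']
```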


Load plans can be defined from Manage Load Plans
New Load Plans can either be created from scratch or copied from existing Load
Plans
The Load Plan types are designed to clearly separate the SDE and SIL/PLP phases.
This makes it possible to limit source system downtime to the SDE phase only.

Load Plan Types


These are the available Load Plan types:
Source Extract (SDE)
Includes tasks that extract the required tables from the source
Source Extract and Load (SDE, SIL & PLP)
Includes tasks that extract from the source, load into the warehouse, and run related post-processing
Warehouse Load (SIL & PLP)
Includes only the load into the warehouse and post-processing
Domain-only Extract and Load (SDE & SIL)
Extracts only Domain-related records from the source

Load Plan Generation


Once defined, Load Plans must be generated before they can be run.
When Generate is clicked, LPG resolves all dependencies based on the metadata stored in the repository.

Load Plan Execution


Once generated, Load Plans can be run by clicking the Execute button.
Multiple Load Plans should not be run in parallel

Load Plan Monitoring


Load Plans can be monitored from the ODI Console by clicking Manage Execution
Status

Restarting Load Plans


Load Plans can be restarted if a failure is encountered.
To avoid inefficient ETL, they can be restarted either from the point of failure or according to the restart settings in the Load Plan.

Restarting Load Plans


Any serial Load Plan step is always set to Restart from Failure.
In case of failure in a sequence of sub-steps, the load resumes from the failed step.
Any parallel Load Plan step is always set to Restart from Failed Children.
This ensures that only the failed steps in a set of parallel steps are executed again.
Any Scenario step is always set to Restart from Failure.
If a sub-step of a scenario fails, restart executes the scenario from the failed sub-step.
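
As a toy illustration of these three behaviours, the sketch below re-derives which steps run again after a failure. Step names and statuses are invented; only the restart semantics mirror the text above.

```python
# Toy restart logic; not ODI's implementation, just the semantics above.
def restart(steps, mode):
    """Return the step names to execute again after a failure.

    steps: ordered list of (name, status), status in {'done', 'failed', 'not_run'}.
    """
    if mode == "restart_from_failure":           # serial steps and scenarios
        # Resume at the first failed step and re-run everything after it.
        for i, (_, status) in enumerate(steps):
            if status == "failed":
                return [name for name, _ in steps[i:]]
        return []
    if mode == "restart_from_failed_children":   # parallel steps
        # Only the failed children run again; completed siblings are skipped.
        return [name for name, status in steps if status == "failed"]
    raise ValueError(mode)

serial = [("s1", "done"), ("s2", "failed"), ("s3", "not_run")]
parallel = [("p1", "done"), ("p2", "failed"), ("p3", "done"), ("p4", "failed")]
print(restart(serial, "restart_from_failure"))            # ['s2', 's3']
print(restart(parallel, "restart_from_failed_children"))  # ['p2', 'p4']
```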

Handling Load Plan Failures


If a Load Plan cannot be restarted because of failures within a single step, that step can be marked as Complete so that the rest of the Load Plan is not held up.
The failed scenario then needs to be handled and rerun separately.
Care needs to be taken to handle all dependencies when a scenario is rerun separately.
Care also needs to be taken to reset the scenario variables to the values of the Load Plan run.

Handling Load Plan Failures


If a scenario is regenerated while a Load Plan is in a failed state, the Load Plan does not pick up the latest scenario.
In this case, either restart the Load Plan from scratch, or mark the failed step as Complete and execute the scenario separately with the right variable values.

Stopping Load Plans


Load Plans can be stopped by using Stop Load Plan from the context menu in ODI Studio or the ODI Console.
Load Plans can be stopped after a logical point of completion (Stop Normal) or immediately at the current point of execution (Stop Immediate).

Source Dependent Extract (SDE):


Is source-specific and supports universal business adapters
Exposes simplified business entities from complex source systems
Converts source-specific data to the universal staging table format
Is lightweight and designed for performance and parallelism
Is extensible

Source Independent Load (SIL):


Encapsulates warehouse load logic
Handles:
Slowly changing dimensions
Key lookup resolution / surrogate key generation
Insert/update strategies
Currency conversion
Data consolidation

Uses bulk loaders

Post Load Process (PLP):


Post Load Processes are executed after the dimensions and facts are populated
A typical example would be to transform a base fact table into an aggregate table
PLP tasks are source independent
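
The sketch below illustrates the aggregate example just mentioned; the table layouts and names (standing in for a base fact and its aggregate) are invented.

```python
# Toy rollup from a base fact to an aggregate, as a PLP task might produce.
from collections import defaultdict

base_fact = [  # pretend base fact rows: (product, period, amount)
    ("desk",  "2014-01", 100.0),
    ("desk",  "2014-01",  50.0),
    ("chair", "2014-02",  75.0),
]

aggregate = defaultdict(float)  # pretend aggregate keyed by (product, period)
for product, period, amount in base_fact:
    aggregate[(product, period)] += amount

print(dict(aggregate))
# {('desk', '2014-01'): 150.0, ('chair', '2014-02'): 75.0}
```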

Load plan types include:


- Source Extract (SDE): Includes only those tasks that extract from the source and load data into the staging tables.
- Source Extract and Load (SDE, SIL, and PLP): Includes all tasks to extract from
the source and load the data warehouse tables.
- Warehouse Load (SIL and PLP): Includes only those tasks that extract from the
staging tables and load the data warehouse tables.
- Domain-Only: Includes all tasks required to extract domain-related records from
the source and load the data into the domain-related tables in the Oracle Business
Analytics Warehouse.

Functional Setup Manager

Administrative UI to track and manage implementation projects and the required functional setup steps
Enables you to select the BI Applications offerings and functional areas that are required
Generates a list of configuration tasks that need to be completed before a Full Load is run
The generated tasks can be assigned to different developers and monitored from the Functional Setup Manager

The Oracle Business Analytics Warehouse supports the following currency types:
Document currency. The currency in which the transaction was performed and in which the related document was stored. For example, if your company purchases a desk from a supplier in France, the document currency will likely be Euros.
Local currency. The accounting currency of the legal entity or ledger in which the transaction occurred.
Global currency. The Oracle Business Analytics Warehouse stores up to three group currencies, configured using the Oracle Business Intelligence Data Warehouse Administration Console. For example, if your company is a multinational enterprise with headquarters in the United States, USD (US Dollars) will likely be one of the three global currencies.
The global currencies must be configured before loading data so that exchange rates can be applied to the transactional data as it is loaded into the data warehouse. For every monetary amount extracted from the source, the ETL mapping stores the document and local amounts in the target table. It also stores the exchange rates required to convert the document amount into each of the three global currencies. Generally, there will be eight columns on a fact table for each amount: one local amount, one document amount, three global amounts, and three exchange rates used to calculate the global amounts.
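
A minimal sketch of how those eight columns relate for a single amount follows. The column names, currency codes, and rates are invented for illustration and do not match the warehouse's actual column naming.

```python
# One monetary amount on a fact row: document and local amounts, plus three
# global amounts and the three exchange rates used to derive them.
doc_amount = 1000.0          # document currency amount (say, EUR)
loc_exchange_rate = 1.0      # document -> local currency
global_rates = {             # document -> each configured global currency
    "GLOBAL1": 1.35,         # e.g. EUR -> USD
    "GLOBAL2": 0.85,         # e.g. EUR -> GBP
    "GLOBAL3": 140.0,        # e.g. EUR -> JPY
}

fact_row = {
    "DOC_AMT": doc_amount,
    "LOC_AMT": doc_amount * loc_exchange_rate,
}
for name, rate in global_rates.items():
    fact_row[f"{name}_EXCHANGE_RATE"] = rate     # rate stored on the row
    fact_row[f"{name}_AMT"] = doc_amount * rate  # converted global amount

print(fact_row)  # eight amount-related columns in total
```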
To configure the global currencies you want to report:
1. In DAC, go to the Design view, and select the appropriate custom container
from the drop-down list.
2. Display the Source System Parameters tab.
3. Locate the following parameters, and set the currency code values for them in the Value field (making sure the values are consistent with the source system exchange rate table):
$$GLOBAL1_CURR_CODE (for the first global currency).
$$GLOBAL2_CURR_CODE (for the second global currency).
$$GLOBAL3_CURR_CODE (for the third global currency).
When Oracle BI Applications converts your transaction records' amounts from document currency to the global currencies, it also requires the exchange rate types to use for the conversion. For each of the global currencies, Oracle BI Applications enables you to specify the exchange rate type to use, and it provides three global exchange rate types for you to configure.
Oracle BI Applications also converts your transaction records' amounts from document currency to local currency. Local currencies are the base currencies in which your accounting entries and accounting reports are recorded. To perform this conversion, Oracle BI Applications enables you to configure the rate type to use when converting the document currency to the local currency.
To configure exchange rate types:
1. In DAC, go to the Design view, and select the appropriate custom container
from the drop-down list.
2. Display the Source System Parameters tab.
3. Locate the following DAC parameters and set the exchange rate type values for them in the Value field:
$$GLOBAL1_RATE_TYPE (rate type for GLOBAL1_CURR_CODE)
$$GLOBAL2_RATE_TYPE (rate type for GLOBAL2_CURR_CODE)
$$GLOBAL3_RATE_TYPE (rate type for GLOBAL3_CURR_CODE)
$$DEFAULT_LOC_RATE_TYPE (rate type for document-to-local currency conversion)
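
To show what a rate type buys you, here is a hedged sketch of a rate lookup keyed by currency pair, rate type, and date. The in-memory "rate table" and its layout are invented stand-ins for W_EXCH_RATE_G.

```python
import datetime

rates = {
    # (from_curr, to_curr, rate_type) -> list of (effective_date, rate)
    ("EUR", "USD", "Corporate"): [
        (datetime.date(2014, 1, 1), 1.37),
        (datetime.date(2014, 2, 1), 1.35),
    ],
}

def lookup_rate(from_curr, to_curr, rate_type, on_date):
    """Return the latest rate effective on or before on_date."""
    candidates = [
        (eff, rate)
        for eff, rate in rates.get((from_curr, to_curr, rate_type), [])
        if eff <= on_date
    ]
    if not candidates:
        raise LookupError("no rate found")
    return max(candidates)[1]  # latest effective date wins

print(lookup_rate("EUR", "USD", "Corporate", datetime.date(2014, 2, 15)))  # 1.35
```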

Exchange Rate ETL:


The exchange rates are stored in the table W_EXCH_RATE_G. For PeopleSoft implementations, W_EXCH_RATE_G is loaded from W_EXCH_RATE_GS, which is populated by the mapping SDE_PSFT_ExchangeRateDimension.

http://oracletuition.blogspot.in/2014/01/bi-apps-111171-configurationdocument.html
Tasks in Functional Setup Manager
The configuration tasks listed by Functional Setup Manager can be of the following types:
Tasks to configure Data Load Parameters
TIME_GRAIN, INITIAL_EXTRACT_DATE, etc.
Tasks to manage Domains and Mappings
Domain maps for the employee dimension
Tasks to configure Reporting Parameters
FSCM_MASTER_ORG
Tasks that provide information
Completing the basic setup tasks is enough for a Full Load to run, but the loaded data may not be accurate.
All the recommended tasks need to be completed for an accurate load of the data into the warehouse.
Once an implementation is started, the tasks can be tracked through FSM and monitored for completion.

Creating an Implementation Project


The steps to create an implementation project are not displayed in the slide. To create an implementation project, in the Tasks bar, select Implementations > Manage Implementation Projects to display the Manage Implementation Projects page. Then choose Actions > Create to enter a name for the project and select the offering to implement. To make offerings easier to manage, Oracle recommends that you deploy one offering per project. In other words, if you are deploying three offerings, then create three implementation projects.
In this example, you have installed Oracle Financial Analytics and you create an implementation project to configure the ETL for Oracle Financial Analytics. To configure ETL for Oracle Financial Analytics, you must create at least one implementation project. When you create an implementation project, you select the offering to deploy as part of that project.
Once you create an implementation project, FSM generates the tasks required to configure the specified offering. By default, the tasks are assigned to the BI Administrator user. If required, you can optionally assign tasks to functional developers, who will then perform the tasks. Use the Go to Task column to complete functional configuration tasks.

Typical Financial Analytics Configurations


Financial Analytics requires various tasks to be completed; some common ones are explained below:
Configure Initial Extract Date
GL account and GL Segment dimension configuration for EBS
Manage Domains and Member Mappings for Employee Dimension
Configure Data Load Parameters for Soft Delete
Specify Gregorian Calendar Date Ranges
Specify the Ledger or Set of Books for which General Ledger Data is Extracted

Configure Initial Extract Date


INITIAL_EXTRACT_DATE is a date parameter applied to SDE mappings in order to limit the transactional data extracted from the source into the data warehouse.
The initial extract date is set in BIACM.
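
The sketch below shows how such a date parameter narrows an extract. The source table, filter column, and bind style are assumptions for illustration; real SDE mappings are generated by ODI rather than hand-written.

```python
import datetime

INITIAL_EXTRACT_DATE = datetime.date(2010, 1, 1)

# Hypothetical extract query: table and column names are assumptions.
extract_sql = """
    SELECT *
      FROM AP_INVOICES_ALL
     WHERE LAST_UPDATE_DATE >= :initial_extract_date
"""
params = {"initial_extract_date": INITIAL_EXTRACT_DATE}
print(extract_sql.strip(), params)  # rows before the date are never extracted
```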

Configure GL Accounts and GL Segments for EBS


GL accounts are key flexfields stored in the GL_CODE_COMBINATIONS table.
The accounting flexfield structure is stored in FND_ID_FLEX_STRUCTURES.
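
A hedged sketch of reading a code combination against its flexfield structure follows. Only the two table names come from the text above; the simplified columns and sample values are assumptions about the real EBS layout.

```python
code_combinations = [  # pretend GL_CODE_COMBINATIONS rows (simplified)
    {"CODE_COMBINATION_ID": 1001, "CHART_OF_ACCOUNTS_ID": 101,
     "SEGMENT1": "01", "SEGMENT2": "520", "SEGMENT3": "5250"},
]
flex_structures = {  # pretend FND_ID_FLEX_STRUCTURES: structure -> segment labels
    101: ["Company", "Department", "Account"],
}

for row in code_combinations:
    labels = flex_structures[row["CHART_OF_ACCOUNTS_ID"]]
    segments = [row[f"SEGMENT{i + 1}"] for i in range(len(labels))]
    print(row["CODE_COMBINATION_ID"], dict(zip(labels, segments)))
# 1001 {'Company': '01', 'Department': '520', 'Account': '5250'}
```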

Setting up a Full Load


Several steps need to be completed successfully in order to complete a Full Load.
Overall, the steps are as follows:
Setup Source System Details in BIACM
Enable the licensed Offerings and Modules in FSM
Create Implementation Projects in FSM
Assign Pending Tasks to Functional Developers in FSM
Complete Pending Tasks in FSM
Monitor all necessary tasks completion in FSM
Perform Domains and Mappings configuration
Generate a Load Plan
Execute and Run a Full Load
In FSM you can also assign owners and due dates to tasks and manage domain setup, which ensures setups are complete and easily interrogated.

Admin Server

A WebLogic Domain consists of one Admin Server.


The Admin Server hosts the J2EE Admin Console application, which is WebLogic's UI for providing management functions across the WebLogic domain:
http://localhost:7001/console
In addition, Oracle Enterprise Manager is deployed to the Admin Server. EM provides system management and monitoring capability across the whole Oracle BI Domain:
http://localhost:7001/em

Managed Server
Contains deployed J2EE application components.
A WebLogic Domain can have multiple Managed Servers, each of which can run on a
different machine
BIACM and FSM are deployed on the Managed Server bi_server1:
http://localhost:9704/biacm
ODI Console and Agent are deployed on the Managed Server odi_server1:
http://localhost:15001/odiconsole
Node Manager
A daemon process that provides remote start, stop, restart and monitoring
capabilities for WebLogic processes. Each machine running WebLogic will have
one (and only one) Node Manager process
Oracle BI System Components
The same processes as in OBIEE 10g, with an additional Oracle Process Manager and Notification Server (OPMN) component responsible for remote start/stop/ping of System Components. OPMN can be controlled from the command line or via Enterprise Manager. OPMN is required on every OBIEE machine.
Oracle BI J2EE Components
Analytics: Provides web access
Config Manager: Populates the domain values automatically
FSM: Functional Setup Manager, which populates the domain values automatically and provides a clear checklist of tasks that should be completed before doing a full load
Load Plan Generator: LPG is a utility for generating ODI load plans
ATG Lite:
ODI SDK: The ODI platform includes an SDK that allows developers to write code that performs tasks similar to those done using ODI Studio
biacm: http://<servername>:<port>/biacm
Analytics: http://<hostname>:<port>/analytics

Knowledge Module
Knowledge Modules are at the core of the Oracle Data Integrator Enterprise Edition
architecture.
They make all Oracle Data Integrator Enterprise Edition processes modular,
flexible, and extensible.
Knowledge Modules implement the actual data flows and define the templates for
generating code across the multiple systems involved in each process.
Knowledge Modules are generic, because they allow data flows to be generated
regardless of the transformation rules.
ODI Enterprise Edition provides a comprehensive library of Knowledge Modules,
which can be tailored to implement existing best practices (for example, for highest
performance, for adhering to corporate standards, or for specific vertical know-how).

A knowledge module is a code template containing the sequence of commands necessary to implement a data integration task.
There are different predefined knowledge modules for loading, integration,
checking, reverse-engineering, journalizing, and deploying data services.
All knowledge modules work by generating code to be executed at run time.
KMs can be specified as Global, allowing them to be shared across multiple
projects.
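
The "code template" idea can be shown with a deliberately tiny sketch: the template text and object names below are invented, and a real KM is a sequence of many such templated commands stored in the repository.

```python
from string import Template

# A toy "integration" step: the strategy is fixed, the objects are not.
KM_STEP = Template(
    "INSERT INTO $target ($columns) SELECT $columns FROM $staging"
)

def generate_code(target, staging, columns):
    """Generate executable code from the template for one interface."""
    return KM_STEP.substitute(
        target=target, staging=staging, columns=", ".join(columns)
    )

print(generate_code("W_CUSTOMER_D", "W_CUSTOMER_DS", ["ROW_WID", "NAME"]))
# INSERT INTO W_CUSTOMER_D (ROW_WID, NAME) SELECT ROW_WID, NAME FROM W_CUSTOMER_DS
```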

Reverse: Used for reading table and other object metadata from source databases and importing tables, columns, and indexes into a model
Journalize: Used to record new and changed data within either a single table or view, or a consistent set of tables or views
Load: Used for efficient extraction of data from source databases for loading into a staging area (database-specific bulk unload utilities can be used where available)
Check: Used to validate and cleanse data
Integrate: Used to load data into a target with different strategies, for example, slowly changing dimensions and insert/update strategies
Service: Exposes data in the form of web services

Oracle BI Applications-Specific KMs

New KMs for BI Apps scenarios:
IKM BIApps Oracle Control Append
IKM BIAPPS Oracle Event Queue Delete Append
IKM BIAPPS Oracle Fact Incremental Update
IKM BIAPPS Oracle Incremental Update
IKM BIAPPS Oracle Period Delete Append
IKM BIAPPS Oracle Slowly Changing Dimension
IKM BIAPPS SQL Target Override
Nested IKM BIAPPS Oracle Control Append
Nested IKM BIAPPS Oracle Event Queue Delete Append

LKM BIAPPS SQL to Oracle (Multi Transport)


This KM is designed to load from any ISO-92 compliant database into Oracle.
It reads from the selected database and writes into an Oracle temporary table created dynamically.
It is typically used in most of the Source Dependent Extracts.
It supports extraction from Oracle databases using DB links.
It supports extraction from the SDS schema when available.
It also supports standard JDBC extraction.
A DB link needs to be pre-created manually in the database, and named using the required naming convention, for this to work.
The option can also be overridden using the KM options on the Flow tab.
The interface can be changed to pick up from the SDS schema simply by changing the option in Manage Data Load Parameters, which sets the global variable IS_SDS_DEPLOYED.
Database links are supported for Oracle data sources.
The global parameter ETL_SRC_VIA_DBLINK needs to be set for the DB link to be used.
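
A minimal sketch of choosing among the three extraction paths, driven by the two parameters named above, follows; the decision order is an assumption for illustration, not the KM's documented logic.

```python
def choose_transport(is_sds_deployed, etl_src_via_dblink, source_is_oracle):
    """Pick an extraction path; precedence here is assumed, not documented."""
    if is_sds_deployed:
        return "read from SDS schema"
    if etl_src_via_dblink and source_is_oracle:
        return "extract over pre-created DB link"
    return "standard JDBC extraction"

print(choose_transport(False, True, True))    # extract over pre-created DB link
print(choose_transport(False, False, False))  # standard JDBC extraction
```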

What Is an Agent?
An agent is a runtime component of ODI that orchestrates the integration process.
It is a lightweight Java program that retrieves code from the repository at run time.
At design time, developers generate scenarios from the business rules that they
have designed. The code of these scenarios is then retrieved from the repository by
the agent at run time.
This agent then connects to the data servers and orchestrates the code execution on these servers.
Agents can do one of the following:
Execute objects on demand

Execute according to predefined schedules
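
A toy sketch of that orchestration role follows; the "repository" and "data servers" here are plain Python stand-ins for what, in ODI, are remote components the agent talks to.

```python
repository = {
    "LOAD_CUSTOMERS": [  # scenario name -> ordered (server, command) steps
        ("source_db", "read customer rows"),
        ("target_db", "write customer rows"),
    ],
}
data_servers = {  # the servers do the work; the agent only delegates
    "source_db": lambda cmd: print(f"[source_db] executing: {cmd}"),
    "target_db": lambda cmd: print(f"[target_db] executing: {cmd}"),
}

def run_scenario(name):
    """Fetch the scenario's steps at run time and orchestrate their execution."""
    for server, command in repository[name]:
        data_servers[server](command)

run_scenario("LOAD_CUSTOMERS")
```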

Using the Two Types of Agents


Deploying a Java EE agent in a Java EE Application Server (Oracle WebLogic
Server):
In ODI, define the Java EE agent in Topology Navigator.
In ODI, create the WLS template for the Java EE agent.
Deploy the template directly using WLS Configuration Wizard.
Using a standalone agent:
Launch an Agent.
Display Scheduling Information.
Stop the Agent.
Advantages of Java EE agents over standalone agents:
High availability
Multiple agents, using Coherence
Load balancing
Connection pooling back to repositories

What Is Reverse-Engineering?
Reverse-engineering is an automated process to retrieve metadata to create or update a model in ODI.
For example, RKMs detect the description of tables, columns, data types, constraints, and comments from a database to load the repository.
The Oracle BI Applications ODI repository already contains the relevant source data models, so an RKM needs to be run only to import customized tables from the source system.

Methods for DBMS Reverse-Engineering


Standard reverse-engineering:
Uses Java Database Connectivity (JDBC) features to retrieve metadata, which is
then written to the ODI repository
Requires a suitable driver

Customized reverse-engineering:

Reads metadata from the application/database system repository and then writes this metadata into the ODI repository
Uses a technology-specific strategy, implemented in a Reverse-Engineering
Knowledge Module (RKM)
For each technology, there is a specific RKM that tells ODI how to extract metadata
for that specific technology.
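
Below is a hedged sketch of standard reverse-engineering, with SQLite standing in for a JDBC driver and a dictionary standing in for an ODI model.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

model = {}  # table name -> list of (column name, declared type)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
for (table,) in tables:
    # Ask the driver for column metadata, then record it in the "model".
    columns = conn.execute(f"PRAGMA table_info({table})").fetchall()
    model[table] = [(col[1], col[2]) for col in columns]

print(model)  # {'customers': [('id', 'INTEGER'), ('name', 'TEXT')]}
```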

Variable Steps
Declare Variable step type:
It forces a variable to be taken into account.
Use this step for variables used in transformations or in the topology.
Set Variable step type:
It assigns a value to a variable or increments the numeric value of the variable.
Refresh Variable step type:
It refreshes the value of the variable by executing the defined SQL query.
Evaluate Variable step type:
It compares the variable value with a given value, according to an operator.
You can use another variable in the Value field.
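
As a toy sketch, the four step types can be read as plain operations on a variable object; the refresh query and comparison below are invented for illustration.

```python
import sqlite3

class PackageVariable:
    def __init__(self, name, refresh_sql=None):  # Declare Variable step
        self.name, self.refresh_sql, self.value = name, refresh_sql, None

    def set(self, value):                        # Set Variable step
        self.value = value

    def refresh(self, conn):                     # Refresh Variable step
        self.value = conn.execute(self.refresh_sql).fetchone()[0]

    def evaluate(self, op, other):               # Evaluate Variable step
        return {"=": self.value == other, ">": self.value > other}[op]

conn = sqlite3.connect(":memory:")
var = PackageVariable("ROW_COUNT", "SELECT 42")
var.refresh(conn)
print(var.evaluate(">", 10))  # True -> take the "true" branch in the package
```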
