
(2 Hours)

[Max Marks: 75]

N.B: (1) All questions are compulsory.


(2) Make suitable assumptions wherever necessary and state the assumptions made.
(3) Answers to the same question must be written together.
(4) Numbers to the right indicate marks.
(5) Draw neat labeled diagrams wherever necessary.
Q1

Answer any two of the following:


(a) What are operational databases? Explain the basic characteristics of a data warehouse.
Operational databases are often used for on-line transaction processing (OLTP). They
deal with day-to-day operations such as banking, purchasing, manufacturing,
registration, accounting, etc. These systems typically get data into the database. Each
transaction processes information about a single entity. Following are some examples
of OLTP queries:
What is the price of 2GB Kingston Pen drive?
What is the email address of the president?
The purpose of these queries is to support business operations.
Features of Data warehouse
Subject-oriented:
A data warehouse is organized around major subjects, such as customer, vendor,
product, and sales. It focuses on the modeling and analysis of data rather than day-to-day business operations.
Integrated: A data warehouse is constructed by integrating data from multiple
heterogeneous data sources.
Time variant: A data warehouse is a repository of historical data. It gives the view
of the data for a designated time frame.
Non-volatile: A data warehouse is always a physically separate store of data
transformed from the application data found in the operational environment. Due to
this separation, a data warehouse does not require transaction processing, recovery,
and concurrency control mechanisms.
(b) Describe virtual data warehouse and central data warehouse.
Virtual Data Warehouses
This option provides end users with direct access to multiple operational databases
through middleware tools. That is, it provides on-the-fly data for decision support
purposes. The advantages of this approach are:
Flexibility
No data redundancy
Provides end-users with the most current corporate information
Virtual data warehouses often provide a starting point for organizations to learn what
end users are really looking for.

Central Data Warehouses


It is a single physical repository that contains all data for a specific functional area,
department, division, or enterprise. A central data warehouse may contain information
from multiple operational systems. A central data warehouse contains time-variant data.


The advantages of this approach are:


Security
Ease of management
The disadvantages are:
Performance implications
Expansion is expensive
At times non-reliable
Cost

(c) Explain the various types of additivity of facts with examples.


A fact is something measurable, typically a numeric value that can be aggregated.
Following are the three types of facts:
1. Additive: facts that are additive across all dimensions.
2. Semi-Additive: facts that are additive across some of the dimensions, but not
all.
3. Non-Additive: facts that are not additive across any dimension.
Additive facts are the most useful facts.
In general, facts representing individual transactions are fully additive, although
cumulative totals are semi-additive.
Non-additive facts are usually the result of ratio or other calculations.
Example of Additive Fact: Consider a sales fact table with a Sales_Amount measure
and the dimensions Time, Customer, Item, Location, and Branch.
The Sales_Amount can be summed up over all of the dimensions (Time, Customer,
Item, Location, Branch).
Example of Semi-Additive Fact: Suppose a bank stores the current balance of each
account at the end of each day, giving a Current_Balance measure with Date and
Account dimensions.
The balance cannot be summed up across the Time dimension; it does not make sense
to sum the current balance by date.
Example of Non-Additive Fact: Consider a fact table with a Price measure and the
dimensions Time, Customer, Item, Location, and Branch.
The Price cannot be summed up across any dimension.
Percentages and ratios are non-additive.
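The three kinds of additivity can be sketched in a few lines of Python. This is an illustrative example only (the rows, column layout, and values are made up, not taken from OWB):

```python
# Illustrative sketch: how additive, semi-additive and non-additive
# facts behave under aggregation. Rows are (date, item, sales_amount,
# current_balance, price) -- all hypothetical values.
rows = [
    ("2005-01-01", "pen",  100, 500, 10),
    ("2005-01-02", "pen",  200, 700, 10),
    ("2005-01-01", "book",  50, 300, 25),
]

# Additive fact: Sales_Amount may be summed over every dimension.
total_sales = sum(r[2] for r in rows)

# Semi-additive fact: Current_Balance sums across accounts on ONE day,
# but summing it across dates is meaningless.
balance_jan1 = sum(r[3] for r in rows if r[0] == "2005-01-01")

# Non-additive fact: Price cannot be summed; average it instead.
avg_price = sum(r[4] for r in rows) / len(rows)
print(total_sales, balance_jan1, avg_price)  # 350 800 15.0
```

Note that the only safe aggregate for the non-additive Price is something like an average or a count, never a plain sum.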
(d) Explain star schema model with the help of diagram.


The relational implementation of the dimensional model is done using the star schema. It
represents multidimensional data. A star schema consists of a central fact table
containing measures and a set of dimension tables. In the star schema model, the fact table
is at the center of the star and the dimension tables are the points of the star. A star schema
represents one central set of facts. The dimension tables contain descriptions about each
of the aspects. For example, in a warehouse that stores sales data, a sales fact
table stores facts about sales while dimension tables store data about locations, clients,
items, times, and branches.
Examples of sales facts are unit sales, dollar sales, sale cost etc. Facts are numeric
values which enable users to query and understand business performance metrics by
summarizing data. The primary key in each dimension table is related to a foreign key
in the fact table.
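A minimal star schema can be demonstrated with Python's built-in sqlite3 module. The table and column names below are illustrative, not from the question; the point is the fact table's foreign keys into the dimension tables and the typical join-and-summarize query:

```python
import sqlite3

# A minimal star-schema sketch: one fact table with foreign keys to
# two dimension tables (names and data are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_item     (item_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_location (loc_id  INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE fact_sales (
    item_id INTEGER REFERENCES dim_item(item_id),
    loc_id  INTEGER REFERENCES dim_location(loc_id),
    unit_sales   INTEGER,
    dollar_sales REAL
);
INSERT INTO dim_item     VALUES (1, 'Pen drive'), (2, 'Keyboard');
INSERT INTO dim_location VALUES (10, 'Mumbai'), (20, 'Pune');
INSERT INTO fact_sales   VALUES (1, 10, 5, 50.0), (1, 20, 3, 30.0),
                                (2, 10, 2, 80.0);
""")

# A typical star query: join the fact to a dimension and summarize.
rows = con.execute("""
    SELECT i.name, SUM(f.dollar_sales)
    FROM fact_sales f JOIN dim_item i ON f.item_id = i.item_id
    GROUP BY i.name ORDER BY i.name
""").fetchall()
print(rows)  # [('Keyboard', 80.0), ('Pen drive', 80.0)]
```

The primary key of each dimension table appears as a foreign key in the fact table, exactly as described above.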

Q2

Answer any two of the following:


(a) What is a listener? How is it configured?
Listener
In Oracle, all network connections are made through the listener. The listener is a
named process which runs on the Oracle server. The listener process runs constantly in
the background on the database server computer, waiting for requests from clients to
connect to the Oracle database. It receives connection requests from the client and
manages the traffic of these requests to the database server.
Configuring Listener
Run Net Configuration Assistant to configure a listener.
Step 1
The first screen is a welcome screen. Select Listener Configuration option from it
and then click next button.
Step 2
The second screen allows you to add, reconfigure, delete or rename a listener.
Choose Add from the given option to configure a new listener and click next.
Step 3
The third screen asks you to enter a name for the listener.
The default name is LISTENER.
Enter a new name or continue with the default and then click next button to proceed.


Step 4
The fourth screen is the protocol selection screen.
By default the TCP protocol is selected in this screen.
TCP is the standard communication protocol for internet and most local networks
Select the protocol and click next.
Step 5
The fifth and final screen asks for the TCP/IP port number for the listener to run on.
The default port number is 1521; continue with the default port number.
It will ask if we want to configure another listener. Select No to finish the listener
configuration.
(b) What is design center? Explain the functions of project explorer and connection
explorer windows.

Design Center
The Design Center is the main graphical interface used for the logical design of the data
warehouse. Through Design Center we define our sources and targets and design our
ETL processes to load the target from the source. The logical design will be stored in
a workspace in the Repository on the server.
Project Explorer
Through the Project Explorer window we can create objects that are relevant to our
project. It has nodes for each of the design objects we'll be able to create. We need to
design an object under the Databases node to model the source database. If we expand
the Databases node in the tree, we will notice that it includes both Oracle and
Non-Oracle databases. It also has an option to pull data from flat files. The Project
Explorer can also be used for defining the target structure.
Connection Explorer
The Connection Explorer is where the connections are defined to our various objects in
the Project Explorer. The workspace has to know how to connect to the various
databases, files, and applications we may have defined in our Project Explorer. As we
begin creating modules in the Project Explorer, it will ask for connection information
and this information will be stored and be accessible from the Connection Explorer
window. Connection information can also be created explicitly from within the
Connection Explorer.
(c) Explain OWB components and architecture with diagram.
Following are the client side components:
Design Center
Repository Browser.
Following are the server side components:
Control Center Service
Repository
Target Schema.

The Design Center is the primary graphical user interface for designing a logical design
of the data warehouse.
Design Center is used to :
import source objects
design ETL processes
define the integration solution.
The Control Center Manager is a part of the Design Center. It manages communication
between the target schema and the Design Center. As soon as you define a new object in the
Design Center, the object is listed in the Control Center Manager under its deployment
location. The design objects are stored as metadata in a centralized repository known
as workspace. This is where all of the design information is stored for the target systems
you are creating. The Repository Browser is another user interface used to browse
design metadata. The Target Schema is where OWB will deploy the object to, and
where the execution of ETL processes that load our data warehouse will take place. It
contains the objects that were designed in the Design Center, as well as the ETL code to
load those objects.
(d) Explain the various steps involved in installing oracle database software.
Download the appropriate install file from Oracle web site
Unzip the install files into a folder to begin the installation
Run the setup.exe file from that folder to launch the Oracle Universal Installer
program (OUI) to begin the installation
Step 1 (configure security updates)
Asks for your email address and Oracle support password to configure security updates.
Step 2 (specify installation options)
The following are the installation options:
create and configure a database
install database software only
upgrade an existing database
Step 3 (install type)
Here you can select the type of installation you want to perform.
The following are the installation types:
Single Instance database installation
Real Application Clusters (RAC) database installation
Step 4 (language)
Select the language in which your product will run.
Step 5 (product edition)
You can choose the edition of the database to install: Enterprise, Standard, Standard
Edition One, or Personal Edition.
Step 6 (installation location)
This step asks you to specify the installation location for storing Oracle configuration
files and software files.
Step 7 (prerequisite checks)
In this step Oracle checks the environment to see whether it meets the
requirements for a successful installation. The prerequisite checks include checks of the
operating system, physical memory, swap space, network configuration, etc.
Step 8 shows the installation summary.
Step 9 (install product)
The actual installation happens in step 9. A progress bar proceeds to the right as the
installation happens, and the steps Prepare, Copy Files, and Setup Files are checked
off as they are done.
Step 10 shows the success or failure of the database installation.

Q3

Answer any two of the following:


(a) What is a target schema? How is a target module created?
Target schema


A target schema contains the data objects that contain your data warehouse data. The
target schema is going to be the main location for the data warehouse. When we talk
about our "data warehouse" after we have it all constructed and implemented, the target
schema is what we will be referring to. You can design a relational target schema or a
dimensional target schema. Every target module must be mapped to a target schema.

Creation of target module


Launch Design Center.
Right-click on the Databases | Oracle node in the Project Explorer.
Select New... from the pop-up menu.
The Welcome screen appears. Click Next.
Step 1
Enter the module name and select the module status as Development.
Select the module type as Data Warehouse Target.
Step 2 (connection information)
Specify the location name, user name, password, host, port, and service name.
Click Finish.
(b) What is time dimension? Discuss various steps involved in creating a time dimension
using time dimension wizard.
The Time/Date dimension provides the time series information to describe warehouse
data. Most of the data warehouses include a time dimension. Also the information it
contains is very similar from warehouse to warehouse. It has levels such as days, weeks,
months, etc. The Time dimension enables the warehouse users to retrieve data by time
period.

Creation of Time dimension


Launch Design Center.
Expand the Databases node under the project where you want to create a Time
dimension.
Then right-click on the Dimensions node, and select New | Using Time Wizard... to
launch the Time Dimension Wizard.
The first screen is a welcome screen which shows various steps involved in creation
of a time dimension.
Step 1: Provide a name and description
Step 2: Set the storage type
Step 3: Define the range of data stored in the time dimension
Step 4: Choose the levels
Step 5: Summary of the Time dimension before creation of the Sequence and Map
Step 6: Progress status
(c) Explain various characteristics of a dimension.
A dimension has the following four characteristics:
Levels
Dimension Attributes
Level Attributes
Hierarchies
Levels
Each dimension has one or more levels. It defines the levels where aggregation takes
place or data can be summed. The OWB supports the following levels for Time
dimension: day, fiscal week, calendar week, fiscal month, calendar month, fiscal
quarter, calendar quarter, fiscal year and calendar year
Dimension Attributes
Attributes are the actual data items stored in the dimension; an attribute can be found
at more than one level. For example, the time dimension has the following attributes in
each level: id (identifies that level), start and end date (designate the time period of that
level), time span (number of days in the time period), description, etc.
Level Attributes
Each level has Level Attributes associated with it that provide descriptive information
about the value in that level. For example, Day level has level attributes such as day of
week, day of month, day of quarter, day of year etc.
Hierarchies
A hierarchy is composed of certain levels in order. There can be one or more hierarchies
in a dimension. Month, quarter, and year can form a hierarchy. The data can be viewed at
each of these levels, and the next level up would simply be a summation of all the
lower-level data within that period.
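Rolling data up such a hierarchy is just repeated summation, which the following sketch illustrates (the month names and values are hypothetical):

```python
# Illustrative sketch: rolling monthly totals up a
# Month -> Quarter -> Year hierarchy. Each higher level is a
# summation of the lower-level data within its period.
monthly = {"Jan": 100, "Feb": 120, "Mar": 80,
           "Apr": 90,  "May": 110, "Jun": 100}
quarters = {"Q1": ["Jan", "Feb", "Mar"], "Q2": ["Apr", "May", "Jun"]}

# Quarter level: sum the months belonging to each quarter.
quarterly = {q: sum(monthly[m] for m in months)
             for q, months in quarters.items()}
# Year level: sum the quarters.
yearly = sum(quarterly.values())
print(quarterly, yearly)  # {'Q1': 300, 'Q2': 300} 600
```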
(d) Write notes on the following:


i) Slowly changing dimension
ii) Surrogate keys
i) Slowly changing dimension
Dimensions that change slowly over time are known as slowly changing
dimensions. These changes need to be tracked in order to report historical data. The
OWB allows the following options for slowly changing dimensions.
Type 1 - Do not keep a history. This means we basically do not care what the old
value was and just change it.
Type 2 - Store the complete change history. This means we definitely care about
keeping that change along with any change that has ever taken place in the
dimension.
Type 3 - Store only the previous value. This means we only care about seeing what
the previous value might have been, but don't care what it was before that.
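The three options can be contrasted with a small sketch of a customer whose city changes. The record structures below are illustrative only, not OWB's internal representation:

```python
# Hedged sketch of the three slowly-changing-dimension options for a
# customer moving from Mumbai to Pune (hypothetical data).
row = {"cust_id": 1, "name": "Asha", "city": "Mumbai"}

# Type 1: overwrite in place; no history is kept.
type1 = dict(row, city="Pune")

# Type 2: keep the full history as versioned rows with expiry dates.
type2 = [
    dict(row, version=1, expired="2010-06-30"),
    {"cust_id": 1, "name": "Asha", "city": "Pune",
     "version": 2, "expired": None},
]

# Type 3: keep only the previous value in an extra column.
type3 = dict(row, city="Pune", previous_city="Mumbai")
print(type1["city"], len(type2), type3["previous_city"])
```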
ii) Surrogate keys
Surrogate keys are artificial keys that are used as a substitute for source system
primary keys. They are generated and maintained within the data warehouse. A
Surrogate Key is a NUMBER type column and is generated using a Sequence. The
management of surrogate keys is the responsibility of the data warehouse. The
Surrogate Keys are used to uniquely identify each record in a dimension. The source
tables have columns such as AIRPORT_NAME or CITY_NAME which are stated
as the primary keys (according to the business users), but these can change, so you
could consider creating a surrogate key called, say, AIRPORT_ID. This would be
internal to the warehouse system, and as far as the client is concerned you may
display only the AIRPORT_NAME. Surrogate keys are numeric values and hence
indexing is faster.
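The assignment of surrogate keys from a sequence can be sketched as follows, using the AIRPORT_NAME example from the text (the function and city names are hypothetical; itertools.count stands in for an Oracle SEQUENCE):

```python
import itertools

# Sketch: generating surrogate keys from a sequence as a substitute
# for the source system's natural key.
seq = itertools.count(1)     # plays the role of an Oracle SEQUENCE
key_of = {}                  # natural key -> surrogate key

def surrogate(airport_name):
    # Assign a new numeric key the first time a natural key is seen;
    # return the existing key on later lookups.
    if airport_name not in key_of:
        key_of[airport_name] = next(seq)
    return key_of[airport_name]

print(surrogate("Mumbai"))   # 1
print(surrogate("Delhi"))    # 2
print(surrogate("Mumbai"))   # 1  (same natural key -> same surrogate)
```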

Q4

(a) What is ETL? Explain the importance of source target map.


ETL stands for extract, transform, and load. The ETL process transforms the data from
an application-oriented structure into a corporate data structure. Once the source and
target structures are defined, we can move on to the following activities in constructing a
data warehouse:
Work on extracting data from sources
Perform any transformations on the data
Load into target data warehouse structure
The data warehouse architect builds a source-to-target data map before ETL
processing starts. The source-target map specifies:
what data must be placed in the data warehouse environment
where that data comes from (known as source or system of record)
the logic or calculation or data reformatting that must be done to the data.

The data mapping is the input needed to feed the ETL process. Mappings are visual
representations of the flow of data from source to target and the operations that need to
be performed on the data.
(b) What is staging? What are its benefits? Explain the situation where staging is essential.
Staging
Staging is the process of copying the source data temporarily into tables in the target
database. The purpose is to perform any cleaning and transformations before loading
the source data into the final target tables. Staging stores the results of each logical step
of transformation in staging tables. The idea is that in case of any failure you can restart
your ETL from the last successful staging step.
Staging makes sense in the following cases:
a large amount of data to load
many transformations to perform on that data while loading
pulling data from non-Oracle databases
This process will take a lot longer if we directly access the remote database to pull and
transform data. We'll also be doing all of the manipulations and transformations in
memory, and if anything fails, we'll have to start all over again.
Benefits
Source database connection can be freed immediately after copying the data to the
staging area. The formatting and restructuring of the data happens later with data in
the staging area.
If the ETL process needs to be restarted, there is no need to go back to disturb the
source system to retrieve the data.

(c) Briefly explain the functions of filter and joiner operators.


Filter
This operator limits the rows from an output set to criteria that we specify.
It is generally implemented as a WHERE clause in SQL to restrict the rows that are
returned.
We can connect a filter to a source object, specify the filter criteria, and get only
those records that we want in the output.
It has Filter Condition property to specify the filter criteria.
Joiner
This operator implements a SQL join on two or more input sets of data
and produces a single output row set.
That is, it combines data from multiple input sources into one.
A join takes records from one source and combines them with the records from
another source using some combination of values that are common between the
two.
It has a property called Join Condition through which you can specify the
criterion for the join.
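What the two operators do to row sets can be sketched in plain Python (the tables, columns, and conditions below are made up for illustration and are not OWB code):

```python
# Sketch of the Filter and Joiner operators as pure-Python stand-ins.
orders = [
    {"order_id": 1, "cust_id": 10, "amount": 250},
    {"order_id": 2, "cust_id": 20, "amount": 40},
    {"order_id": 3, "cust_id": 10, "amount": 90},
]
customers = [{"cust_id": 10, "name": "Asha"},
             {"cust_id": 20, "name": "Ravi"}]

# Filter: keep only rows matching the filter condition
# (like "WHERE amount > 50" in SQL).
filtered = [o for o in orders if o["amount"] > 50]

# Joiner: combine rows from two inputs on a common value
# (join condition: orders.cust_id = customers.cust_id).
joined = [dict(o, **c) for o in filtered
          for c in customers if o["cust_id"] == c["cust_id"]]
print([(r["order_id"], r["name"]) for r in joined])
# [(1, 'Asha'), (3, 'Asha')]
```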
(d) What are data flow operators? Explain the concept of pivot operator with example.
Data flow operators
A data warehouse requires restructuring of the source data into a format that is
congenial for the analysis of data. The data flow operators are used for this purpose.
These operators are dragged and dropped into our mapping between our sources and
targets. Then they are connected to those sources and targets to indicate the flow of data
and the transformations that will occur on that data as it is being pulled from the source
and loaded into the target structure.
Pivot
The pivot operator enables you to transform a single row of attributes into multiple
rows. Suppose we have source records of sales data for the year that contain a column

for each quarter of the year. But we need to save that information by quarter, and not
by year. So taking a simple example as follows:
YEAR   Q1_SALES   Q2_SALES   Q3_SALES   Q4_SALES
2005   10000      15000      14000      25000
we wish to transform the data set to the following, with a row for each quarter:
YEAR   QTR   SALES
2005   Q1    10000
2005   Q2    15000
2005   Q3    14000
2005   Q4    25000
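The pivot transformation in the example above amounts to turning one wide row into four narrow rows, which can be sketched as:

```python
# Sketch of the pivot transformation: one row with a column per
# quarter becomes one row per quarter (values from the example).
source = {"YEAR": 2005, "Q1_sales": 10000, "Q2_sales": 15000,
          "Q3_sales": 14000, "Q4_sales": 25000}

pivoted = [(source["YEAR"], qtr, source[f"{qtr}_sales"])
           for qtr in ("Q1", "Q2", "Q3", "Q4")]
for row in pivoted:
    print(row)
# (2005, 'Q1', 10000)
# (2005, 'Q2', 15000)
# (2005, 'Q3', 14000)
# (2005, 'Q4', 25000)
```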
Q5

Answer any two of the following:


(a) What is the purpose of main attribute group in a cube? Discuss the dimension
attributes and measures in the cube.

The first group represents main attributes for the cube and contains data elements to
which we will need to map. Other groups represent the dimensions that are linked to
the cube. As far as the dimensions are concerned we make separate map for them prior
to cube mapping. The data we map for the dimensions will be to attributes in the main
cube group, which will indicate to the cube which record is applicable from each of the
dimensions.
Cube has attributes for surrogate and business identifiers defined for each dimension
of the cube.
All business identifiers are prefixed with the name of the dimension
The name of a dimension is used as the surrogate identifier for that dimension.
Say for example, if SKU and NAME are two business identifiers in PRODUCT
dimension, then the main attribute group will have three PRODUCT related
identifiers; PRODUCT_SKU, PRODUCT_NAME, PRODUCT.
Apart from surrogate and business identifiers, the main attribute group also contains
the measures we have defined for the cube.
(b) What is expression operator? Explain the mapping of a date field SALE_DATE to a
numeric field DAY_CODE by applying the TO_CHAR() and TO_NUMBER() functions
through the expression operator. The string format for the TO_CHAR() function is
'YYYYMMDD'.
The expression operator represents an SQL expression that can be applied to the output
to produce the desired result. Any valid SQL code for an expression can be used, and
we can reference input attributes to include them as well as functions.
Drag the Expression operator onto the mapping.
It has two groups defined: an input group, INGRP1, and an output group,
OUTGRP1.
Link the SALE_DATE attribute of source table to the INGRP1 of the
EXPRESSION operator.
Right-click on OUTGRP1 and select Open Details... from the pop-up menu.
This will display the Expression Editor window for the expression.
Click on the Output Attributes tab and add a new output attribute OUTPUT1 of
number type and click OK.
Click on the OUTPUT1 output attribute in the EXPRESSION operator and turn our
attention to the Property window of the Mapping Editor.
The Property window shows Expression as its first property.
Click the blank space after the label Expression.
This shows a button with three dots.

Click on this button to open the Expression Builder.


Through the Expression Builder enter the following expression:
TO_NUMBER(TO_CHAR(SALE_DATE, 'YYYYMMDD'))
Click on OK to close the Expression Builder.
Link the output attribute of the expression to the DAY_CODE attribute of the target
operator.
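The effect of the expression can be checked outside Oracle with a small Python equivalent: formatting the date as a YYYYMMDD string and converting it to a number, just as TO_CHAR followed by TO_NUMBER does (the function name day_code is our own):

```python
from datetime import date

# Python equivalent of TO_NUMBER(TO_CHAR(SALE_DATE, 'YYYYMMDD')):
# format the date as an 8-digit string, then convert it to an int.
def day_code(sale_date):
    return int(sale_date.strftime("%Y%m%d"))

print(day_code(date(2005, 3, 17)))  # 20050317
```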
(c) Explain the concept of validating and generating objects.

Validating Objects
The process of validation is all about making sure the objects and mappings we've
defined in the Warehouse Builder have no obvious errors in design.
Oracle Warehouse Builder runs a series of validation tests to ensure that data object
definitions are complete and that scripts can be generated and deployed.
When these tests are complete, the results are displayed.
Oracle Warehouse Builder enables you to open object editors and correct any invalid
objects before continuing.
Validating objects and mapping can be done with the help of Design Center.
Validation of repository objects can be done with the help of Data Object Editor.
Validation of mapping can be done through Mapping Editor.
Generating Objects
Generation deals with creating the code that will be executed to create the objects
and run the mappings.
With the generation step in the Warehouse Builder, we can generate the code that
we need to use to build and load our data warehouse.
The objects (dimensions, cubes, tables, and so on) will have SQL Data Definition
Language (DDL) statements produced, which when executed will build the
objects in the database.
The mappings will have PL/SQL code produced that, when run, will load the
objects.
Like validation, generation also can be done with the help of Data Object Editor and
Mapping Editor.
(d) What is object deployment? Explain the functions of control center manager.

Deployment is the process of creating physical objects in the target schema based
on the logical definitions created using the Design Center.
The process of deploying is where the database objects are actually created and
PL/SQL code is actually loaded and compiled in the target database.
During initial stages of design no physical objects have been created in the target
schema.
The operations such as importing metadata for tables, defining objects, mappings,
and so forth are performed with the OWB Design Center client.
These objects are created as Warehouse Builder repository objects.
So for the actual deployment of object in the target database, we have to use
Control Center Service, which must be running for the deployments to function.
The Design Center creates a logical design of the data warehouse.
The logical design will be stored in a workspace in the Repository on the server.
The Control Center Manager is used for the creation of physical objects into the
target schema by deploying the logical design.
The Control Center Manager is used to execute the design by running the code
associated with the ETL that we have designed.
The Control Center Manager interacts with the Control Center Service, which runs
on the server.
The Target Schema is where OWB will deploy the object to, and where the
execution of the ETL processes that load our data warehouse will take place.

Q6

Answer any two of the following:


(a) What is recycle bin? Describe the features of warehouse builder recycle bin window.

The Recycle Bin in OWB is similar to recycle bin in operating systems.


OWB keeps deleted objects in the Recycle Bin.
The deleted objects can be restored from the Recycle Bin.
To undo a deletion, select an object from the Recycle Bin and click Restore.
If the Put in Recycle Bin check box is checked while deleting, then the object will be
sent to the Recycle Bin.
Warehouse Builder Recycle Bin
The Warehouse Builder Recycle Bin window can be opened by clicking on the Tools
menu and selecting the Recycle Bin option.
This window has a content area which shows all deleted objects.
The content is shown with Object Parent as well as Time Deleted information.
Object Parent means the project from which the object was deleted, and Time
Deleted is when we deleted the object.
Below the content area it has two buttons:
One for restoring a deleted object
Another for emptying the content of recycle bin
(b) Explain data sparsity and data explosion.
Data sparsity refers to a multidimensional cube in which a large proportion of the cells
are empty, because not every combination of dimension values has a corresponding
fact (for example, not every item is sold in every store on every day).
Data explosion is the rapid growth in the size of a cube that occurs when sparse data
is pre-aggregated: as dimensions and levels are added, the number of possible cell
combinations grows multiplicatively, so the stored summary data can become many
times larger than the input data.
(c) What is a snapshot? Explain full snapshot and signature snapshot.
Snapshot
A snapshot is a point in time version of an object.
The snapshot of an object captures all the metadata information about that object at
the time when the snapshot is taken.
It enables you to compare the current object with a previously taken snapshot.
Since objects can be restored from snapshots, snapshots can be used as a backup mechanism.
Full Snapshots :
Full snapshots provide complete metadata of an object that you can use to restore it
later.
So it is suitable for making backups of objects.
Full snapshots take longer to create and require more storage space than
signature snapshots.
Signature Snapshots:
It captures only the signature of an object.
A signature contains enough information about the selected metadata component to
detect changes when compared with another snapshot or the current object
definition.
Signature snapshots are small and can be created quickly.
(d) Explain the export feature of Metadata Loader.
Workspace objects can be exported and saved to a file. We can export anything
from an entire project down to a single data object or mapping. Following are the
benefits of Metadata Loader exports and imports:
Backup
To transport metadata definitions to another repository for loading
If we choose an entire project or a collection such as a node or module, it will export
all objects contained within it. If we choose any subset, it will also export the context
of the objects so that it will remember where to put them on import.
Say for example if you export a table, the metadata also contains the definition for:
the module in which it resides
the project the module is in.
We can also choose to export any dependencies on the object being exported, if they
exist. To export an object, select the object to be exported and click on Design |
Export | Warehouse Builder Metadata from the main menu.

Q7

Answer any two of the following:


(a) Write any five significant differences between OLTP database and Data warehouse
database.
OLTP database | Data warehouse database
Application oriented | Subject oriented
Detailed data | Summarized and refined
Designed for real-time business transactions and concurrent processes | Designed for analysis of business
Isolated data | Integrated data
Repetitive access | Ad-hoc access
Performance sensitive | Performance relaxed
Few records accessed at a time | Large volumes accessed at a time
Optimized for a common and known set of transactions, usually of an intensive nature (addition, updation, and deletion of rows per table) | Optimized for bulk loads and complex, unpredictable queries that access many rows per table
Database size: 100 MB to 100 GB | Database size: 100 GB to a few terabytes
Very minimal historical data | Current as well as historical data

(b)
What are the hardware and software requirements for installing oracle warehouse
builder?
Following are the Databases which support OWB:
Oracle Database 12c R1 Standard Edition
Oracle Database 12c R1 Enterprise Edition
Oracle Database 11g R2 Standard Edition
Oracle Database 11g R2 Enterprise Edition
Oracle Database 11g R1 Standard Edition
Oracle Database 11g R1 Enterprise Edition
The enterprise edition of the database gives you the power to use full features of the
data warehouse. But the standard edition does not support all warehouse features. The
OWB with the standard edition allows you to deploy only limited types of objects.
Hardware Requirements
Intel Core 2 duo or higher processor
1 GB RAM
10GB to 15GB Hard disk space
Operating System Requirements
It supports Unix and Windows platforms.
Windows support: Windows Vista (Business/Enterprise/Ultimate Edition),
Windows XP, Windows Server 2003, Windows 7.
(c) Explain multidimensional implementation of data warehouse.
In a relational implementation, data is organized into dimension tables, fact tables, and
materialized views. A multidimensional implementation requires a database with
special features that allow it to store cubes as actual objects in the database. It also
provides advanced calculation and analytic content built into the database to facilitate
advanced analytic querying. These analytic databases are quite frequently used to
build a highly specialized data mart, or a subset of the data warehouse, for a particular
user community. MOLAP uses array-based multidimensional storage instead of a
relational database. MOLAP tools generally utilize a pre-calculated data set referred to
as a data cube. The data required for the analysis is extracted from the relational data
warehouse or other data sources and loaded into a multidimensional database which looks
like a hypercube. A hypercube is a cube with many dimensions.
(d) What are mapping operators? Explain any two source target mapping operators in
detail.
Mapping operators
These are the basic design elements used to construct an ETL mapping. They represent
sources and targets in the data flow, and also specify how to transform the data
from source to target.
Explain about any two source target operators

(e) What are the two ways of validating repository objects in object editor?
(f) Briefly explain various deploy actions of Object Details window.
Following are the two ways to validate an object from the Data Object Editor:
Right-click on the object displayed on the Canvas and select Validate from the
pop-up menu
Select the object displayed on the canvas and then click on the Validate icon from
the toolbar.
Deploy Action: Following are the actions
Create: Create the object; if an object with the same name already exists, this can
generate an error upon deployment
Upgrade: Upgrade the object in place, preserving data
Drop: Delete the object
Replace: Delete and recreate the object; this option does not preserve data
What are the matching strategies for synchronizing workspace objects with their
mapping operators? Explain inbound and outbound synchronization.
Inbound synchronization uses the specified workspace object to update the matching
operator in our mapping. It means that changes in the workspace object will be
reflected in the mapping operator.
Outbound synchronization updates the workspace object with the changes we've made
to the operator in the mapping. It means that changes in the mapping operator will be
reflected in the workspace object.
Following are the three matching strategies
Match by Object Identifier
Each source attribute is identified with a uniquely created ID internal to the
Warehouse Builder metadata. The unique ID stored in the operator for each attribute is
exactly same as that of the corresponding attribute in the workspace object to which the
operator is synchronized with. This matching strategy compares the unique object
identifier of an operator attribute with that of a workspace object.
Match by Object Name
This strategy matches the bound names of the operator attributes to the physical names
of the workspace object attributes.
Match by Object Position
This strategy matches operator attributes with attributes of the selected workspace
object by position. The first attribute of the operator is synchronized with the first
attribute of the workspace object, the second with the second, and so on.
